Dissection via Cryosection
The ability to distinguish different human
organs in cryosection images is important:
it lets anatomy students
demonstrate how well they understand
human anatomic structure from different
perspectives. At the Sichuan Continuing
Education College of Medical Sciences
in China, Lin Yang has been teaching
her students the anatomic structure using
cryosection images since June 2004.
Drawing on their anatomy knowledge,
the students identified pixels belonging
to the same body part on different
cryosection images and filled these pixels
with a designated color. In time, they
had two sets of images: the original
cryosection images showing natural colors
and the colored images showing
each organ in a different designated
color. Both sets have a 3,072 × 2,048-pixel
resolution, and each uncompressed
image uses roughly 18 Mbytes of storage
space. The thickness of the slices
corresponding to the cryosection images
varies from 0.1 mm to 1.0 mm.
We designed VHASS to extract two
sets of pixels from the colored images.
The first set includes the boundary
pixels for different organ parts,
whereas the second includes the parts’
interior pixels. After the extraction, the
system separates different organ parts
using the following process.
First, we align the cryosection images
and the colored images as two corresponding
stacks according to their original
3D cross-section relations; thus,
each image has a z coordinate determined
by the slice thicknesses and defines
a 2D plane in 3D space in which each pixel
has (x, y, z) coordinates. The cryosection
image stack and the colored image stack
are therefore two sets of volume data,
and each pixel in the first
set has a corresponding pixel in the second.
A color table T includes all artificial
colors used in the colored
images—one color corresponding to
each organ part (see Figure 1).
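To make the alignment concrete, here is a minimal Python/NumPy sketch of the stacking step. The synthetic image arrays, slice thicknesses, and example color-table entries are placeholders we introduce for illustration only; they are not data or code from VHASS.

import numpy as np

def stack_slices(slice_images, thicknesses_mm):
    # Pair each 2D slice with a z coordinate taken from the accumulated thickness,
    # so every pixel on slice k gets (x, y, z_k) coordinates.
    zs, z = [], 0.0
    for dz in thicknesses_mm:
        zs.append(z)
        z += dz                      # slice thickness varies from 0.1 mm to 1.0 mm
    return list(zip(zs, slice_images))

# Synthetic stand-ins for the two image stacks (real slices are 3,072 x 2,048 RGB).
h, w, n = 64, 48, 5
cryo_images    = [np.random.randint(0, 256, (h, w, 3), dtype=np.uint8) for _ in range(n)]
colored_images = [np.random.randint(0, 256, (h, w, 3), dtype=np.uint8) for _ in range(n)]
thicknesses    = [0.1, 0.25, 0.5, 1.0, 0.5]          # mm, one value per slice

cryo_stack    = stack_slices(cryo_images, thicknesses)     # natural-color volume data
colored_stack = stack_slices(colored_images, thicknesses)  # artificially colored volume data

# Color table T: one artificial RGB color per organ part (example values only).
T = {"skin": (255, 224, 189), "brain": (200, 0, 0)}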
Next, we use VHASS to create a set
of structures to store the generated
boundaries and interior parts (volume
data) from the colored images according
to their colors. Then, VHASS uses
a pixel in the colored image as an index
to extract the same pixel in the cryosection
image and retrieve the natural-color information
for storage in the new data structure.

Figure 1. A pixel extraction. The cryosection image at left has a corresponding colored image (right).
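The Python fragment below illustrates this indexing idea under the same assumptions as the previous sketch: the colored pixel selects the organ, and the natural color is copied from the matching cryosection pixel. The function and variable names are ours, not VHASS's.

import numpy as np

def extract_organ_points(colored, cryo, z, organ_color):
    # colored and cryo are H x W x 3 uint8 slices at the same z coordinate;
    # organ_color is one RGB entry from the color table T.
    mask = np.all(colored == organ_color, axis=-1)   # pixels painted with this organ's color
    ys, xs = np.nonzero(mask)
    natural = cryo[ys, xs]                           # natural color at the same pixels
    return [(int(x), int(y), z, *map(int, rgb))
            for x, y, rgb in zip(xs, ys, natural)]

# Hypothetical usage with the stacks built above, collecting one organ's points:
# brain_points = []
# for (z, colored), (_, cryo) in zip(colored_stack, cryo_stack):
#     brain_points.extend(extract_organ_points(colored, cryo, z, T["brain"]))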
After creating the data structures,
VHASS retrieves the tissue boundaries.
Intuitively, the system can extract
boundary pixels by applying edge detection—
a mature and widely used technology1—
to the colored images. We
integrated the open-source IM digital
imaging toolkit (www.tecgraf.puc-rio.br/
im) into our system to extract the boundary
information and used Canny's edge detector1
to perform the edge detection.
The system uses an image’s extracted
boundaries to build 3D surface models of
different organ parts. Figure 2 shows the
extracted boundary slices in 3D.
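As a rough illustration of the boundary-extraction step, the Python sketch below marks a pixel as a boundary pixel when at least one of its 4-neighbors falls outside the organ's mask. This neighbor test is a simplified stand-in we substitute for the Canny detector that VHASS applies through the IM toolkit; it is not the authors' implementation.

import numpy as np

def boundary_mask(mask):
    # Boundary pixels of a binary organ mask: pixels that are inside the mask
    # but have at least one 4-neighbor outside it (or outside the image).
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

# Hypothetical usage on one colored slice: the boundary pixels of one organ become
# 3D surface points at that slice's z coordinate.
# mask = np.all(colored == T["brain"], axis=-1)
# ys, xs = np.nonzero(boundary_mask(mask))
# surface_points = [(int(x), int(y), z) for x, y in zip(xs, ys)]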
After extracting the tissue boundaries,
we retrieve all interior tissues and
save them separately. Extracting interior
pixels is much simpler than extracting
boundary pixels—for each
color in the color table T, we simply
convert all matching pixels into 3D
points. However, the extracted
output requires extensive memory.
In our experiment, the interior
points of human skin from the first 200
processed images required more than
600 Mbytes of storage space. Although
adding memory is a simple
solution, a better one is to optimize
our system so that it runs well on
machines with different configurations.
Thus, our system dynamically allocates
memory based on the current task
and has successfully extracted points (3D pixels)
totaling more than 2.5 Gbytes on a
machine with 512 Mbytes of memory.
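One simple way to keep memory bounded, in the spirit of this per-task allocation, is to process the stacks one slice at a time and stream the extracted points to disk. The Python sketch below is our own illustration of that idea and makes no claim about how VHASS actually manages its memory.

import numpy as np

def extract_interior_to_disk(slice_pairs, organ_color, out_path, flush_points=1_000_000):
    # slice_pairs yields (z, colored, cryo) triples one slice at a time, so resident
    # memory stays bounded by one slice plus a small point buffer.
    buffer, buffered = [], 0
    with open(out_path, "wb") as out:
        for z, colored, cryo in slice_pairs:
            mask = np.all(colored == organ_color, axis=-1)
            ys, xs = np.nonzero(mask)
            pts = np.column_stack([xs, ys, np.full(len(xs), z)]).astype(np.float32)
            buffer.append(pts)
            buffered += len(pts)
            if buffered >= flush_points:
                np.concatenate(buffer).tofile(out)   # flush the point buffer to disk
                buffer, buffered = [], 0
        if buffer:
            np.concatenate(buffer).tofile(out)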
During extraction, we found that
some parts in the colored images had
slightly bigger or smaller areas than the
actual organ parts on the cryosection images.
This was due to human error—the
colored images that are the basis for feature
extraction might not have been segmented
exactly along the actual organ boundaries,
resulting in incorrect retrieval. This
problem was most obvious for the outer-boundary
pixels of human skin: on the
colored image, these pixels sometimes
ended up outside the body. To solve this
problem, we discard pixels on a colored
image if the corresponding pixels on
cryosection images have the background
color. We must do more work in this
area to improve retrieval accuracy.
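A minimal version of this background check, in Python and under the assumption that the cryosection background is nearly black, might look like the fragment below; the background color and tolerance are our placeholders, not values from the article.

import numpy as np

def drop_background_errors(organ_mask, cryo, background_rgb=(0, 0, 0), tol=10):
    # Remove colored pixels whose cryosection counterpart is (near-)background:
    # they are treated as coloring errors rather than real organ pixels.
    diff = np.abs(cryo.astype(np.int16) - np.asarray(background_rgb, dtype=np.int16))
    near_background = np.all(diff <= tol, axis=-1)
    return organ_mask & ~near_background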
After the extraction process, our retrieved
3D points correspond to different
tissue parts and have geometry,
volume, and color information. Additionally,
for each separate organ, the system
interpolates between the generated
3D cross-sections—in both the colored
and the cryosection images—to generate
the 3D volume pixels (voxels) that don't lie
on any cross-section, which lets users visualize
interior parts using volume rendering.
Figure 3 shows components of the human
brain in different color volumes.
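The article doesn't spell out the interpolation scheme, but a simple linear blend between the two nearest cross-sections, as in the Python sketch below, conveys the idea of filling in voxels that don't lie on any slice; it is only an assumption about how such interpolation could be done, not the VHASS algorithm.

import numpy as np

def interpolate_slice(slice_a, z_a, slice_b, z_b, z):
    # Linearly blend two adjacent cross-sections (colored or cryosection) to
    # estimate the missing slice at height z, with z_a < z < z_b.
    t = (z - z_a) / (z_b - z_a)
    blended = (1.0 - t) * slice_a.astype(np.float32) + t * slice_b.astype(np.float32)
    return blended.astype(np.uint8)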