This paper investigates the creation of non-photorealistic illustrations
from a type of data lying between simple 2D images and full
3D models: images with both a color (albedo) and a surface normal
stored at each pixel. Images with normals combine an acquisition
process only mildly more complex than that for digital photographs
(and significantly easier than 3D scanning) with the power and flexibility
of tools similar to those originally developed for full 3D models.
We investigate methods for signal processing on images with
normals, developing algorithms for scale-space analysis, derivative
(i.e., curvature) estimation, and segmentation. These are used to
implement analogues of stylized rendering techniques such as toon
shading, line drawing, curvature shading, and exaggerated shading.
We also introduce new stylization effects based on multiscale mean
curvature shading, as well as fast discontinuity shadows. We show
that our rendering pipeline can produce detailed yet understandable
illustrations in medical, technical, and archaeological domains.
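To give a flavor of how such stylizations can operate directly on per-pixel normals, the following is a minimal illustrative sketch of toon shading (quantized diffuse lighting) applied to an image with normals; it is not the paper's implementation, and the function name `toon_shade`, the array layout, and the parameters are assumptions made for this example.

```python
import numpy as np

def toon_shade(albedo, normals, light_dir=(0.0, 0.0, 1.0), bands=4):
    """Quantized diffuse shading computed from a per-pixel normal map.

    albedo:  (H, W, 3) float array of per-pixel surface colors.
    normals: (H, W, 3) float array of unit surface normals.
    """
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)

    # Diffuse term n . l at each pixel, clamped to [0, 1].
    diffuse = np.clip(np.einsum('ijk,k->ij', normals, l), 0.0, 1.0)

    # Quantize the shading into a few discrete bands for a cartoon look.
    quantized = np.ceil(diffuse * bands) / bands

    return albedo * quantized[..., None]
```

Because the normals are stored per pixel, shading-based styles like this one need no 3D geometry at all, which is the key practical appeal of images with normals described above.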