Figure 7 illustrates a modified pipeline to implement the described embodiment of the present invention. In
particular, the part circled in Figure 6 is shown in Figure 7 as a modified depth test 230 associated with a primary depth
buffer 232 and a secondary depth buffer 233. In addition, the output from the blending unit 134 is applied to an image
shader 236. A feedback from the output of the image shader is shown as an input to the texture mapping stage 220
(in the form of the shadow mask), where it is utilised in a fragment shader 238 (described later).

Figure 8 is a flow chart illustrating how the shadow mask can be utilised in a rendering procedure to render
an image. Multiple passes are implemented in order to render the final picture, as follows:
a first pass for rendering the shadow map from the current light source;
generation of the shadow mask using the primary and secondary depth buffers;
a second pass for rendering the frame according to this light source from the camera point of view, using the shadow mask.
This process is repeated for each light source.
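The per-light procedure set out above can be sketched in software as follows. The helper names, the one-dimensional buffers and the bias value are illustrative assumptions for the sketch only, and do not form part of the described embodiment:

```python
INF = float("inf")

def render_shadow_map(occluder_depths, width):
    """First pass: depth of the nearest occluder per texel, seen from the light."""
    shadow_map = [INF] * width
    for x, depth in occluder_depths:
        if depth < shadow_map[x]:
            shadow_map[x] = depth
    return shadow_map

def generate_shadow_mask(shadow_map, fragment_light_depths, bias=0.01):
    """Intermediate step: 1 where a fragment is lit, 0 where it is shadowed."""
    return [0 if d > shadow_map[x] + bias else 1
            for x, d in enumerate(fragment_light_depths)]

def shade_second_pass(base_colour, shadow_mask):
    """Second pass (camera point of view): modulate colour by the mask."""
    return [c * m for c, m in zip(base_colour, shadow_mask)]

# One light source: an occluder at texel 1 with depth 2.0; three fragments at
# light-space depths [1.0, 5.0, 3.0]. The fragment at texel 1 lies behind the
# occluder and is therefore masked out.
smap = render_shadow_map([(1, 2.0)], width=3)
mask = generate_shadow_mask(smap, [1.0, 5.0, 3.0])
lit  = shade_second_pass([0.8, 0.8, 0.8], mask)
```

For multiple light sources, the three steps would simply be repeated per light, as the text describes, with the results accumulated into the frame.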
[0041] The first step S1 illustrated in Figure 8 is initialisation of the dual depth buffers 232, 233. The primary depth
buffer 232 is created in unsigned INT representation, for use as the shadow map buffer. The secondary depth buffer
233 is created as a secondary unsigned INT channel. The buffers are initialised as described above. In the subsequent
stage S2, the scene is rendered from the light's point of view and the shadow map is generated in the primary depth
buffer. At step S3, the shadow mask is generated according to the shadow mask generation technique discussed
above. Step S2 constitutes the first rendering pass; step S3 is carried out after the first rendering pass and before the
second rendering pass, described later. After generation of the shadow mask at step S3, the primary depth buffer
is initialised for use as the classic z-buffer at step S4. The shadow map which was held in the primary depth buffer is
moved to another buffer or memory store 240 so that it is not overwritten, because it is needed in the subsequent,
second pass. Next, the camera is defined and the scene is rendered at step S5 from the camera point of view, which
constitutes the second rendering pass. At step S5, the following steps are accomplished for each pixel:
The following data is used in the fragment shader of the second pass.
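Since the concrete per-pixel steps and the shader inputs are detailed elsewhere, the second pass can only be outlined here under assumptions: a classic z-buffer visibility test followed by a fragment shader stage that samples the previously generated shadow mask. All names below are hypothetical placeholders for that assumed outline:

```python
INF = float("inf")

def second_pass_pixel(frag_depth, x, y, zbuffer, shadow_mask, base_colour):
    """Depth-test one fragment against the re-initialised primary depth
    buffer (classic z-buffer), then shade it using the shadow mask as a
    texture input, as the feedback path to the fragment shader suggests."""
    if frag_depth >= zbuffer[y][x]:      # occluded by a nearer fragment
        return None                      # fragment discarded
    zbuffer[y][x] = frag_depth           # classic z-buffer update
    lit = shadow_mask[y][x]             # 1 = lit, 0 = in shadow (assumed)
    return tuple(c * lit for c in base_colour)

# A 1x2 frame: the left pixel is lit, the right pixel is in shadow.
zbuf = [[INF] * 2 for _ in range(1)]
mask = [[1, 0]]
c0 = second_pass_pixel(0.5, 0, 0, zbuf, mask, (0.9, 0.9, 0.9))  # lit
c1 = second_pass_pixel(0.5, 1, 0, zbuf, mask, (0.9, 0.9, 0.9))  # shadowed
```

In the embodiment this logic would run on the GPU, with the shadow mask bound as a texture at the texture mapping stage 220 rather than indexed as an array.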