B. Orientation and Mobility
The ability to perceive small trip hazards can be impaired
when they are of low contrast, particularly in the reduced
dynamic range of prosthetic vision. Vision processing can
augment the representation to keep an obstacle visible
against its background even when the difference in intensity
(or in depth, if depth is rendered on the phosphenes instead)
would be lost under the expected quantization.
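The loss described above can be illustrated with a minimal numerical sketch. Here we assume, purely for illustration, an 8-bit input coarsely quantized to 8 brightness levels as a stand-in for the limited dynamic range of phosphene stimulation; the scene values and level count are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical 8-bit scene: a trip hazard only 10 grey levels
# brighter than its background.
scene = np.full((6, 6), 100, dtype=np.uint8)  # background
scene[2:4, 2:4] = 110                         # low-contrast trip hazard

# Coarse quantization to 8 output levels, as a stand-in for the
# limited dynamic range of phosphene brightness.
levels = 8
quantized = scene.astype(np.int32) * levels // 256

# The 10-level contrast falls inside a single quantization bin,
# so the hazard disappears from the quantized representation.
print(np.unique(quantized))  # → [3]
```

Both the background (100) and the hazard (110) map to the same output level, so without augmentation the obstacle is invisible in the rendered image.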
In [15] we demonstrated a system for finding the ground
plane and ensuring that ground-based obstacles are apparent
in the visual scene. The ground plane was detected in
disparity images taken by a stereo rig mounted on a skateboard
helmet worn by the participant. The approach took particular
care to find the boundaries of objects with the ground plane,
including walls and trip hazards. The scene was represented
as a depth image to overcome the problems of depth perception
in low-dynamic-range visualizations, and the contrast of these
boundaries was increased so that potential trip hazards pop
out of the visual scene. Figure 3 shows how a low-contrast
trip hazard can be difficult to see in a regular simulation of
prosthetic vision but is clearly differentiated in the
augmented depth representation.
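A greatly simplified sketch of this idea (not the exact method of [15]) is to compare each pixel's depth against an estimated ground plane and force protruding pixels to full brightness so they survive quantization. The function name, threshold, and nearer-is-brighter mapping below are illustrative assumptions.

```python
import numpy as np

def augment_ground_obstacles(depth, ground_depth, protrusion_thresh=0.05):
    """Simplified sketch: render pixels that protrude above an
    estimated ground plane at maximum intensity so trip hazards
    pop out despite coarse quantization.

    depth            -- per-pixel depth from the stereo rig (metres)
    ground_depth     -- expected depth of the ground plane at each
                        pixel, e.g. from a plane fit to the disparity
                        image (hypothetical input)
    protrusion_thresh -- minimum protrusion (metres) to flag as an
                        obstacle (illustrative value)
    """
    # Obstacles sitting on the ground are nearer than the plane.
    protrusion = ground_depth - depth
    # Base depth image in [0, 1], nearer = brighter.
    out = np.clip(1.0 - depth / depth.max(), 0.0, 1.0)
    # Force obstacle pixels to full contrast.
    out[protrusion > protrusion_thresh] = 1.0
    return out
```

In the actual system the ground plane is fitted to the disparity image, and the contrast enhancement is applied along object-ground boundaries rather than to whole protruding regions, but the sketch conveys the core augmentation step.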
It is necessary to evaluate the performance of vision
processing algorithms for prosthetic vision before deployment.
One way to do this is to perform the evaluation in simulation
with normally sighted participants. For this purpose, we have
developed real-time software that allows customization of both
the input image streams and the rendered phosphene streams
[14]. This software was used to produce the simulated
prosthetic vision images shown in this paper.
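The core of such a renderer can be sketched in a few lines, assuming a common simplification (not the software of [14]): the input image is sampled at a regular electrode grid and each sample is drawn as a Gaussian phosphene. The grid size and blob width below are illustrative parameters.

```python
import numpy as np

def render_phosphenes(image, grid=(10, 10), sigma=2.0):
    """Minimal simulated-prosthetic-vision sketch: sample a
    grayscale image (values in [0, 1]) at a regular electrode grid
    and render each sample as a Gaussian phosphene.
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    yy, xx = np.mgrid[0:h, 0:w]
    # Electrode centres on a regular grid over the image.
    for cy in np.linspace(0, h - 1, grid[0]):
        for cx in np.linspace(0, w - 1, grid[1]):
            brightness = float(image[int(cy), int(cx)])
            blob = brightness * np.exp(
                -((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2)
            )
            # Overlapping phosphenes combine by maximum brightness.
            out = np.maximum(out, blob)
    return out
```

Real simulators additionally model effects such as phosphene dropout, brightness quantization, and temporal persistence, which this sketch omits.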