Our application uses images from a low-cost web camera placed in front of the work area, see Fig. 1,
where the recognised gestures act as the input for a computer 3D videogame. Thus, the players, rather than
pressing buttons, must use different gestures that our application should recognise. This adds the
requirement that the response time be very fast: users should not perceive a significant delay between
the instant they perform a gesture or motion and the instant the computer responds. Therefore, the algorithm
must run in real time on a conventional processor. Most known hand tracking and
recognition algorithms do not meet this requirement and are therefore inappropriate for visual interfaces. For instance,
particle-filtering-based algorithms can maintain multiple hypotheses simultaneously to track the
hands robustly, but their computational demands are high [4]. Recently, several works have been presented for
reducing the complexity of particle filters, for example, using a deterministic process to help the random
search [5]. However, these algorithms run in real time only when the hand occupies a small region of the image,
whereas in our application the hand fills most of the image.
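To make the cost structure concrete, the following is a minimal, generic sketch of one particle-filter cycle for 2D hand-position tracking; it is not the method of [4] or [5], and all names, noise parameters, and the Gaussian likelihood are illustrative assumptions. Its expense comes from evaluating a likelihood for every particle at every frame, which grows quickly when the likelihood must compare large image regions.

```python
import random
import math

def particle_filter_step(particles, observation, motion_std=5.0, obs_std=10.0):
    """One predict-weight-resample cycle for 2D position tracking.

    particles: list of (x, y) hypotheses; observation: a measured (x, y).
    motion_std and obs_std are illustrative noise parameters, not taken
    from the cited works.
    """
    # Predict: diffuse each hypothesis with a random-walk motion model.
    predicted = [(x + random.gauss(0, motion_std), y + random.gauss(0, motion_std))
                 for x, y in particles]

    # Weight: score each hypothesis with a Gaussian likelihood of the
    # observation. In a real tracker this step would compare image
    # features, and it dominates the running time.
    ox, oy = observation
    weights = [math.exp(-((x - ox) ** 2 + (y - oy) ** 2) / (2 * obs_std ** 2))
               for x, y in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]

    # Resample: draw a new particle set proportional to the weights, so
    # strong hypotheses are duplicated and weak ones die out.
    return random.choices(predicted, weights=weights, k=len(predicted))

def estimate(particles):
    """Posterior mean of the particle set (the tracked position)."""
    n = len(particles)
    return (sum(p[0] for p in particles) / n, sum(p[1] for p in particles) / n)
```

Running this cycle repeatedly concentrates the particle cloud around the observed position; the per-frame cost is linear in the number of particles times the cost of one likelihood evaluation, which is why a hand covering most of the image makes real-time operation hard.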