Research into new interaction devices has been an active topic for years. The topic is important because some game applications are used to support rehabilitation or serve as complementary therapy, rather than being used purely for entertainment [2].
Beyond conventional input devices such as joysticks, important work is also being done on input by gesture recognition, which can be divided into two broad approaches: sensor-based and computer-vision-based.
In the sensor-based approach, a sensor tracks an object to provide input data. Examples in this category include data gloves and motion trackers that measure displacement relative to a fixed coordinate system [3].
In computer-vision-based approaches, tracking is performed by vision algorithms. In this category, users typically do not need any tangible device to interact with the game [4].
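To make the vision-based idea concrete, the sketch below shows the simplest possible tracker: segment the frame by an intensity threshold and report the centroid of the segmented blob as the tracked position. The function name, the threshold value, and the synthetic frame are all illustrative assumptions, not part of any system cited above; real vision-based trackers add color/skin models, noise filtering, and temporal smoothing.

```python
import numpy as np

def track_bright_object(frame, threshold=200):
    """Return the (row, col) centroid of pixels brighter than `threshold`,
    or None if no pixel exceeds it. This is only the segmentation step of
    a vision-based tracker, reduced to its bare minimum (hypothetical
    helper, not from the cited works)."""
    mask = frame >= threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return (rows.mean(), cols.mean())

# Synthetic 8-bit grayscale frame with one bright 10x10 "hand" blob.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[35:45, 55:65] = 255

print(track_bright_object(frame))  # centroid of the blob, (39.5, 59.5)
```

Feeding the centroid of successive frames into the game loop yields a position signal comparable to what a sensor-based tracker would deliver, which is why the two approaches can often drive the same game logic.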