Abstract- Researchers in sign language recognition have customized different sensors to capture hand signs; gloves, digital cameras, depth cameras, and Kinect have been used in most systems. Because many signs are close to one another, input accuracy is an essential constraint for reaching high recognition accuracy. Although previous systems achieved high recognition accuracy, they lack stability in realistic environments due to variance in signing speed, lighting, and similar factors. In
this paper, a recognition system for Arabic Sign Language (ArSL) has been developed based on a new digital sensor called "Leap Motion". This sensor avoids major issues of vision-based systems such as skin color and lighting. Leap Motion captures hand and finger movements in 3D digital format, emitting 3D positional data in each frame of movement. These temporal and spatial features are fed into a multilayer perceptron (MLP) neural network. The system was tested on
50 different dynamic signs (distinguishable without non-manual features), and the recognition accuracy reached 88% for two different signers. Although Leap Motion tracks both hands accurately, it does not track
non-manual features. The system can be enhanced by adding other sensors to track non-manual features such as facial expressions and body poses; such a sensor could work simultaneously with Leap Motion to capture all of a sign's features.