Abstract:
Gesture generation is one of the most important tasks in humanoid interfaces because hand gestures performed by humanoid robots and animated agents improve the comprehensibility of conversational content. This study proposes a method for automatically generating iconic drawing gestures using image-processing and machine-learning techniques. First, we collected a set of graphic images for over 1,000 objects and classified the objects into four shape types; these shapes were used as the drawing gesture shapes. By implementing a gesture shape decision mechanism, we then built a system that takes a sentence as input and produces hand-gesture animations synchronized with synthetic speech.
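The pipeline summarized above (map an object word to one of four drawing-gesture shapes, then time each gesture against the speech) could be sketched roughly as follows. All names, the shape labels, the toy lexicon, and the fixed per-word timing here are illustrative assumptions, not the paper's actual data or classifier:

```python
# Hypothetical sketch of a gesture shape decision mechanism.
# The four shape labels and the word-to-shape lexicon below are
# placeholders; the paper derives its mapping from graphic images
# with a learned classifier, which is not reproduced here.

SHAPE_TYPES = ("circle", "square", "triangle", "line")

# Toy lookup standing in for the learned object-to-shape classifier.
OBJECT_SHAPE = {
    "ball": "circle",
    "box": "square",
    "mountain": "triangle",
    "stick": "line",
}

def decide_shape(word: str):
    """Return the drawing-gesture shape for a word, or None if unknown."""
    return OBJECT_SHAPE.get(word.strip(".,!?").lower())

def plan_gestures(sentence: str, seconds_per_word: float = 0.4):
    """Pair each recognized object word with an onset time,
    approximating synchronization with synthetic speech by assuming
    a fixed duration per spoken word (an illustrative simplification)."""
    plan = []
    for i, word in enumerate(sentence.split()):
        shape = decide_shape(word)
        if shape is not None:
            plan.append({"word": word, "shape": shape, "onset": i * seconds_per_word})
    return plan

print(plan_gestures("Throw the ball into the box"))
```

Under these assumptions, the sentence above yields two planned gestures: a circle for "ball" and a square for "box", each tagged with the onset time of its word.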