In this presentation, I will outline the importance of gestures and physical movement in human communication, and discuss scenarios in which human-robot interfaces can benefit from gesture recognition: humanoid robots need fast, reasonably robust algorithms to recognize people and their actions. In previous work, I examined human reactions to robot movements; here I tackle the opposite problem and aim to build the tools a robot needs to recognize human motion (and subsequently react to it).
Existing gesture recognizers typically suffer from several limitations: invasive sensors, an assumption of highly reliable human tracking, small gesture vocabularies, and spatial restrictions. Neuroscientific evidence shows links between speech and gesture perception, so I propose adapting existing speech recognition techniques to gestural data, which nevertheless presents its own peculiarities and challenges. I will show preliminary results with Hidden Markov Models trained on patterns of time-varying 3D human joint coordinates.
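To illustrate the speech-recognition analogy, the core scoring step can be sketched with the standard HMM forward algorithm. This is a minimal, self-contained example, not the system described above: the model parameters and the two-state "wave" gesture are hypothetical, and it assumes the continuous 3D joint coordinates have already been vector-quantized into a small alphabet of discrete pose symbols (real gesture data would instead call for continuous-emission HMMs).

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Forward algorithm: log P(obs | model) for a discrete-emission HMM.

    obs   -- sequence of integer observation symbols
    start -- initial state probabilities, start[s]
    trans -- transition probabilities, trans[s_prev][s]
    emit  -- emission probabilities, emit[s][symbol]
    """
    n_states = len(start)
    # Initialize with the first observation.
    alpha = [start[s] * emit[s][obs[0]] for s in range(n_states)]
    # Propagate forward through the rest of the sequence.
    for o in obs[1:]:
        alpha = [
            sum(alpha[sp] * trans[sp][s] for sp in range(n_states)) * emit[s][o]
            for s in range(n_states)
        ]
    return math.log(sum(alpha))

# Toy two-state model for one hypothetical gesture class ("wave"): the states
# might correspond to "arm rising" and "arm falling"; symbols 0/1/2 stand for
# quantized wrist heights (low/mid/high).
start = [0.9, 0.1]
trans = [[0.7, 0.3],
         [0.4, 0.6]]
emit  = [[0.1, 0.3, 0.6],   # "rising" state favors high wrist positions
         [0.6, 0.3, 0.1]]   # "falling" state favors low wrist positions

# Classification would compare log-likelihoods across one model per gesture;
# only a single model is shown here.
sequence = [0, 1, 2, 2, 1, 0]
score = forward_log_likelihood(sequence, start, trans, emit)
```

As in speech recognition, an observed sequence is assigned to the gesture model under which its log-likelihood is highest; training the per-gesture parameters from recorded joint trajectories (e.g. via Baum-Welch) is the part the preliminary results above address.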