Speech, gaze and gesturing: multimodal conversational interaction with the Nao robot
In this talk I will share my experiences from a month-long project workshop on programming a Nao robot for multimodal conversational interaction. During the project we implemented a spoken dialogue system on Nao that supports open-domain conversations using Wikipedia as a knowledge source. In the process, we explored how Nao's various modalities (vision, sound and speech perception and synthesis, and body and head movements) could be used to develop multimodal interaction capabilities. We conducted a user study to evaluate five aspects of the system, and I will present preliminary results from this study.
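
As a rough illustration of the Wikipedia-backed response step described above, the sketch below uses NAOqi's Python ALProxy interface to the ALTextToSpeech module together with the third-party wikipedia package; the names ROBOT_IP and answer_and_speak are hypothetical placeholders for illustration, not the project's actual code.

    # Minimal sketch, assuming the NAOqi Python SDK (Python 2) and the
    # third-party `wikipedia` package (pip install wikipedia).
    import wikipedia
    from naoqi import ALProxy

    ROBOT_IP = "192.168.1.10"  # placeholder; replace with your Nao's address

    def answer_and_speak(topic):
        """Fetch a short Wikipedia summary for `topic` and speak it on Nao."""
        summary = wikipedia.summary(topic, sentences=2)      # first two sentences
        tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)      # 9559 is NAOqi's default port
        tts.say(summary.encode("utf-8"))                     # NAOqi's SDK is Python 2; pass encoded text

    answer_and_speak("Nao (robot)")

In the actual system the topic would come from Nao's speech recognition rather than a hard-coded string, and the spoken answer would be coordinated with the gaze and gesturing behaviours discussed in the talk.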