VirtualRobot

Exploring situated interaction with social robots using augmented reality

In this project, we aim to explore the use of Augmented Reality (AR) to investigate how multimodal behaviour (speech, facial expressions, full-body motion, conversational formations) and embodiment affect turn-taking and joint attention in human-robot interaction. AR is a technology that overlays computer graphics on the real world. Unlike Virtual Reality (VR), AR does not replace the environment but enhances it. This is important for social interaction, since it allows users to perceive their own bodies in a natural way, and physical objects (such as a table) can be mixed with virtual objects. The technology thus makes it possible to develop virtual replicas of robots and test them in interaction with humans before they are manufactured.

Our first objective is to integrate this technology with IrisTK (www.iristk.net), our existing open-source software platform for conversational human-robot interaction, and with open-source full-body animation and group formation controllers being developed in Unity 3D (www.unity3d.com), a powerful computer game engine. By combining these tools, we will create a novel research platform for social robotics that can be used at KTH and elsewhere.

Our second objective is to use this platform to study how the design of the robot affects turn-taking and joint attention in human-robot interaction. In this scenario, one human and two robots will be positioned at a table and collaborate to solve a task. AR will be used to simulate not only the two robots, but also objects on the table that can be manipulated and therefore constitute a target for joint attention.

Our third objective is to use the research platform to study how mobile robots can better engage in multi-party interaction, and how the spatial formation of small groups affects the speaker roles of the users and the robots.
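To make the interaction scenario concrete, the sketch below illustrates one way a multi-party turn-taking controller could track speaker roles and a shared attention target for one human and two robots at a table. This is a hypothetical illustration, not IrisTK code; all class and method names (Participant, TurnManager, end_of_turn, set_attention) are assumptions made for this example.

```python
from dataclasses import dataclass

# Hypothetical sketch (not IrisTK): a minimal controller for speaker roles
# and joint attention in a three-party interaction.

@dataclass
class Participant:
    name: str
    is_robot: bool

class TurnManager:
    def __init__(self, participants):
        self.participants = participants
        self.current_speaker = None
        self.attention_target = None  # e.g. an object on the shared table

    def end_of_turn(self, addressee=None):
        """Select the next speaker: the explicit addressee if one was
        signalled (e.g. by gaze), otherwise the first participant who
        did not just speak."""
        if addressee is not None:
            self.current_speaker = addressee
        else:
            self.current_speaker = next(
                p for p in self.participants if p is not self.current_speaker)
        return self.current_speaker

    def set_attention(self, target):
        # All agents gaze at the same target to establish joint attention.
        self.attention_target = target
```

In a full system, end_of_turn would be driven by speech and gaze events, and set_attention would steer the virtual robots' head and eye animations toward the manipulated AR object.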

Staff:
Gabriel Skantze (Project leader)
Christopher Peters

Funding: SRA/KTH

Duration: 2017 - 2019

Published by: TMH, Speech, Music and Hearing
Webmaster, webmaster@speech.kth.se

Last updated: 2012-11-09