

Greta Listening to Expressive Music

On this page we present an application in which the embodied conversational agent Greta listens to an expressive music performance. Starting from studies on expressivity in Embodied Conversational Agents (ECAs) and in music performance, we developed a system that gives the user visual feedback on an acoustic source through a graphical representation of a human face.

The expressive behavior of Greta depends on the emotional and acoustic characteristics of the music performance she is listening to. The system incorporates a previously developed module, realized by KTH Stockholm and Uppsala University, for the real-time extraction and analysis of acoustic cues from a music performance. It has been interfaced with the ECA Greta, developed at the University of Rome "La Sapienza" and the University of Paris 8.
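The mapping from extracted acoustic cues to the agent's expressive behavior can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the project's actual code: the cue names (tempo, sound level, articulation) follow the CUEX-style variables extracted by the KTH/Uppsala module, and the output dimensions (spatial extent, temporal extent, fluidity, power) follow the expressivity model of Hartmann et al.; the specific mapping rules below are hypothetical.

```python
# Hypothetical sketch of a cue-to-expressivity mapping.
# Inputs are normalized acoustic cues in [0, 1]; outputs are
# expressivity parameters in [-1, 1], in the spirit of the
# Hartmann et al. model. The real Greta mapping is not shown here.

def clamp(x, lo=-1.0, hi=1.0):
    """Keep a parameter inside the allowed expressivity range."""
    return max(lo, min(hi, x))

def cues_to_expressivity(tempo, sound_level, articulation):
    """Map normalized acoustic cues to expressivity parameters.

    Faster, louder, more staccato playing yields more powerful,
    less fluid behavior; slow legato playing yields the opposite.
    (articulation: 0 = legato, 1 = staccato.)
    """
    return {
        "temporal_extent": clamp(2.0 * tempo - 1.0),
        "power":           clamp(2.0 * sound_level - 1.0),
        "fluidity":        clamp(1.0 - 2.0 * articulation),
        "spatial_extent":  clamp(sound_level + tempo - 1.0),
    }

# Example: a fast, loud, staccato performance drives energetic,
# angular behavior in the agent.
params = cues_to_expressivity(tempo=0.9, sound_level=0.8, articulation=0.9)
print(params)
```

In a real-time setting such a mapper would run once per analysis frame, with the resulting parameters smoothed over time before being sent to the facial animation engine.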

YouTube video


Acknowledgments

This project originally started as a research exchange within the HUMAINE NoE called "From acoustic cues to expressive ECAs".


References


R. Bresin. What color is that music performance? In International Computer Music Conference - ICMC 2005, Barcelona, 2005.

R. Bresin and P. N. Juslin. Rating expressive music performance with colours. Manuscript submitted for publication, 2005.

M. Mancini, R. Bresin, and C. Pelachaud. Acoustic cues driven facial expression. IEEE Transactions on Speech and Audio Processing, Special Issue on Expressive Speech Synthesis, 2005.

M. Mancini, R. Bresin, and C. Pelachaud. From acoustic cues to expressive ECAs. In The 6th International Workshop on Gesture in Human-Computer Interaction and Simulation. VALORIA, Université de Bretagne Sud, France, 2005.

A. Friberg, E. Schoonderwaldt, and P. N. Juslin. CUEX: An algorithm for extracting expressive tone variables from audio recordings. Acta Acustica united with Acustica, in press.

B. Hartmann, M. Mancini, and C. Pelachaud. Towards affective agent action: Modelling expressive ECA gestures. In Proceedings of the IUI Workshop on Affective Interaction, San Diego, CA, January 2005.

M. Mancini, B. Hartmann, C. Pelachaud, A. Raouzaiou, and K. Karpouzis. Expressive avatars in MPEG-4. In IEEE International Conference on Multimedia & Expo, Amsterdam, 2005.

A. Friberg, E. Schoonderwaldt, P. N. Juslin, and R. Bresin. Automatic real-time extraction of musical expression. In International Computer Music Conference - ICMC 2002, pages 365-367, San Francisco, 2002. International Computer Music Association.