A multi-modal dialogue system for finding apartments in Stockholm
The goal of the AdApt project was to provide a foundation for the development and evaluation of advanced multimodal spoken dialogue systems.
Within the project, a spoken dialogue system was developed in which a user could cooperate with an animated talking agent to solve more complex problems than had previously been possible in our systems.
The AdApt project was a joint project between TMH and Telia Research within the framework of CTT (the Centre for Speech Technology).
The domain chosen was one where multimodal communication matters and one that engaged a broad audience: the real estate market in downtown Stockholm. In the AdApt system, the computer played a role close to that of a real estate broker: helping people find apartments, describing apartments, answering questions, and lending support by finding information in apartment ads.
The information used by the system came from authentic apartment ads published on the internet.
In addition to spoken input, the user could provide information by clicking on or marking areas of an interactive map of downtown Stockholm. The system output consisted of a talking animated head and animated graphical icons on the interactive map. The system could also present information as text in the form of tables, although this capability was rarely used.
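The combination of a map selection with spoken constraints can be thought of as a filtering step over the apartment ads. The sketch below is purely illustrative: the data model, field names, and function are assumptions for this example, not the actual AdApt implementation.

```python
from dataclasses import dataclass

# Hypothetical, simplified apartment-ad record; the real system drew on
# authentic ads published on the internet.
@dataclass
class Ad:
    district: str
    rooms: int
    price: int  # asking price in SEK

def match_ads(ads, selected_districts=None, min_rooms=None, max_price=None):
    """Keep only ads that satisfy the map selection and spoken constraints.

    selected_districts models areas the user marked on the map;
    min_rooms and max_price model constraints given in speech.
    """
    result = []
    for ad in ads:
        if selected_districts is not None and ad.district not in selected_districts:
            continue
        if min_rooms is not None and ad.rooms < min_rooms:
            continue
        if max_price is not None and ad.price > max_price:
            continue
        result.append(ad)
    return result

ads = [
    Ad("Södermalm", 2, 1_500_000),
    Ad("Östermalm", 3, 3_200_000),
    Ad("Kungsholmen", 2, 1_900_000),
]

# The user marks Södermalm and Kungsholmen on the map, then says
# "two rooms for at most two million":
hits = match_ads(ads,
                 selected_districts={"Södermalm", "Kungsholmen"},
                 min_rooms=2,
                 max_price=2_000_000)
```

In a dialogue system, constraints like these would typically accumulate over several turns, with each user utterance or map gesture narrowing (or relaxing) the current set of candidate apartments.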
Group: Speech Communication and Technology
Rolf Carlson (Project leader)
Duration: 1998-01-01 - 2004-12-31
Publications:
(2005). A model for multi-modal dialogue system output applied to an animated talking head. In Minker, W., Bühler, D., & Dybkjaer, L. (Eds.), Spoken Multimodal Human-Computer Dialogue in Mobile Environments, Text, Speech and Language Technology (pp. 93-113). Dordrecht, The Netherlands: Kluwer Academic Publishers. [abstract] [pdf]
(2004). Contextual reasoning in multimodal dialogue systems: two case studies. In Proceedings of The 8th Workshop on the Semantics and Pragmatics of Dialogue, Catalog'04 (pp. 19-21). Barcelona. [pdf]
(2003). Linguistic adaptations in spoken human-computer dialogues. Empirical studies of user behavior. Doctoral dissertation. [pdf]
(2002). Specification and realisation of multimodal output in dialogue systems. In Proc of ICSLP 2002 (pp. 181-184). Denver, Colorado, USA. [abstract] [pdf]
(2002). GESOM - A model for describing and generating multi-modal output. In Proc of ISCA Workshop Multi-Modal Dialogue in Mobile Environments. Kloster Irsee, Germany. [abstract] [pdf]
(2002). Turn-taking gestures and hour-glasses in a multi-modal dialogue system. In Proc of ISCA Workshop Multi-Modal Dialogue in Mobile Environments. Kloster Irsee, Germany. [abstract] [pdf]
(2002). Developing multimodal spoken dialogue systems. Empirical studies of spoken human-computer interaction. Doctoral dissertation, KTH. [pdf]
(2002). Constraint manipulation and visualization in a multimodal dialogue system. In Proc of the ISCA Workshop Multi-Modal Dialogue in Mobile Environments. Kloster Irsee, Germany. [abstract] [pdf]
(2001). Real-time handling of fragmented utterances. In Proceedings of NAACL 2001 Workshop: Adaptation in Dialogue Systems. Pittsburgh, PA. [abstract] [pdf]
(2000). Linguistic adaptations in spoken and multimodal dialogue systems. Licentiate dissertation, KTH.
(2000). Modality convergence in a multimodal dialogue system. In Poesio, M., & Traum, D. (Eds.), Proc of Götalog 2000, Fourth Workshop on the Semantics and Pragmatics of Dialogue (pp. 29-34). Gothenburg. [pdf]
(2000). A comparison of disfluency distribution in a unimodal and a multimodal speech interface. In Yuan, B., Huang, T., & Tang, X. (Eds.), Proc of ICSLP 2000, 6th Intl Conf on Spoken Language Processing (pp. 626-629). Beijing. [pdf]
(2000). Positive and negative user feedback in a spoken dialogue corpus. In Yuan, B., Huang, T., & Tang, X. (Eds.), Proc of ICSLP 2000, 6th Intl Conf on Spoken Language Processing (pp. 589-592). Beijing. [pdf]
(2000). AdApt - a multimodal conversational dialogue system in an apartment domain. In Yuan, B., Huang, T., & Tang, X. (Eds.), Proc of ICSLP 2000, 6th Intl Conf on Spoken Language Processing (pp. 134-137). Beijing: China Military Friendship Publish. [abstract] [pdf]