A multi-modal dialogue system for finding apartments in Stockholm
The goal of the AdApt project was to provide a foundation for the development and evaluation of advanced multimodal spoken dialogue systems.
Within the project, a spoken dialogue system was developed in which a user could cooperate with an animated talking agent to solve more complex problems than our previous systems had supported.
The AdApt project was a collaboration between TMH and Telia Research within the framework of CTT (the Centre for Speech Technology).
The domain chosen was one where multimodal communication matters and one that would engage a broad audience: the real estate market in downtown Stockholm. In the AdApt system, the computer played a role similar to that of a real estate broker: helping people find apartments, describing apartments, answering questions, and providing support by retrieving information from apartment ads.
The information used by the system came from authentic apartment ads published on the internet.
In addition to spoken input, the user could provide information by clicking on or marking areas of an interactive map of downtown Stockholm. The system's output consisted of a talking animated head and animated graphical icons on the interactive map. The system could also present information as text in the form of tables, although this capability was rarely used.
Duration: 1998-01-01 - 2004-12-31
(2004). Contextual reasoning in multimodal dialogue systems: two case studies. In Proceedings of the 8th Workshop on the Semantics and Pragmatics of Dialogue, Catalog '04 (pp. 19-21). Barcelona. [pdf]
(2003). Linguistic adaptations in spoken human-computer dialogues: Empirical studies of user behavior. Doctoral dissertation. [pdf]
(2002). Developing multimodal spoken dialogue systems: Empirical studies of spoken human-computer interaction. Doctoral dissertation, KTH. [pdf]
(2000). Linguistic adaptations in spoken and multimodal dialogue systems. Licentiate dissertation, KTH.
(2000). Modality convergence in a multimodal dialogue system. In Poesio, M., & Traum, D. (Eds.), Proc of Götalog 2000, Fourth Workshop on the Semantics and Pragmatics of Dialogue (pp. 29-34). Gothenburg. [pdf]
(2000). A comparison of disfluency distribution in a unimodal and a multimodal speech interface. In Yuan, B., Huang, T., & Tang, X. (Eds.), Proc of ICSLP 2000, 6th Intl Conf on Spoken Language Processing (pp. 626-629). Beijing. [pdf]
(2000). Positive and negative user feedback in a spoken dialogue corpus. In Yuan, B., Huang, T., & Tang, X. (Eds.), Proc of ICSLP 2000, 6th Intl Conf on Spoken Language Processing (pp. 589-592). Beijing. [pdf]