CORDIAL

Coordination of Attention and Turn-taking in Situated Interaction

Conversation relies on close coordination between speakers. Since it is difficult to speak and listen at the same time, speakers must coordinate turn-taking, and since human cognition is limited by the current focus of attention, they must also achieve a "joint focus of attention". This coordination is facilitated by several different means, including gaze, pointing gestures, prosody, syntax and semantics. The main goal of the proposed project is to investigate how these means are used to coordinate turn-taking and joint attention, and to gain an understanding of the complex interplay between these phenomena, as well as other closely related phenomena such as feedback and grounding.

We will use an analysis-by-synthesis approach. First, we will design an experimental setup in which subjects are given a collaborative task that involves face-to-face conversation about physical objects on a table. These interactions will be recorded and annotated, and the data will then be used to develop computational models of the observed phenomena and to implement a robot that can reproduce some of the behaviour. By letting human subjects interact with the robot in this setting, we can study how the robot's behaviour affects the subjects' behaviour. We also plan to compare dyadic interactions with multi-party interactions. The results of the project will not only provide insights into the studied phenomena, but will also have important implications for applications such as social robots.
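
To make the modelling step concrete, the sketch below shows one possible (purely illustrative) form such a computational model could take: a simple classifier, trained on annotated cues of the kind the project records, that estimates whether the current speaker will keep or yield the turn at a pause. This is not the project's actual model; all feature names and values are invented for the example, and scikit-learn's logistic regression stands in for whatever modelling technique the project ultimately adopts.

    # Illustrative sketch: predicting turn yield vs. turn keep at a pause
    # from annotated multimodal cues. Features and data are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [final pitch slope, pause length (s),
    #            gaze at listener (0/1), utterance syntactically complete (0/1)]
    X = np.array([
        [-0.8, 0.9, 1, 1],   # falling pitch, long pause, gaze at listener -> yield
        [ 0.5, 0.2, 0, 0],   # rising pitch, short pause, gaze averted -> keep
        [-0.6, 1.1, 1, 1],
        [ 0.3, 0.3, 0, 0],
        [-0.7, 0.8, 1, 1],
        [ 0.4, 0.1, 0, 1],
    ])
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = turn yield, 0 = turn keep

    model = LogisticRegression().fit(X, y)

    # A robot could query such a model at each pause to decide
    # whether to take the turn or keep listening.
    pause = np.array([[-0.5, 0.7, 1, 1]])
    print("P(turn yield) =", model.predict_proba(pause)[0, 1])

In the analysis-by-synthesis loop, a model of this kind would drive the robot's turn-taking behaviour, and the resulting human-robot interactions would in turn provide new data for refining it.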

Staff:
Gabriel Skantze (Project leader)

Funding: Swedish Research Council (VR), grant 2013-1403

Duration: 2014-01-01 - 2018-12-31
