
TIG – Timing of intonation and gestures in spoken communication

The goal of the project is to understand the timing relationships between intonation and gesture in spontaneous speech. This will be investigated through semi-automatic extraction of co-speech gestures from a large and varied dataset (audio, video, motion capture) and through analysis of the function and synchronization of speech and gestures.

The melody of speech, or intonation, plays a crucial role in spoken interaction. By altering the speech melody, speakers can highlight important words and phrases, making them prominent and more meaningful. Speakers also use changing melodies and rhythms to signal when it is time for another speaker to talk (turn-taking) and to give others feedback (such as "mm" or "uh-huh"). The exact timing of these melodic movements is controlled with considerable precision by the speaker, and they occur at particular places in relation to syllables. Body and facial gestures regularly accompany the speech melody and often serve the same function as intonation, but until now it has not been possible to measure the timing of these gestures with the same precision as intonation.

The aim of this research project is to measure precisely the timing relationship between speech melodies and gestures using a large database of recorded conversations in Swedish. The participants have been recorded with high-quality audio, video, and motion-capture equipment in a specially designed studio. The results will have implications for our understanding of how speech and gestures are planned and coordinated in the brain, and will also enable better modeling of speech and gestures in speech applications such as robots and avatars.
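As a concrete illustration of the kind of synchronization analysis described above, the sketch below pairs pitch-accent peaks in an F0 contour with gesture apexes in a motion-capture trajectory and reports the lag between each matched pair. It is a minimal sketch under assumed conventions, not the project's actual pipeline: the apex definition (a local speed minimum of a wrist marker), the prominence thresholds, the sampling rates, and the 300 ms matching window are all illustrative choices.

```python
import numpy as np
from scipy.signal import find_peaks


def peak_times(signal, rate_hz, min_prominence):
    """Times (s) of sufficiently prominent local maxima in a 1-D signal."""
    idx, _ = find_peaks(signal, prominence=min_prominence)
    return idx / rate_hz


def f0_peak_times(f0_hz, rate_hz=100.0):
    """Pitch-accent peak candidates: prominent maxima in an F0 contour.
    Unvoiced frames are assumed to be NaN and are zeroed out first.
    The 20 Hz prominence threshold is an illustrative assumption."""
    return peak_times(np.nan_to_num(f0_hz, nan=0.0), rate_hz, min_prominence=20.0)


def gesture_apex_times(wrist_xyz, rate_hz=120.0):
    """Gesture-apex candidates: prominent minima of marker speed, i.e. the
    momentary hold at the end of a stroke. wrist_xyz is an (n_frames, 3)
    motion-capture trajectory; the threshold is an assumed placeholder."""
    speed = np.linalg.norm(np.diff(wrist_xyz, axis=0), axis=1) * rate_hz
    return peak_times(-speed, rate_hz, min_prominence=0.05)


def pitch_gesture_lags(pitch_peaks, apexes, window_s=0.3):
    """Lag (s) from each pitch peak to the nearest apex within the window.
    Positive values mean the apex follows the pitch peak."""
    if len(apexes) == 0:
        return np.array([])
    lags = []
    for t in pitch_peaks:
        nearest = apexes[np.argmin(np.abs(apexes - t))]
        if abs(nearest - t) <= window_s:
            lags.append(nearest - t)
    return np.array(lags)
```

On real data, the F0 contour would come from a pitch tracker and the gesture trajectories from the semi-automatically extracted co-speech gestures; the resulting lag distribution then indicates how tightly the two channels are synchronized.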

Staff:
David House (Project leader)

Funding: RJ (Bank of Sweden Tercentenary Foundation)

Duration: 2012-08 - 2017-01

Belongs to: Speech, Music and Hearing (TMH)