RepreConvAI - Representation Learning for Conversational AI

Being able to communicate with machines through spoken conversation, in the same way we naturally communicate with each other, has been a long-standing vision in both science fiction and research labs, and it is often considered a hallmark of human intelligence. In recent years, so-called Conversational AI has started to become a reality, in the form of smart speakers, voice assistants, and social robots. However, when such systems are built for specific domains, training data is typically available only in very limited quantities.

This problem can be addressed using representation learning, where generic models are first trained in a self-supervised fashion on large quantities of data and then fine-tuned for downstream tasks in few-shot settings. Although there has been much recent research on representation learning in the context of general language modelling, this work has mainly focused on written monologue. Such language use differs from spoken conversation in many ways, and there have been few attempts at representation learning that combine language modelling on dialogue data with speech representations. To address this challenge, we will develop representation learning tasks, model analysis tools, and downstream tasks specifically targeted towards spoken conversation.
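As a rough illustration of this pretrain-then-fine-tune pipeline, the sketch below (in PyTorch) shows the two stages in miniature: a small transformer encoder is first pretrained with a masked-token prediction objective on unlabeled sequences, and then fine-tuned with a classifier head on a handful of labeled examples. This is a generic sketch, not the project's actual models, objectives, or data; the toy vocabulary, the random batches, and the binary downstream label are all illustrative assumptions.

# Minimal sketch of self-supervised pretraining followed by few-shot
# fine-tuning. Toy sizes and random data stand in for real corpora.
import torch
import torch.nn as nn

VOCAB, DIM, MASK_ID = 1000, 64, 0   # hypothetical vocab; id 0 reserved as [MASK]

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)
    def forward(self, x):
        return self.enc(self.emb(x))  # (batch, seq, DIM) representations

encoder = Encoder()
lm_head = nn.Linear(DIM, VOCAB)      # predicts the identity of masked tokens

# Stage 1: self-supervised pretraining (masked-token prediction).
opt = torch.optim.Adam(list(encoder.parameters()) + list(lm_head.parameters()), lr=1e-4)
for _ in range(100):                                   # stands in for many steps on large unlabeled data
    tokens = torch.randint(1, VOCAB, (8, 32))          # toy batch of token sequences
    mask = torch.rand(tokens.shape) < 0.15             # mask 15% of positions
    corrupted = tokens.masked_fill(mask, MASK_ID)
    logits = lm_head(encoder(corrupted))
    loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: few-shot fine-tuning on a small labeled downstream task.
clf_head = nn.Linear(DIM, 2)                           # e.g. a binary dialogue-level label
opt = torch.optim.Adam(list(encoder.parameters()) + list(clf_head.parameters()), lr=1e-5)
few_x = torch.randint(1, VOCAB, (16, 32))              # only a handful of labeled examples
few_y = torch.randint(0, 2, (16,))
for _ in range(20):
    logits = clf_head(encoder(few_x).mean(dim=1))      # mean-pool over time, then classify
    loss = nn.functional.cross_entropy(logits, few_y)
    opt.zero_grad(); loss.backward(); opt.step()

In practice, a common variant when labeled data is especially scarce is to freeze all or part of the pretrained encoder and train only the task head during fine-tuning.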

Staff:

Livia Qian (Doctoral student)

Funding: WASP

Duration: 2022-2026
