StyleBot - Style-adaptation of non-verbal behaviour for social robots

This project aims to investigate fundamental co-adaptation behaviours in face-to-face human-robot interaction, and to collect the data and knowledge required to build social robots that can engage in face-to-face interaction and continuously adapt their behaviour towards that of the user.

As humans, we continuously adapt our behaviour to the situation we are in. In communication with other humans, we automatically adjust our communicative style so that it is appropriate for the context. This continuous adaptation is an inherent part of human functioning and one of the key factors behind our communicative abilities. It is also an aspect that is fundamentally lacking in current speech-based interfaces, such as VPAs (virtual personal assistants) and social robots.

Communicative adaptation in humans works on many levels and serves multiple purposes. For example, it ensures information transfer while conserving energy: in a noisy train station, or when addressing a crowd of people, we speak up and articulate more clearly in order to be heard, but in a quiet room or a library we adopt a soft tone of voice. It also has a crucial social dimension: in face-to-face interaction, we reliably adopt the poses, facial expressions, mannerisms and speaking styles of the person we are talking to. This phenomenon, known as behavioural mimicry, emerges during infancy and is a fundamental mechanism by which we learn social codes and behaviours, but it remains an important social function throughout our lives. Mimicry has been shown to increase empathy, liking and affiliation between individuals, and it has been referred to as a social glue because it helps to create strong ties and relationships. In addition, mimicry has been shown to increase learning and attention in tutoring situations.
Social robots are currently being explored and developed for applications in an increasing number of fields, such as education, service, retail, health, elderly care, simulation and training, and entertainment. It can be expected that in those areas that involve long-term or sustained interaction (e.g. a personal tutor or a companion robot), the ability to adapt in order to increase engagement, liking and attention will be an important factor of success.
This project will investigate, in a data-driven manner, how a range of non-verbal behaviours could be implemented and adapted based on user behaviour, how these signals are best manifested in a social robot to support the interaction, and what the effects of such behavioural adaptation are.
Funding: Vetenskapsrådet #2018-05409
Duration: 2019-01-01 - 2022-12-31  
Belongs to: Speech, Music and Hearing
Last changed: Jan 17, 2020