

Robust Non-Verbal Expression in Virtual Agents and Humanoid Robots: New Methods for Augmenting Stylized Gestures with Sound

Expression capabilities in current humanoid robots are limited because they have far fewer degrees of freedom than humans. Body motion can nevertheless be communicated successfully with very simple graphical representations (e.g., point-light displays) and cartoonish sounds.
The aim of this project is to establish new methods, based on the sonification of simplified movements, for achieving robust interaction between users and humanoid robots and virtual agents, combining the competences of the research team members in the fields of social robotics, sound and music computing, affective computing, and body motion analysis. We will engineer sound models implementing effective mappings between stylized body movements and sound parameters, enabling an agent to express high-level qualities of body motion through sound. These mappings are paramount for providing feedback on, and supporting the understanding of, body motion.
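As an illustration of what such a mapping can look like, the sketch below maps two hypothetical high-level motion qualities (an "energy" and a "smoothness" value, both normalized to 0–1) to simple synthesis parameters. The quality names, ranges, and parameter choices are assumptions for the example, not the project's actual sound models.

```python
def map_motion_to_sound(energy: float, smoothness: float) -> dict:
    """Map high-level motion qualities to simple sound-synthesis parameters.

    This is an illustrative mapping only: more energetic movement becomes
    louder with faster vibrato, and jerkier (less smooth) movement is
    rendered at a higher pitch.
    """
    # Clamp inputs to the assumed normalized range [0, 1].
    energy = min(max(energy, 0.0), 1.0)
    smoothness = min(max(smoothness, 0.0), 1.0)
    return {
        "amplitude": 0.1 + 0.9 * energy,               # more energy -> louder
        "pitch_hz": 220.0 + 660.0 * (1.0 - smoothness),  # jerkier -> higher pitch
        "vibrato_rate_hz": 2.0 + 6.0 * energy,         # more energy -> faster vibrato
    }

# Example: an energetic, somewhat jerky gesture.
params = map_motion_to_sound(energy=0.8, smoothness=0.3)
print(params)
```

In a real system these parameters would drive a synthesizer in real time as the motion qualities are estimated from tracking data.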
The project will result in new theories, guidelines, models, and tools for the sonic representation of high-level body motion qualities in interactive applications. This work contributes to the growing research fields of data sonification, interactive sonification, embodied cognition, multisensory perception, and non-verbal and gestural communication in robots.


Perception of Mechanical Sounds Inherent to Expressive Gestures of a NAO Robot

Supplementary material: data, sounds and videos

From Vocal-Sketching to Sound Models

Supplementary material: code, transcriptions, sounds and videos

Sonic Characteristics of Robots in Films

Supplementary material: videos



  • KTH Small Visionary Project grant (2016)
  • Swedish Research Council, grant 2017-03979
  • NordForsk’s Nordic University Hub “Nordic Sound and Music Computing Network” (NordicSMC), project number 86892

Duration of the project: 2018-2021


R. Bresin et al., "Robust Non-Verbal Expression in Humanoid Robots: New Methods for Augmenting Expressive Movements with Sound," in Workshop on Sound in Human-Robot Interaction at HRI 2021, 2021.
A. B. Latupeirissa and R. Bresin, "Understanding non-verbal sound of humanoid robots in films," in Workshop on Mental Models of Robots at HRI 2020, Cambridge, UK, 2020.
A. B. Latupeirissa, C. Panariello and R. Bresin, "Exploring emotion perception in sonic HRI," in 17th Sound and Music Computing Conference, 2020, pp. 434-441.
C. Panariello et al., "From vocal sketching to sound models by means of a sound-based musical transcription system," in Proceedings of the Sound and Music Computing Conferences, 2019, pp. 167-173.
A. B. Latupeirissa, E. Frid and R. Bresin, "Sonic characteristics of robots in films," in Proceedings of the 16th Sound and Music Computing Conference, 2019, pp. 1-6.
E. Frid, R. Bresin and S. Alexanderson, "Perception of Mechanical Sounds Inherent to Expressive Gestures of a NAO Robot - Implications for Movement Sonification of Humanoids," in Proceedings of the 15th Sound and Music Computing Conference, 2018.
A. E. Vijayan et al., "Using Constrained Optimization for Real-Time Synchronization of Verbal and Nonverbal Robot Behavior," in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 1955-1961.
S. Alexanderson et al., "Mimebot—Investigating the Expressibility of Non-Verbal Communication Across Agent Embodiments," ACM Transactions on Applied Perception, vol. 14, no. 4, 2017.