
Nordic SMC

The Nordic Sound and Music Computing Network (NordicSMC) brings together internationally leading sound and music computing researchers from all five Nordic countries: Aalborg University (AAU), Aalto University (AALTO), KTH Royal Institute of Technology (KTH), the University of Iceland (UoI), and the University of Oslo (UiO).

The constellation is unique in that the network covers the field of sound and music from the “soft” to the “hard”, spanning the arts and humanities as well as the social and natural sciences, with a high level of technological competence.

Sound and music computing (SMC) is a research field that studies the whole sound and music communication chain from a multidisciplinary point of view. By combining scientific, technological and artistic methodologies it aims at understanding, modeling and generating sound and music through computational approaches.
smcnetwork.org/

What is SMC?
The Sound and Music Computing (SMC) research field approaches the whole sound and music communication chain from a multidisciplinary point of view. By combining scientific, technological and artistic methodologies it aims at understanding, modelling and generating sound and music through computational approaches.
There are numerous examples of how the SMC field has proven its huge financial potential in recent decades. For example, SMC innovations led to the development of an entirely new business line in music consumption, with Apple launching its iPod music players and the iTunes online music store. It can be argued that this shift from computers to music and sound started Apple’s route to becoming the world’s most valuable company. Part of the success of smartphones, including the iPhone, is that the phone became the primary music player for younger generations. SMC research has been essential for this development, both conceptually and technologically.
The shift in music consumption has continued with music streaming services, pushed forward by the Swedish company Spotify and the Norwegian company Wimp (now Tidal). In less than ten years they re-energized the economy of a piracy-ridden music industry and launched the idea of selling music as a “service” in which consumers have access to millions of songs from their devices. This would not have been possible without SMC research in digital audio coding technology.
Today’s research in the SMC community is likely to lead to similar technological and cultural changes in the coming decades. Results from music information retrieval will make it possible to search within the content of the music itself, new modes of music interaction will emerge, and there is a continuing need to improve the quality of sound as it is stored, transmitted and played back on different types of technology. Fortunately, many Nordic SMC researchers are at the forefront of these developments.

Results

Perception of Mechanical Sounds Inherent to Expressive Gestures of a NAO Robot

Supplementary material: data, sounds and videos

From Vocal-Sketching to Sound Models

Supplementary material: code, transcriptions, sounds and videos

Sonic Characteristics of Robots in Films

Supplementary material: videos

Team

Funding

  • NordForsk’s Nordic University Hub “Nordic Sound and Music Computing Network (NordicSMC)”, project number 86892.

Duration of the project: 2018-2023

Publications

[1]
A. B. Latupeirissa and R. Bresin, "PepperOSC: enabling interactive sonification of a robot's expressive movement," Journal on Multimodal User Interfaces, vol. 17, no. 4, pp. 231-239, 2023.
[2]
C. Panariello and R. Bresin, "Sonification of Computer Processes : The Cases of Computer Shutdown and Idle Mode," Frontiers in Neuroscience, vol. 16, 2022.
[3]
E. Myresten, D. Larson Holmgren and R. Bresin, "Sonification of Twitter Hashtags Using Earcons Based on the Sound of Vowels," in Proceedings of the 2nd Nordic Sound and Music Computing Conference, 2021.
[5]
R. Bresin et al., "Robust Non-Verbal Expression in Humanoid Robots: New Methods for Augmenting Expressive Movements with Sound," in Workshop on Sound in Human-Robot Interaction at HRI 2021, 2021.
[6]
A. B. Latupeirissa and R. Bresin, "Understanding non-verbal sound of humanoid robots in films," in Workshop on Mental Models of Robots at HRI 2020 in Cambridge, UK, Mar 23rd 2020, 2020.
[7]
A. B. Latupeirissa, C. Panariello and R. Bresin, "Exploring emotion perception in sonic HRI," in Proceedings of the 17th Sound and Music Computing Conference, 2020, pp. 434-441.
[8]
C. Panariello et al., "From vocal sketching to sound models by means of a sound-based musical transcription system," in Proceedings of the Sound and Music Computing Conferences, 2019, pp. 167-173.
[9]
A. B. Latupeirissa, E. Frid and R. Bresin, "Sonic characteristics of robots in films," in Proceedings of the 16th Sound and Music Computing Conference, 2019, pp. 1-6.
[10]
E. Frid, R. Bresin and S. Alexanderson, "Perception of Mechanical Sounds Inherent to Expressive Gestures of a NAO Robot - Implications for Movement Sonification of Humanoids," in Proceedings of the 15th Sound and Music Computing Conference, 2018.
[11]
A. E. Vijayan et al., "Using Constrained Optimization for Real-Time Synchronization of Verbal and Nonverbal Robot Behavior," in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 1955-1961.
[12]
S. Alexanderson et al., "Mimebot—Investigating the Expressibility of Non-Verbal Communication Across Agent Embodiments," ACM Transactions on Applied Perception, vol. 14, no. 4, 2017.