Increasing the Efficiency in the Creation of Procedural Audio Models for Video Games

The use of procedural audio in video games presents several benefits, especially in highly interactive environments such as VR, where haptic controllers are commonly used.

A synthesizer that changes its parameters in real time depending on the player's actions, the geometry of the game object, or its material can increase immersion, extend the current sound design palette, reduce memory demands, and improve the way audio responds within the game. The main objective of this research is to find more efficient strategies for creating procedural audio models for video games and, with this goal in mind, to improve current procedural audio models in terms of sound quality and interactivity. The research will use physically informed synthesis in the current video game context, with a focus on improving human-computer interaction. It will explore DSP and machine learning approaches, focusing both on the creation of starting points from which a sound designer can develop a synthesis model and on end-to-end models.
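Modal synthesis, the technique evaluated in the publication below, is a common form of physically informed synthesis for impact sounds: a struck object is modelled as a sum of exponentially decaying sinusoidal modes whose frequencies, decay rates, and amplitudes depend on the object's geometry and material. A minimal sketch in Python/NumPy follows; the specific modal values are illustrative placeholders, not measured data.

```python
import numpy as np

def modal_impact(freqs, decays, amps, dur=1.0, sr=44100):
    """Synthesize an impact sound as a sum of damped sinusoids.

    freqs  -- modal frequencies in Hz
    decays -- per-mode decay rates in 1/s (larger = faster damping)
    amps   -- per-mode initial amplitudes
    """
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, decays, amps):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    # Normalize to [-1, 1] for playback
    return out / np.max(np.abs(out))

# Hypothetical modal data loosely resembling a small struck metal object
sound = modal_impact(freqs=[440.0, 1210.0, 2380.0],
                     decays=[3.0, 5.0, 8.0],
                     amps=[1.0, 0.5, 0.25])
```

In a game context, the modal parameters would be driven at runtime by the engine (impact velocity scaling the amplitudes, material presets selecting frequencies and damping), which is what makes the approach interactive rather than sample-based.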

Team
Adrián Barahona, PhD Student, University of York (UK)
Dr Jez Wells (Principal Supervisor), University of York (UK)
Dr Sandra Pauletto (Co-supervisor), KTH

Funding
Engineering and Physical Sciences Research Council (EPSRC), UK, Centre for Doctoral Training in Intelligent Games & Game Intelligence (IGGI) [EP/L015846/1]

Publications

[1]
A. Barahona and S. Pauletto, "Perceptual Evaluation of Modal Synthesis for Impact-Based Sounds," in Sound and Music Computing Conference (SMC), 2019.