Expressive Performance in Ensembles of Humans and Computers
Time: Wed 2015-05-27 13.15 - 15.00
Location: TMH, 5th floor, Lindstedtsvägen 24
Participating: Roger Dannenberg
Expressive timing in music performance has been studied mainly as the product of a single performer or perhaps the leader of an ensemble. I will describe two projects concerned with ensemble performance. The first, "Human-Computer Music Performance", aims to develop computer systems that can play music with humans. Studies of mostly steady-beat performances shed light on the stability of tempo, the precision of tapping, and how to synchronize computers and humans with both flexibility and accuracy. The second project, with Gus Xia, applies state-of-the-art machine learning techniques to create models of piano duet performance. These models capture elements of expressive timing within a performer's part as well as the timing adjustments each player makes to synchronize with the other. The best models can learn from a small number of rehearsals and outperform current computer accompaniment systems in estimating note onset times in the near future.
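To make the synchronization problem concrete, here is a minimal sketch of one common approach to beat prediction: fitting a line to the most recent human beat times and extrapolating to the next beat. This is an illustrative example, not the specific method from the talk; the function name `predict_next_beat` and the window size are hypothetical.

```python
import numpy as np

def predict_next_beat(beat_times, window=8):
    """Estimate the current beat period and predict the next beat time
    by least-squares fitting a line to the most recent beat times.
    Illustrative sketch only; not the talk's actual algorithm."""
    n = min(window, len(beat_times))
    recent = np.asarray(beat_times[-n:])
    idx = np.arange(n)
    # Fit: beat_time ~ period * beat_index + offset
    period, offset = np.polyfit(idx, recent, 1)
    next_time = period * n + offset  # extrapolate one beat ahead
    return next_time, period

# Example: a human tapping slightly faster than 120 BPM (0.5 s period)
taps = [0.00, 0.50, 0.99, 1.47, 1.96, 2.44]
next_time, period = predict_next_beat(taps)
print(f"next beat ~{next_time:.2f} s, period ~{period:.3f} s")
```

The window size captures the flexibility/accuracy trade-off mentioned above: a short window follows tempo changes quickly but is noisy, while a long window is stable but slow to adapt.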
--------------------------
Roger Dannenberg is Professor of Computer Science, Art, and Music at Carnegie Mellon University. He is well known for his computer music research, especially in real-time interactive systems. His pioneering work in computer accompaniment led to the SmartMusic system, now used by tens of thousands of music students. His other research interests include the application of machine learning to music style classification and the automation of music structure analysis. He is the co-creator of the well-known audio editor Audacity.