
Machine learning for animal sounds: from automatic bird recognition to vocal behaviour analysis

Time: Wed 2019-01-23 15.30

Location: Fantum

Participating: Dan Stowell


Abstract:
Machine learning has revolutionised automatic speech recognition. Can it
do the same for other sounds we hear, such as bird sounds? This is an
urgent problem, since wildlife monitoring is crucial for understanding
massive declines in wildlife populations, as well as future population
movements caused by climate change. Further, bird sound sequences are
structurally different from human vocal sequences, so new analysis
methods are needed.

In this talk we will look at state-of-the-art methods in machine
learning for analysing audio signals, and how we have applied them,
including in a birdsong recognition app used by thousands of people
around the UK. We will also describe new methods we have developed to
dig into the fine detail of sound recordings and to analyse sound
sequences with multiple simultaneous vocalisers. The methods described
can be applied to a wide range of tasks.
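
As a rough illustration of the kind of pipeline such work often builds
on (a generic sketch, not the specific methods presented in this talk),
the following shows a minimal bird-sound detector assuming
mel-spectrogram features and a small convolutional network; the file
name and all parameter values are hypothetical.

    # Minimal sketch of a generic bird-sound detection pipeline:
    # log-mel spectrogram features fed to a tiny CNN. Illustrative
    # only; not the speaker's actual method.
    import numpy as np
    import librosa
    import torch
    import torch.nn as nn

    def melspec(path, sr=22050, n_mels=64):
        """Load an audio file and compute a log-scaled mel spectrogram."""
        y, sr = librosa.load(path, sr=sr)
        S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        return librosa.power_to_db(S, ref=np.max)

    class BirdDetector(nn.Module):
        """Tiny CNN mapping a mel spectrogram to a presence score."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # pool over time and frequency
            )
            self.fc = nn.Linear(32, 1)

        def forward(self, x):  # x: (batch, 1, n_mels, frames)
            h = self.conv(x).flatten(1)
            return torch.sigmoid(self.fc(h))  # P(bird present)

    # Usage (hypothetical recording):
    # spec = melspec("recording.wav")
    # x = torch.tensor(spec).unsqueeze(0).unsqueeze(0).float()
    # print(BirdDetector()(x).item())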


Bio:
Dan Stowell is a senior researcher in machine listening, which means
using computation to understand sound signals. He co-leads the Machine
Listening Lab at Queen Mary University of London, based in the Centre
for Digital Music, and is also a Turing Fellow at the Alan Turing
Institute. Dan has worked on voice, music and environmental soundscapes,
and is currently leading a five-year EPSRC fellowship project on the
automatic analysis of bird sounds. His first degree was from Cambridge
University, and his PhD from Queen Mary University of London.