Research overview
Computational Aesthetics
In this theme we explore the aesthetic aspects of human communicative behavior. For example, what are the mechanisms of communication between a musical conductor and an orchestra, and how do the musicians interpret the conductor's motion? Why are humans stimulated by watching a dancer? And how can a completely different embodiment, e.g., a swarm of drones, express feelings and attitudes while performing on stage? We investigate these questions in collaboration with a range of performing arts professionals: musicians, conductors, and dancers.
Projects
- OrchestrAI: Deep generative models of the communication between conductor and orchestra (WASP, SeRC 2023-present)
Group members
- Mert Mermerci (PhD student)
- Pranav Rajan (MSc student)
- Bin Zhou (MSc student)
AI for Life Science
In this theme, the main focus is on how human cognitive processes and health status can be inferred from observable behavior. Past and present projects include computerized analysis of cognitive decline and motion analysis to detect motor disease in infants.
Projects
- The relation between motion and cognition in infants (SeRC 2023-present)
- UNCOCO: UNCOnscious COmmunication (WASP 2023-present)
Group members
- Chen Ling (PhD student)
- Ingrid Strohm (affiliated MSc student)
AI for Animal Science
In this area we develop models of animal behavior and non-verbal communication. This includes modeling dog communication and automatic detection of signs of pain in horses from video. An important current strand of research is our work on creating accurate 3D pose and shape models of horses and dogs.
Projects
- ANITA: ANImal TrAnslator (VR 2024-present)
- MARTHA: MARkerless 3D capTure for Horse motion Analysis (KTH, FORMAS 2020-present)
Group members
- Theo Wieland (PhD student)
- João Moreira Alves (affiliated PhD student)
- Jeanne Parmentier (affiliated PhD student)
Embodied AI
In this theme we develop methodologies for robots, cars, dialogue systems, and other autonomous agents to perceive the world and create meaningful computer representations of it from sensor data, primarily vision. My earlier research concerned affordances, object-action recognition, and robot learning from human demonstration. Recent projects have a more general Computer Vision and Machine Learning focus.
Projects
- Evaluation of generative models (WASP 2024-present)
- Generative AI for the creation of artificial spiderwebs (WASP, DDLS 2023-present)
- STING: Synthesis and analysis with Transducers and Invertible Neural Generators (WASP 2022-present)
Group members
- Filippo De Girolamo (Postdoc)
- Neeru Dubey (Postdoc)
- Siyuan Yang (Postdoc)
- Gui Vasconcelos (affiliated Postdoc)
- Silvia Arellano García (PhD student)
- Yifan Lu (PhD student)
- Erik Lidbjörk (MSc student)
- Selsabeel Mohamed (MSc student)
- Sarah Secci (affiliated MSc student)