Developing Data-Driven Models for Understanding Human Motion
Time: Fri 2024-02-16 14.00
Location: F3 (Flodis), Lindstedtsvägen 26 & 28, Stockholm
Video link: https://kth-se.zoom.us/j/62347635904
Language: English
Subject area: Computer Science
Doctoral student: Wenjie Yin, Robotics, Perception and Learning (RPL)
Opponent: Professor Anders Heyden, Computer Vision and Machine Learning, Lund University, Lund, Sweden
Supervisor: Associate Professor Mårten Björkman, Robotics, Perception and Learning (RPL); Professor Danica Kragic, Robotics, Perception and Learning (RPL)
Abstract
Humans are the primary subjects of interest in the realm of computer vision. Specifically, perceiving, generating, and understanding human activities have long been a core pursuit of machine intelligence. Over the past few decades, data-driven methods for modeling human motion have demonstrated great potential across various domains of interactive media and social robotics. Despite these impressive achievements, challenges remain in analyzing multi-agent and multimodal behaviors and in producing high-fidelity, highly varied motions. This complexity arises because human motion is inherently dynamic, uncertain, and intertwined with its environment. This thesis introduces the challenges of and data-driven methods for understanding human motion, and then elaborates on the contributions of the included papers. The thesis is organized mainly in ascending order of complexity: recognition, synthesis, and transfer, which cover the tasks of perceiving, generating, and understanding human activities.
Firstly, we present methods to recognize human motion (Paper A). We consider a conversational group scenario in which people gather and stand in an environment to converse. Using transformer-based networks and graph convolutional networks, we demonstrate how spatial-temporal group dynamics can be modeled and perceived at both the individual and group levels. Secondly, we investigate probabilistic autoregressive approaches for generating controllable human locomotion. We employ deep generative models, namely normalizing flows (Paper B) and diffusion models (Paper C), to generate and reconstruct 3D skeletal human poses over time. Finally, we address the problem of motion style transfer. We propose style transfer systems that transform motion style while attempting to preserve motion context, using GAN-based (Paper D) and diffusion-based (Paper E) methods. Compared with previous research, which mainly focuses on simple locomotion or exercise movements, we consider more complex dance movements and multimodal information.
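To make the generative part of this summary more concrete, the sketch below illustrates the general idea behind probabilistic autoregressive motion generation with a conditional normalizing flow: the distribution of the next skeletal pose is modeled given an encoding of past frames. This is only a minimal illustration of the technique, not the architecture of Papers B or C; all class names, dimensions, and layer sizes are hypothetical.

    import torch
    import torch.nn as nn

    class ConditionalCoupling(nn.Module):
        """One affine coupling layer: transforms half of the pose vector
        conditioned on the other half and on a context encoding of past poses."""
        def __init__(self, pose_dim, ctx_dim, hidden=128):
            super().__init__()
            self.half = pose_dim // 2
            self.net = nn.Sequential(
                nn.Linear(self.half + ctx_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, (pose_dim - self.half) * 2),
            )

        def forward(self, x, ctx):
            x1, x2 = x[:, :self.half], x[:, self.half:]
            s, t = self.net(torch.cat([x1, ctx], dim=-1)).chunk(2, dim=-1)
            s = torch.tanh(s)                      # keep scales bounded for stability
            z2 = x2 * torch.exp(s) + t             # affine transform of the second half
            log_det = s.sum(dim=-1)                # log |det Jacobian| of this layer
            return torch.cat([x1, z2], dim=-1), log_det

    class AutoregressiveFlow(nn.Module):
        """Stack of coupling layers modeling p(next pose | past poses)."""
        def __init__(self, pose_dim, ctx_dim, n_layers=4):
            super().__init__()
            self.layers = nn.ModuleList(
                ConditionalCoupling(pose_dim, ctx_dim) for _ in range(n_layers))

        def log_prob(self, x, ctx):
            log_det = 0.0
            for i, layer in enumerate(self.layers):
                if i % 2 == 1:                     # reverse dims so all of them get transformed
                    x = x.flip(-1)
                x, ld = layer(x, ctx)
                log_det = log_det + ld
            base = torch.distributions.Normal(0.0, 1.0)
            return base.log_prob(x).sum(dim=-1) + log_det

    # Training step with hypothetical shapes: maximize the log-likelihood of the
    # next pose given a flattened window of past poses.
    pose_dim, ctx_dim = 72, 720          # e.g. 24 joints * 3 coords, 10 past frames
    flow = AutoregressiveFlow(pose_dim, ctx_dim)
    x_next = torch.randn(32, pose_dim)   # stand-in for ground-truth next poses
    context = torch.randn(32, ctx_dim)   # stand-in for encoded motion history
    loss = -flow.log_prob(x_next, context).mean()
    loss.backward()

At generation time, the same model would be run frame by frame, sampling a pose from the learned conditional distribution and feeding it back into the context; a diffusion model would replace the flow's exact likelihood with an iterative denoising process.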
In summary, this thesis proposes methods that can effectively perceive, generate, and transfer 3D human motion. In terms of network architectures, we employ graph formulations to exploit the correlations within human skeletons, thereby introducing an inductive bias through the graph structure. Additionally, we leverage transformers to handle long-term data dependencies and to weigh the importance of different data components. In terms of learning frameworks, we adopt generative models that represent joint distributions over relevant variables and multiple modalities and are flexible enough to cover a wide range of tasks. Our experiments demonstrate the effectiveness of the proposed frameworks by evaluating the methods on both our own collected dataset and public datasets, and we show how these methods apply to a variety of challenging tasks.
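As a purely illustrative reading of these architectural choices, the sketch below combines a graph convolution over skeleton joints (the graph inductive bias) with a transformer encoder over time (long-range temporal dependencies). It is not the thesis's actual model; the adjacency matrix, feature dimensions, and class names are assumptions made for the example.

    import torch
    import torch.nn as nn

    class SkeletonGraphConv(nn.Module):
        """Graph convolution over skeleton joints: features are mixed along
        the bones encoded in a normalized adjacency matrix."""
        def __init__(self, in_dim, out_dim, adjacency):
            super().__init__()
            self.register_buffer("adj", adjacency)            # (J, J), with self-loops
            self.proj = nn.Linear(in_dim, out_dim)

        def forward(self, x):                                  # x: (batch, time, joints, feat)
            x = torch.einsum("ij,btjf->btif", self.adj, x)     # aggregate neighboring joints
            return torch.relu(self.proj(x))

    class SpatialTemporalEncoder(nn.Module):
        """Graph convolution captures the skeletal structure; a transformer
        encoder captures long-range temporal dependencies."""
        def __init__(self, adjacency, feat=3, hidden=64, heads=4, layers=2):
            super().__init__()
            self.gcn = SkeletonGraphConv(feat, hidden, adjacency)
            n_joints = adjacency.shape[0]
            enc_layer = nn.TransformerEncoderLayer(
                d_model=hidden * n_joints, nhead=heads, batch_first=True)
            self.temporal = nn.TransformerEncoder(enc_layer, num_layers=layers)

        def forward(self, x):                                  # x: (batch, time, joints, feat)
            h = self.gcn(x)                                    # (batch, time, joints, hidden)
            h = h.flatten(2)                                   # (batch, time, joints * hidden)
            return self.temporal(h)                            # per-frame features over time

    # Toy usage with a hypothetical 5-joint chain skeleton:
    J = 5
    A = torch.eye(J)
    for i in range(J - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    A = A / A.sum(dim=1, keepdim=True)                         # simple row normalization
    model = SpatialTemporalEncoder(A)
    out = model(torch.randn(8, 30, J, 3))                      # 8 clips, 30 frames, 3D joints

The output features could then feed a classification head (recognition), a generative head (synthesis), or a style branch (transfer), which is how the same architectural ingredients recur across the tasks discussed above.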