Human motion prediction using wearable sensors and machine learning
Time: Tue 2021-10-26 09.00
Subject area: Engineering Mechanics
Doctoral student: Binbin Su, Engineering Mechanics, MoveAbility Lab
Opponent: Assoc. Professor Neil Cronin, University of Jyväskylä, Finland
Supervisors: Professor Elena M. Gutierrez-Farewik, Engineering Mechanics; Assoc. Professor Christian Smith, Computer Science
Accurately measuring and predicting human movement is important in many contexts, such as rehabilitation and the design of assistive devices. Thanks to the development and availability of a wide variety of sensors, scientists can study human movement in many settings and capture characteristic properties unique to individuals as well as to larger study populations. Inertial measurement units (IMUs), which contain accelerometers and gyroscopes, measure segment accelerations and angular velocities, and electromyography (EMG) sensors measure muscle excitation. These wearable sensors can be worn simultaneously and can record data at high frequency, potentially yielding large amounts of data. Machine learning (ML) is an effective tool for extracting prominent features and making statistical inferences from data, and it has the potential to enhance human motion analysis through data-driven prediction. The overall aim of this thesis was to predict human motion through data-driven approaches and musculoskeletal simulations using wearable sensors and ML.
A deterministic machine learning approach using a convolutional neural network (CNN) was first proposed to segment gait cycles into five phases based on experimental IMU data from subjects at different walking speeds. The proposed CNN captured kinematic characteristics in raw IMU data, such as linear acceleration, rotational velocity, and magnetic field, and distinguished different gait phases. In recognizing all gait phases, it achieved an overall accuracy of 97.5% with a well-trained model, with up to 99.6% accuracy in detecting the swing phase. Our results also showed that walking speed did not have a major influence on overall gait phase recognition accuracy in people with typical gait patterns. However, while swing was the most accurately recognized phase, terminal stance was the least accurately recognized, particularly at lower walking speeds.
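The abstract does not detail the CNN architecture, so as a rough illustration only, classifying a window of multi-channel IMU data into the five gait phases might be sketched as below. All dimensions, the channel layout, the window length, and the (random, untrained) weights are hypothetical, not the thesis's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 9   # e.g. 3-axis accelerometer + gyroscope + magnetometer (assumed layout)
WIN_LEN = 100    # samples per sliding window (hypothetical)
N_PHASES = 5     # loading response, midstance, terminal stance, pre-swing, swing

def conv1d(x, w, b):
    """Valid 1-D convolution: x (C_in, T), w (C_out, C_in, K), b (C_out,)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((c_out, t_out))
    for t in range(t_out):
        # Contract over input channels and kernel taps simultaneously.
        out[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1])) + b
    return out

def gait_phase_probs(window, params):
    """Tiny CNN: conv -> ReLU -> global average pooling -> softmax over 5 phases."""
    h = np.maximum(conv1d(window, params["w1"], params["b1"]), 0.0)
    pooled = h.mean(axis=1)                      # global average pooling over time
    logits = params["w2"] @ pooled + params["b2"]
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e / e.sum()

# Untrained random weights, for illustration only.
params = {
    "w1": rng.normal(0, 0.1, (16, N_CHANNELS, 7)),
    "b1": np.zeros(16),
    "w2": rng.normal(0, 0.1, (N_PHASES, 16)),
    "b2": np.zeros(N_PHASES),
}

window = rng.normal(size=(N_CHANNELS, WIN_LEN))  # stand-in for a raw IMU window
probs = gait_phase_probs(window, params)         # one probability per gait phase
```

In practice the reported accuracies (97.5% overall, 99.6% for swing) would come from training such a network on labeled gait data; the sketch only shows the forward pass.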
We then developed a long short-term memory (LSTM) network to predict both gait phase (loading response, midstance, terminal stance, pre-swing, and swing) and gait trajectory (angular velocities of the thigh, shank, and foot segments) up to 200 ms into the future, based on immediately prior data. The overall accuracy of gait phase prediction was up to 94%, with the swing phase the most accurately predicted (97%). Our results also showed a high correlation between the predicted and true angular velocities of the thigh, shank, and foot segments.
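The core of such a forecaster is an LSTM cell that consumes the prior sensor window step by step and then emits a prediction from its hidden state. The following minimal sketch (untrained random weights; assumed toy dimensions, not the thesis's network) shows that mechanism for a 3-segment angular-velocity output:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update; W, U, b stack the input/forget/cell/output gates."""
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c_new = f * c + i * np.tanh(g)   # gated cell-state update
    h_new = o * np.tanh(c_new)       # gated hidden-state output
    return h_new, c_new

N_IN = 9     # sensor channels per time step (assumed)
N_HID = 32   # hidden units (hypothetical)
N_OUT = 3    # predicted angular velocities: thigh, shank, foot

W = rng.normal(0, 0.1, (4 * N_HID, N_IN))
U = rng.normal(0, 0.1, (4 * N_HID, N_HID))
b = np.zeros(4 * N_HID)
V = rng.normal(0, 0.1, (N_OUT, N_HID))   # linear readout head

seq = rng.normal(size=(50, N_IN))        # stand-in for the immediately prior data
h, c = np.zeros(N_HID), np.zeros(N_HID)
for x in seq:                            # roll the cell over the input window
    h, c = lstm_step(x, h, c, W, U, b)
pred = V @ h                             # forecast for a future time step
```

Predicting a full 200 ms horizon would repeat the readout for several future steps (or feed predictions back in autoregressively); the sketch shows a single-step forecast.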
People walk on different terrains daily, for instance level-ground walking, ramp or stair ascent and descent, and stepping over obstacles. Movement patterns change as people move from one terrain to another, i.e., they transition from one locomotion mode to another. Locomotion modes are typically labeled between two gait events, foot contact (FC) and toe-off (TO). Since there is no exact instant that discriminates the transition between two locomotion modes, we identified TO as the critical gait event. We integrated locomotion mode prediction and gait event identification into one machine learning framework comprising two multilayer perceptrons (MLPs), using fused data from two types of wearable sensors, namely EMG sensors and IMUs. The first MLP successfully identified FC and TO; FC events were identified accurately, and a small number of misclassifications occurred only near TO events. A small time difference (2.5 and -5.3 ms for FC and TO, respectively) was found between predicted and true gait events. The second MLP correctly identified transitions to walking, ramp ascent, and ramp descent with best aggregate accuracies of 96.3%, 90.1%, and 90.6%, respectively, with sufficient prediction time before the critical events, using EMG and IMU signals as input features.
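The two-MLP framework can be pictured as two small feed-forward classifiers sharing one fused EMG+IMU feature vector: one scoring gait events (FC, TO, or neither) and one scoring the upcoming locomotion mode. A minimal sketch with hypothetical feature and layer sizes and untrained random weights:

```python
import numpy as np

rng = np.random.default_rng(2)

def mlp_forward(x, layers):
    """Feed-forward pass: ReLU hidden layers, softmax output layer."""
    for w, b in layers[:-1]:
        x = np.maximum(w @ x + b, 0.0)
    w, b = layers[-1]
    logits = w @ x + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

def make_mlp(sizes):
    """Random (untrained) weights for a stack of fully connected layers."""
    return [(rng.normal(0, 0.1, (n_out, n_in)), np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

N_FEAT = 20                              # fused EMG + IMU features (assumed size)
event_mlp = make_mlp([N_FEAT, 32, 3])    # classes: FC, TO, neither
mode_mlp = make_mlp([N_FEAT, 32, 3])     # walking, ramp ascent, ramp descent

features = rng.normal(size=N_FEAT)       # stand-in for one fused feature vector
event_probs = mlp_forward(features, event_mlp)
mode_probs = mlp_forward(features, mode_mlp)
```

Running both classifiers on the same feature stream is what lets the framework time its mode prediction relative to the detected TO event.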
Data-driven approaches using wearable sensors cannot model the mechanisms linking neuromuscular control to wearable sensor outputs. Musculoskeletal simulation, on the other hand, can explain the interactions between muscular control, kinematics, and kinetics in human motion. We therefore integrated a reinforcement learning algorithm, a reflex-based controller, and a musculoskeletal model comprising trunk, pelvis, and leg segments to simulate reasonably realistic human walking at different speeds. Using the same approach, we further generated pathological gaits that may result from ankle plantarflexor weakness. The simulated hip and knee angles correlated reasonably well with reported experimental data, though less so for ankle kinematics. The computed excitations of the major lower limb muscles largely corresponded to the expected on-off timing of these muscles during walking.
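The essence of a reflex-based controller is that muscle excitation is not prescribed directly but generated by delayed sensory feedback, with the feedback gains being the parameters a reinforcement learning algorithm can tune. The scalar toy loop below illustrates one such reflex, positive force feedback on a muscle, with a first-order activation model standing in for the musculoskeletal dynamics; all gains, delays, and time constants are hypothetical, not the thesis's model:

```python
import numpy as np

DT = 0.01          # integration step [s]
DELAY = 0.02       # neural transmission delay [s] (assumed)
GAIN = 0.5         # reflex feedback gain (hypothetical; tuned by RL in practice)
E0 = 0.05          # baseline (pre-stimulation) excitation
STEPS = 200
delay_steps = int(DELAY / DT)

force = np.zeros(STEPS)        # normalized muscle force history
excitation = np.zeros(STEPS)

for t in range(1, STEPS):
    # Positive force feedback reflex: excitation driven by delayed muscle force.
    f_delayed = force[t - delay_steps] if t >= delay_steps else 0.0
    excitation[t] = np.clip(E0 + GAIN * f_delayed, 0.0, 1.0)
    # First-order activation dynamics (tau = 50 ms) as a stand-in muscle model.
    force[t] = force[t - 1] + DT / 0.05 * (excitation[t] - force[t - 1])
```

With these gains the loop settles at a stable force level; weakening the muscle (e.g. scaling its force output down to mimic plantarflexor weakness) shifts that equilibrium, which is the kind of change the thesis's simulations explore at the whole-model scale.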
In summary, the studies in this thesis describe and predict human movement with wearable sensors and machine learning algorithms. Based on wearable sensor data alone, we detected and predicted gait phases and events, predicted segment movements, and identified intended transitions between walking modes during the stance phase of the preceding gait cycle on the same side, before the step into the new mode. This has important potential implications for continuous monitoring and analysis of a person's movements outside a lab environment. The musculoskeletal simulations provided insight into the relationship between neuromuscular control and sensory feedback, which could also be applied to better understand and predict likely changes in gait when neuromuscular control changes. Our approaches combining wearable sensors and machine learning could ultimately be applied to facilitate the design of exoskeletons that provide seamless assistance to people with motor disorders.