
Safety Aspects of Data-Driven Control in Contact-Rich Manipulation

Time: Fri 2022-03-04 09.00

Location: U1, Brinellvägen 26, Stockholm

Language: English

Doctoral student: Ioanna Mitsioni, Robotics, Perception and Learning (RPL)

Opponent: Maximo Roa, DLR German Aerospace Center

Supervisor: Danica Kragic, Centre for Autonomous Systems (CAS)

QC 20220203


A crucial step towards robot autonomy, in environments other than strictly regulated industrial ones, is to create controllers capable of adapting to diverse conditions. Human-centric environments are filled with a plethora of objects with very distinct properties, yet humans can manipulate them without painstakingly modelling the interaction dynamics. Furthermore, we do not need an explicit model to safely complete our tasks; rather, we rely on an intuition about the evolution of the interaction, built up over multiple repetitions of the same task.

Accurately translating this ability into how we control robots in contact-rich tasks is almost infeasible if we rely on controllers that operate on analytical models of the contacts. Instead, it is advantageous to use data-driven techniques that approximate the models from interactions, much as humans do, and capture the varying dynamics with a single model. However, for this to be a feasible alternative, we need to consider the safety aspects that arise when we move away from rigorous mathematical models and replace them with approximate data-driven ones.

This thesis identifies three safety aspects of data-driven control in contact-rich manipulation: good predictive performance, increased interpretability of the models, and explicit consideration of safe inputs in the face of modelling errors or uninterpretable predictions. The first point is addressed through a model-training scheme that improves long-term predictions in a food-cutting task. The experiments show that models trained this way adapt efficiently to different dynamics and that their prediction error scales better with longer horizons. The second point is addressed by introducing a framework that evaluates data-driven classification models using interpretability techniques. Interpreting the model's decisions helps anticipate failure cases before the model is deployed on the robot, as well as understand what the models have learned. Finally, the third point is addressed by learning sets of safe states from data. These safe sets are then used to avoid dangerous control inputs in a control scheme that is flexible, adapts to dynamic variations, and effectively encourages the safety of the system.
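To illustrate the third idea, the sketch below shows how a learned safe set can filter control inputs before they are applied. This is not the thesis's actual method: the ball-shaped safe set, the toy linear dynamics, and the candidate-input search are all simplifying assumptions made here for illustration. The point it demonstrates is the general pattern of predicting the next state under a candidate input and rejecting inputs whose predictions leave the safe set.

```python
import numpy as np

def in_safe_set(state, safe_center, safe_radius):
    # Membership test for a hypothetical ball-shaped safe set; in practice
    # the set would be learned from interaction data.
    return np.linalg.norm(state - safe_center) <= safe_radius

def dynamics(state, u):
    # Toy linear dynamics standing in for a learned forward model.
    return state + 0.1 * u

def safe_filter(state, nominal_u, candidate_inputs, safe_center, safe_radius):
    """Return the nominal input if its predicted next state is safe;
    otherwise the safe candidate closest to it, or zero as a fallback."""
    if in_safe_set(dynamics(state, nominal_u), safe_center, safe_radius):
        return nominal_u
    safe_candidates = [
        u for u in candidate_inputs
        if in_safe_set(dynamics(state, u), safe_center, safe_radius)
    ]
    if not safe_candidates:
        return np.zeros_like(nominal_u)  # conservative fallback input
    return min(safe_candidates, key=lambda u: np.linalg.norm(u - nominal_u))
```

Used in a control loop, this filter leaves the nominal controller untouched whenever it is already safe, which is what lets the scheme stay flexible while still discouraging dangerous inputs near the boundary of the safe set.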