Robust Learning for Safe Control

Time: Wed 2021-05-19 16.00 - 17.00

Location: Online (Zoom): https://kth-se.zoom.us/j/69271183114

Participating: Prof. Nikolai Matni, University of Pennsylvania

Bio: Nikolai Matni is an Assistant Professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania, where he is also a member of the Department of Computer and Information Sciences (by courtesy), the GRASP Lab, the PRECISE Center, and the Applied Mathematics and Computational Science graduate group. Prior to joining Penn, Nikolai was a postdoctoral scholar in EECS at UC Berkeley; he has also held a postdoctoral scholar position in Computing and Mathematical Sciences at Caltech. He received his Ph.D. in Control and Dynamical Systems from Caltech in June 2016. He also holds B.A.Sc. and M.A.Sc. degrees in Electrical Engineering from the University of British Columbia, Vancouver, Canada. His research interests broadly encompass the use of learning, optimization, and control in the design and analysis of safety-critical, data-driven autonomous systems. Nikolai is a recipient of the NSF CAREER Award (2021), a Google Research Scholar Award (2021), the IEEE ACC 2017 Best Student Paper Award (as co-advisor), and the IEEE CDC 2013 Best Student Paper Award (first ever sole author winner).

Abstract: Designing autonomous systems that are simultaneously high-performing, adaptive, and provably safe remains an open problem. In this talk, we will argue that meeting this goal requires new theoretical and algorithmic tools that blend the stability, robustness, and safety guarantees of robust control with the flexibility, adaptability, and performance of machine and reinforcement learning. We will highlight our progress towards developing such a theoretical foundation of robust learning for safe control in the context of two case studies: (i) efficiently learning stability certificates (e.g., Lyapunov or barrier functions) from data, and (ii) developing novel robust imitation learning algorithms that guarantee that the safety and stability properties of the expert policy are transferred to the learned policy. In both cases, we will emphasize the interplay between robust learning and robust stability and its consequences for the sample complexity and generalizability of the resulting learning-based control algorithms.
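
To give a feel for case study (i), the sketch below shows one simple instance of obtaining a stability certificate from data. It is not the speaker's method: it assumes a linear system, a quadratic certificate V(x) = x'Px, and synthetic data (the matrix A_true, noise level, and sample sizes are all illustrative choices). The idea is to fit the dynamics by least squares, compute a Lyapunov certificate for the identified model, and then check the decrease condition on held-out states.

```python
# Minimal sketch: learn a quadratic Lyapunov certificate from data.
# Assumptions (not from the talk): linear dynamics, V(x) = x' P x,
# synthetic data generated from a hypothetical A_true.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)

# Stable discrete-time dynamics, unknown to the learner: x_{k+1} = A_true x_k + noise.
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.8]])
n = A_true.shape[0]

# Collect state-transition data.
X = rng.standard_normal((200, n))                              # sampled states x_k
X_next = X @ A_true.T + 0.01 * rng.standard_normal(X.shape)    # observed x_{k+1}

# Step 1: identify the dynamics by least squares, A_hat = argmin ||X_next - X A'||.
W, *_ = np.linalg.lstsq(X, X_next, rcond=None)
A_hat = W.T

# Step 2: compute a quadratic certificate for the identified model:
# P > 0 satisfying A_hat' P A_hat - P = -Q for a chosen Q > 0.
Q = np.eye(n)
P = solve_discrete_lyapunov(A_hat.T, Q)

# Step 3: verify the decrease condition V(x_{k+1}) < V(x_k) on held-out states
# simulated through the (noise-free) true dynamics.
X_test = rng.standard_normal((100, n))
X_test_next = X_test @ A_true.T
V = np.einsum("bi,ij,bj->b", X_test, P, X_test)
V_next = np.einsum("bi,ij,bj->b", X_test_next, P, X_test_next)
print("certificate decreases on all test states:", bool(np.all(V_next < V)))
```

In this toy setting, robustness of the certificate to the identification error in A_hat is exactly the kind of interplay between robust learning and robust stability that the abstract refers to; the nonlinear and finite-sample versions of this question are where the talk's contributions lie.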

Belongs to: Decision and Control Systems