
IEEE Final Year PhD student Seminars

The purpose of this seminar series is twofold: first, to make new robotics research accessible and visible across industry and academia in Sweden, and second, to do this at a time when promising young researchers are about to make key career choices, such as deciding whether to go into industry or academia and whether to stay in Sweden or move abroad. Thus, if you want to hire a brilliant young mind, some (but not all) of these people might still be open to suggestions.

Finally, if you want to give a seminar, or know a PhD student who is in their last year, just let us know.

Presenters so far (details below)

  • Daniel Arnström
  • Matthias Mayr
  • Albin Dahlin
  • Sriharsha Bhat
  • Sanne van Waveren

Title: Reliable Active-Set Solvers for Real-Time MPC

Speaker: Daniel Arnström, Linköping University
Time: 15:00 on Friday the 10th of March
Link: https://youtu.be/VYuqE9JWK7o

Abstract:
In Model Predictive Control (MPC), control problems are formulated as optimization problems, allowing for constraints on actuators and system states to be directly accounted for. Implicitly defining a control law through an optimization problem does, however, make the evaluation of the control law more complex compared with classical PID and LQ controllers. As a result, determining the worst-case computational time for evaluating the control law becomes non-trivial, yet such worst-case bounds are essential for applying MPC to control safety-critical systems in real time, especially when the controller is implemented on limited hardware.
The optimization problems that need to be solved in linear MPC are often quadratic programs (QPs), and the optimization method used to solve them is often an active-set method.
In this talk we will present a recently developed complexity-certification framework for active-set QP solvers; this framework determines the exact worst-case computational complexity for a family of active-set solvers, which includes the recently developed active-set solver DAQP. In addition to being real-time certifiable, DAQP is efficient, can easily be warm-started, and is numerically stable, all of which are important properties for a solver used in real-time MPC applications.
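
As a rough illustration of the setting (this is not the speaker's code, nor DAQP itself), the sketch below poses a linear MPC problem for an assumed double-integrator as a convex QP using the generic cvxpy modelling layer; the model, horizon, weights and actuator limits are assumptions for the example, and a real-time implementation would instead call a tailored, certifiable active-set solver.

import numpy as np
import cvxpy as cp

# Assumed double-integrator model x_{k+1} = A x_k + B u_k, sampled at 0.1 s.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 20                      # prediction horizon (assumption)
Q = np.diag([10.0, 1.0])    # state weight
R = np.array([[0.1]])       # input weight
x0 = np.array([1.0, 0.0])   # current state measurement

X = cp.Variable((2, N + 1))
U = cp.Variable((1, N))

cost = 0
constraints = [X[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(X[:, k], Q) + cp.quad_form(U[:, k], R)
    constraints += [X[:, k + 1] == A @ X[:, k] + B @ U[:, k],
                    cp.abs(U[:, k]) <= 1.0]   # actuator limits enter as constraints

# The resulting problem is a convex QP; in real-time MPC it is re-solved at
# every sampling instant, which is why worst-case bounds matter.
cp.Problem(cp.Minimize(cost), constraints).solve()
u0 = U.value[:, 0]          # only the first input is applied to the plant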

Bio: 
Daniel Arnström is a final-year Ph.D. candidate at the Division of Automatic Control at Linköping University. His main research interests are in Model Predictive Control (MPC) and embedded optimization. An overarching objective of his Ph.D. is to ensure that optimization solvers employed in real-time MPC applications can reliably find a solution within a limited time frame.

Title: Skill-based Reinforcement Learning with Behavior Trees

Speaker: Matthias Mayr, Lund University
Time: 13:15 on Wednesday the 14th of December
Link: YouTube

Abstract:
Using skills that can be parameterized for the task at hand can be part of the answer to adapting robotic systems to the challenges of Industry 4.0. There are tools for planning skill sequences for long-term tasks as well as for incorporating known, explicit knowledge. However, skill sequences for contact-rich tasks in particular often contain tacit, implicit knowledge that is difficult to write down explicitly. This gap can be addressed by combining classical AI techniques, such as symbolic planning and reasoning, with reinforcement learning. Learning with the robot system and collecting data from executions can not only make certain tasks possible, but also speed up execution or minimize interaction forces. The presented work makes it possible to learn robot tasks in simulation and directly on the real robot system. It is integrated into a task planning and reasoning pipeline to leverage existing knowledge and to learn only the missing aspects. The learning formulation allows multiple objectives of a task to be formulated and learned concurrently. User priors or past experiences can be injected into the learning process, and the implementation with behavior trees allows for interpretable executions. Demonstrated on real robot tasks, the work shows a way for robot systems to efficiently learn behaviors that are robust, efficient and interpretable.
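
To give a flavour of the general idea (this is not the speaker's implementation), a parameterized skill can be a leaf in a behavior tree whose parameters are tuned by a learner against several task objectives at once. All names below, the toy rollout model, and the random-search learner standing in for reinforcement learning are assumptions made purely for this sketch.

import random

class MoveUntilContact:
    """Hypothetical parameterized skill used as a behavior-tree leaf."""
    def __init__(self, speed, force_threshold):
        self.speed = speed
        self.force_threshold = force_threshold

    def tick(self, robot):
        # Behavior-tree leaves report SUCCESS / FAILURE / RUNNING on every tick.
        if robot.measured_force() > self.force_threshold:   # hypothetical robot API
            return "SUCCESS"
        robot.move_down(self.speed)                          # hypothetical robot API
        return "RUNNING"

def simulated_rollout(params):
    # Toy stand-in for executing the skill in simulation or on the real robot:
    # gentler parameters keep interaction forces low but take longer.
    duration = 1.0 / params["speed"]
    peak_force = params["force_threshold"] + 50.0 * params["speed"]
    success = params["force_threshold"] < 10.0
    return {"success": success, "duration": duration, "peak_force": peak_force}

def score(rollout):
    # Several objectives considered concurrently: task success first,
    # then short execution time, then low interaction force.
    return (rollout["success"], -rollout["duration"], -rollout["peak_force"])

def learn_parameters(n_trials=50):
    # Random search as a stand-in for the actual learning algorithm.
    best_params, best_score = None, None
    for _ in range(n_trials):
        params = {"speed": random.uniform(0.01, 0.1),
                  "force_threshold": random.uniform(2.0, 15.0)}
        s = score(simulated_rollout(params))
        if best_score is None or s > best_score:
            best_params, best_score = params, s
    return best_params

print(learn_parameters())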

Bio:
Matthias Mayr studied electrical engineering and information technology at the Karlsruhe Institute of Technology (KIT) in Germany. Early in his studies he became affiliated with the robotics institute and wrote his bachelor's thesis using a Turtlebot in Halmstad. During his time at Siemens in Berkeley he learned about knowledge representation and implemented AR and VR applications. In 2018 he started his PhD in the field of industrial robots and skill-based systems at the Robotics and Semantic Systems group at Lund University, within the WASP research program. In his PhD he focuses on combining AI techniques such as symbolic planning and knowledge representation with reinforcement learning.

Title: Computationally efficient navigation in dynamic environments 

Speaker: Albin Dahlin, Chalmers
Time: 15:00 on Monday the 12th of December
Link: YouTube

Abstract:
Navigating autonomous agents to a goal position in a dynamic environment with both moving obstacles, such as humans and other autonomous systems, and static obstacles is a common problem in robotics. A popular paradigm in the field of motion planning is potential field (PF) methods, which are computationally lightweight compared with most other existing methods. Several PF variants can provide guarantees for obstacle avoidance combined with motion that converges to a goal position from any initial state. The convergence properties when moving in a world cluttered with obstacles commonly rely on two main assumptions: all obstacles are disjoint and have an appropriate (star-shaped) shape. Closely positioned obstacles may, however, end up with intersecting regions, since obstacles are typically inflated by the robot radius and possibly an extra safety margin. To preserve both collision avoidance and convergence properties in practice, the obstacle representations must therefore be adjusted online to fit the world assumptions.
An alternative approach for online collision avoidance, which has become popular with the increase in computational power, is Model Predictive Control (MPC). Compared with PF methods, MPC makes it easy to encode the system constraints and to encode "preferred motion behaviors" in a cost function.
This talk addresses how to properly adjust the workspace representation and looks further into how PF approaches can be combined with MPC to leverage both the lightweight convergence properties of PF and the simple MPC encoding of preferred system motion behaviors.
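
For readers unfamiliar with the PF paradigm, the sketch below takes bounded gradient steps on a standard attractive-plus-repulsive potential with circular obstacle approximations. It is a generic textbook-style illustration rather than the speaker's method; the gains, radii, influence distance and circular obstacle model are assumptions for the example.

import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                         influence=1.0, step=0.05):
    """One bounded gradient-descent step on an attractive + repulsive potential.
    `obstacles` is a list of (center, radius) pairs; circles stand in for the
    inflated obstacle regions discussed in the abstract."""
    grad = k_att * (pos - goal)                     # attractive term
    for center, radius in obstacles:
        diff = pos - center
        dist = np.linalg.norm(diff) - radius        # distance to the inflated boundary
        if 0.0 < dist < influence:                  # inside the influence zone
            grad += (k_rep * (1.0 / influence - 1.0 / dist) / dist**2
                     * diff / np.linalg.norm(diff))
        # dist <= 0 means the robot is inside an inflated region, e.g. because two
        # inflated obstacles intersect: exactly the case where the representation
        # must be adjusted online, as discussed in the talk.
    move = -grad
    speed = np.linalg.norm(move)
    if speed > 1.0:                                 # cap the update so steps stay bounded
        move = move / speed
    return pos + step * move

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
obstacles = [(np.array([2.5, 0.2]), 0.5)]
for _ in range(300):
    pos = potential_field_step(pos, goal, obstacles)
print(pos)   # for this toy setup the robot skirts the obstacle and ends near the goal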

Bio:
Albin Dahlin is a PhD student with the Division of Systems and Control, Department of Electrical Engineering at Chalmers University of Technology, working under the supervision of Associate Professor Yiannis Karayiannidis. His research is mainly focused on online motion planning. His other major research interests include programming by demonstration and multi-agent robotic systems.


Title: Real-Time Simulation and Control of Autonomous Underwater Vehicles for Hydrobatics

Speaker: Sriharsha Bhat, KTH
Time: 14:00 on Friday the 2nd of December
Link: YouTube

Abstract:
The term hydrobatics refers to the agile maneuvering of underwater vehicles. Hydrobatic capabilities can enable underwater robots to balance energy efficiency and precision maneuvering. This can open the door to exciting new use cases for autonomous underwater vehicles (AUVs) in inspecting infrastructure, under-ice sensing, adaptive sampling, docking, and manipulation. These ideas are being explored at KTH in Stockholm within the Swedish Maritime Robotics Centre (SMaRC), and Sriharsha will present his ongoing PhD work on hydrobatics in this talk. Modeling the flight dynamics of hydrobatic AUVs at high angles of attack is a key challenge; Simulink and Stonefish are used to perform real-time simulations of hydrobatic manoeuvres. Furthermore, these robots are underactuated systems, making it more difficult to obtain elegant control strategies; model predictive control (MPC) and reinforcement learning can be used to generate optimal control actions. The controllers and simulation models developed are tightly linked to SMaRC’s AUVs through ROS, enabling field deployment and experimental validation. Currently, the focus is on deploying hydrobatic AUVs in the use cases described above.
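
As a toy illustration of what a real-time dynamics update involves (the actual work uses full Simulink and Stonefish models, so everything below is a simplification), the sketch integrates a planar longitudinal AUV model with surge, heave and pitch states; the mass and damping values and the omission of Coriolis, added-mass coupling and restoring terms are assumptions for the example.

import numpy as np

def auv_step(eta, nu, tau, dt=0.01):
    """One Euler step of a toy longitudinal AUV model.
    eta = [x, z, theta]: position, depth and pitch in the world frame.
    nu  = [u, w, q]:     surge, heave and pitch-rate in the body frame.
    tau = [X, Z, M]:     thrust force, vertical force and pitch moment."""
    M_rb = np.diag([30.0, 40.0, 5.0])   # assumed mass/inertia incl. added mass
    D = np.diag([10.0, 20.0, 3.0])      # assumed linear damping
    theta = eta[2]
    # Kinematics: rotate body-frame velocities into the world frame.
    J = np.array([[np.cos(theta),  np.sin(theta), 0.0],
                  [-np.sin(theta), np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    eta = eta + dt * (J @ nu)
    # Dynamics: M_rb * nu_dot = tau - D * nu (Coriolis and restoring terms omitted).
    nu = nu + dt * np.linalg.solve(M_rb, tau - D @ nu)
    return eta, nu

# Example: constant thrust with a small pitch moment for 5 simulated seconds.
eta, nu = np.zeros(3), np.zeros(3)
for _ in range(500):
    eta, nu = auv_step(eta, nu, np.array([20.0, 0.0, 0.5]))
print(eta, nu)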
 
Bio:
Sriharsha Bhat has been a PhD student in Marine Robotics at KTH Royal Institute of Technology since 2018. He obtained his bachelor's degree in Mechanical Engineering from the National University of Singapore in 2013 and his master's degree in Vehicle Engineering from KTH in 2016. He has prior work experience as a Research Engineer at the Singapore-MIT Alliance for Research and Technology (Singapore) and as a Technology Development Engineer at Continental in Hannover, Germany. His research interests lie in simulation, planning and control of underwater robots in challenging applications including adaptive sampling, infrastructure inspection, seafloor imaging and glacier front mapping.

Title: Leveraging Non-Expert Feedback to Correct Robot Behaviors

Speaker: Sanne van Waveren
Time: 15:00 on the 23rd of November

Link: YouTube

Abstract: Robots that operate in human environments need the capability to adapt their behavior to new situations and people’s preferences while ensuring the safety of the robot and its environment. Most robots so far rely on pre-programmed behavior or machine learning algorithms trained offline. Due to the large number of possible situations robots might encounter, it becomes impractical to define or learn all behaviors prior to deployment, causing them to inevitably fail at some point in time.

Typically, experts are called in to correct the robot’s behavior, and existing correction approaches often do not provide formal guarantees on the system’s behavior to ensure safety. However, in many everyday situations we can leverage feedback from people who do not necessarily have programming or robotics experience, i.e., non-experts, to synthesize correction mechanisms that constrain the robot’s behavior to avoid failures and to encode people’s preferences on the robot’s behavior. My research explores how we can incorporate non-expert feedback in ways that ensure that the robot will do what we tell it to do, e.g., through formal synthesis.

In this talk, I will describe how we can correct robot behaviors using non-expert feedback that describes either 1) how the robot should achieve its task (preferences and decision-making) or 2) what the robot should do (task goals and constraints). We show the promise of non-expert feedback for synthesizing correction mechanisms that shield robots from executing high-level actions that lead to failure states. Furthermore, we demonstrate how we can encode driving styles into motion planning for autonomous vehicles using temporal logics, and present a framework that allows non-expert users to quickly define new task and safety specifications for robotic manipulation using spatio-temporal logics, e.g., for table-setting tasks.
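
As a generic illustration of the shielding idea mentioned above (not the speaker's system), the sketch below filters out high-level actions whose predicted successor state violates a safety specification. The state and action model and the specification check are assumptions for the example; in the actual work the specification would come from formal synthesis over non-expert feedback.

def safe_actions(state, actions, next_state, violates_spec):
    """Return only the actions whose predicted successor state satisfies the
    safety specification: a simple 'shield' over high-level actions."""
    return [a for a in actions if not violates_spec(next_state(state, a))]

# Toy example: a robot on a line must never reach cell 0 (a failure state).
actions = [-1, +1]
next_state = lambda s, a: s + a
violates_spec = lambda s: s <= 0     # stand-in for a temporal-logic/spec check
print(safe_actions(1, actions, next_state, violates_spec))   # -> [1]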

Bio: Sanne van Waveren is a final-year Ph.D. candidate at KTH Royal Institute of Technology in Stockholm. In her Ph.D., she explores how non-experts can correct robots and how people's preferences can be encoded into the robot's behavior while ensuring safety, e.g., through formal synthesis. Her research combines concepts and techniques from human-robot interaction, formal methods, and learning to develop robots that can automatically correct their behavior using human feedback.