IEEE Final-Year PhD Student Seminars
The purpose of this seminar series is twofold: first, to make new robotics research accessible and visible across industry and academia in Sweden; and second, to do this at a time when young, promising researchers are about to make key career choices, such as deciding whether to go into industry or academia and whether to stay in Sweden or move abroad. So if you want to hire a brilliant young mind, some (but not all) of these people might still be open to suggestions.
Finally, if you want to give a seminar, or know a PhD student who is in their last year, just let us know.
Presenters so far (details below)
- Alberta Longhini (13/12, 15:00)
- David Bergström (6/12, 15:00)
- Parag Khanna (6/12, 15:30)
- Niklas Persson (29/11, 15:00)
- Frank Jiang (29/11, 15:30)
- Maximilian Diehl (22/11, 15:00)
- Daniel Arnström
- Matthias Mayr
- Albin Dahlin
- Sriharsha Bhat
- Sanne van Waveren
Title: Towards General Manipulation of Deformables
Speaker: Alberta Longhini, KTH
Time: 15:00 on Friday the 13th of December, 2024
Link: https://kth-se.zoom.us/j/68109660419
Abstract:
In caregiving, industrial, and household environments, robots are increasingly tasked with manipulating deformable objects, such as folding laundry or assisting with dressing. Achieving proficiency in these applications hinges on advancing deformable object manipulation, which presents distinct challenges due to the many degrees of freedom of textiles and the complex, often occluded configurations that arise from their deformable nature. Despite their prevalence in everyday settings, the manipulation of deformable objects lags significantly behind that of rigid objects. As the demand grows for automated solutions to address workforce shortages and support caregiving, particularly for an aging population, developing adaptive strategies for textile manipulation has become essential to realizing effective automation in human-centered environments.
This talk will cover recent advancements in robotic textile manipulation, with a focus on the perceptual and control challenges associated with adapting to textile variability. I will discuss frameworks developed to generalize across diverse textile properties, enabling robots to dynamically adjust their manipulation strategies based on variations in material properties. By incorporating textile-specific characterization methods and leveraging advanced sensing and modeling, these systems aim to bridge the gap between simulated and real-world textile interactions. Emphasis will be placed on the critical role of closing the action-perception loop, which allows robots to adapt their actions online, enhancing generalization and adaptability in tasks such as laundry folding and textile sorting.
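As a toy illustration of such a closed action-perception loop (not the systems presented in the talk), the sketch below recursively estimates a scalar stiffness-like textile parameter from noisy deformation feedback and adapts the applied force online; all quantities, names, and the control law are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

true_stiffness = 2.5           # hypothetical textile parameter, unknown to the robot
estimate, variance = 1.0, 1.0  # prior belief over the parameter
noise_sd = 0.05                # assumed deformation-sensing noise

for step in range(20):
    # Act: pull with a force chosen from the current stiffness estimate,
    # aiming for a target deformation of 1.0 (hypothetical control law).
    force = 1.0 * estimate

    # Perceive: observe the resulting deformation, corrupted by noise.
    deformation = force / true_stiffness + rng.normal(0.0, noise_sd)

    # Update: scalar Kalman-style update from the implied measurement
    # stiffness ~ force / deformation.
    measured = force / deformation
    gain = variance / (variance + noise_sd**2)
    estimate += gain * (measured - estimate)
    variance *= (1.0 - gain)

print(f"estimated stiffness: {estimate:.2f} (true value {true_stiffness})")
```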
Bio:
Alberta Longhini is a Ph.D. candidate in the Division of Robotics, Perception, and Learning at KTH Royal Institute of Technology, supervised by Professor Danica Kragic. Her research centers on developing adaptive manipulation strategies for deformable objects, particularly textiles, with a focus on integrating perception, modeling, and control techniques to enable robots to generalize their handling skills across a variety of materials. Recently, she has been interested in expanding robots' ability to interpret a wider range of instructions for deformable object manipulation, with the goal of enhancing robot autonomy and facilitating richer communication with humans to handle diverse and variable tasks in real-world settings.
Title: Enhancing Human-Robot Interaction via Adaptive Human-Robot Handovers
Speaker: Parag Khanna
Time: 15:30 on Friday the 6th of December, 2024
Link:
Abstract:
As robots become more capable, their presence in human environments is expected to increase, leading to more physical and social interactions between humans and robots. In these shared spaces, handovers, the act of transferring an object from one agent to another, constitute a significant part of daily human interactions. My research focuses on enhancing human-robot interaction by drawing inspiration from human-to-human handovers.
In this seminar, I will present my research on adaptive robot grip-release, specifically addressing when a robot should release an object as a human recipient begins to take it during a handover. I will introduce a data-driven grip-release strategy developed from the analysis of over 4,000 human handovers, which has been experimentally validated in human-robot interactions. To further refine this strategy for different object weights, I recorded additional handovers involving various weights, resulting in publicly available datasets. I will also discuss how object weight affects human motion during handovers, how these insights can inform adaptive robot motion, and how robots can observe changes in human motion to estimate object weights. In addition, I will present the use of non-touch modalities, such as EEG brain signals and gaze tracking, to discern human intentions during handovers, specifically differentiating between motions intended for handovers and those that are not. Lastly, I will address the use of human-robot handovers to resolve robotic failures by providing explanations for these failures and adapting these explanations based on human behavioral responses.
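For intuition, here is a minimal, hypothetical sketch of a grip-release trigger based on wrist force readings. The presented strategy is data-driven and learned from thousands of handovers; this toy version simply thresholds the receiver's pull force above the pre-handover baseline (all values are made up).

```python
import numpy as np

def should_release(force_history, baseline, threshold=2.0, window=5):
    """Trigger grip release once the pull force exerted by the receiver
    exceeds the pre-handover baseline by `threshold` newtons, averaged
    over the last `window` samples (hypothetical rule and values)."""
    if len(force_history) < window:
        return False
    recent = np.mean(force_history[-window:])
    return recent - baseline > threshold

# Simulated force readings at the robot's wrist during a handover:
# steady hold, then the human starts pulling on the object.
baseline = 4.9          # object weight in newtons while held
readings = [4.9, 5.0, 4.8, 5.1, 6.0, 7.2, 8.1, 8.4, 8.6]

history = []
for t, f in enumerate(readings):
    history.append(f)
    if should_release(history, baseline):
        print(f"release triggered at sample {t}")
        break
```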
Bio:
Parag Khanna is a final-year PhD student in the Division of Robotics, Perception, and Learning (RPL) at the School of Electrical Engineering and Computer Science at the KTH Royal Institute of Technology in Sweden. He is also involved in the Advanced Adaptive Intelligent Systems project with Digital Futures KTH. His research interests encompass human-robot interaction and collaboration, robotic system development, and machine learning.
Before joining KTH in 2021, Parag worked as a research engineer at the CNRS-LS2N lab in Nantes, France. He holds a dual Master’s degree from the École Centrale de Nantes, France, and the University of Genoa, Italy, which he earned through the Erasmus Mundus European Master on Advanced Robotics (EMARO) program in 2019. His work was awarded the Best Poster Paper Award at the Human-Agent Interaction (HAI) 2023 conference, and he was selected to represent KTH at the Erasmus-Unite! Research School 2024. During his undergraduate studies at Visvesvaraya National Institute of Technology (VNIT) in India, he was honored with the VNIT Excellence Award in 2017.
Title: Large-scale Generative Models for Human Motion Generation
Speaker: David Bergström, LiU
Time: 15:00 on Friday the 6th of December, 2024
Link:
Abstract: Data on how we navigate urban environments, i.e., human motion data, is crucial for urban planning, for optimizing public transport, and for building autonomous systems. While GPS trajectories can be a great source of motion data, they are often highly entangled with private and sensitive information, which, rightly so, makes them unavailable to the general public. Generative AI shows promise here by making it possible to create synthetic datasets that retain the usefulness of the original data while being free from all sensitive attributes. In practice, however, these models struggle with more complex datasets. For human motion data, this means the resulting synthetic journeys are overly smoothed and simplified, as well as physically infeasible, crossing water and passing through buildings. In this seminar, we present a scalable multi-step approach to generating representative and privacy-preserving synthetic GPS trajectories. First, we divide the city into several areas and calculate an aggregate statistic for each. Second, for each area, we generate journeys by conditioning the generative model on the corresponding aggregate statistic. Last, the samples from all areas are combined into a single representative dataset. We also highlight the utility of our approach by generating journeys for a previously unseen part of the city using only the aggregate statistics, by generating privacy-preserving journeys through applying k-privacy to the statistics, and by generating data for hypothetical what-if scenarios.
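A minimal sketch of the multi-step idea, with a per-area Gaussian standing in for the trained generative model; the actual model, data, and aggregate statistics in the talk differ, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "real" journeys: (start_xy, displacement_xy) rows in two city areas.
real = {
    "north": rng.normal([2.0, 8.0, 0.5, -1.0], 0.3, size=(200, 4)),
    "south": rng.normal([3.0, 1.0, -0.4, 1.2], 0.3, size=(200, 4)),
}

# Step 1: per-area aggregate statistics (here: mean and std of each field).
stats = {a: (x.mean(axis=0), x.std(axis=0)) for a, x in real.items()}

# Step 2: condition a (placeholder) generative model on each area's
# aggregate statistic; a Gaussian stands in for a trained deep model.
def generate(area_stats, n):
    mean, std = area_stats
    return rng.normal(mean, std, size=(n, 4))

# Step 3: combine the per-area samples into one synthetic dataset.
synthetic = np.concatenate([generate(stats[a], 100) for a in stats])
print(synthetic.shape, synthetic[:, :2].mean(axis=0))  # resembles the area mix
```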
Bio:
David Bergström is a final-year PhD student at the Reasoning and Learning Lab at Linköping University, working under the supervision of Professor Fredrik Heintz and Mattias Tiger, PhD. His research focuses on generative models, with applications in modeling human motion and robotics. His interest in AI began with robot football, competing in RoboCup 2016 and RoboCup 2017. He went on to write his master's thesis on how Bayesian optimization can be used to efficiently select data for machine learning systems. David is particularly interested in robust and interpretable machine learning methods that combine search, Bayesian statistics, and reasoning.
Title: Human-Centric Specification Framework for Connected Vehicles
Speaker: Frank Jiang, KTH
Time: 15:30 on Friday the 29th of November, 2024
Link: (YouTube)
Abstract:
In this talk, we will delve into the formal verification and control set synthesis methodology we have been working on over the last couple of years to support the development and deployment of connected vehicle applications. In particular, we are motivated by the fact that, as automated vehicles continue to gain more use in different industries, we are getting more evidence that remote human operators will continue to have a prominent role in the deployment of automated vehicle fleets. By introducing such operators, we find that we are able to resolve a variety of complex scenarios, but at the expense of introducing new sources of human error into the system. To address this challenge, we leverage a new formal verification tool called temporal logic trees. Specifically, through a process of constructing temporal logic trees using Hamilton-Jacobi reachability analysis, we formulate a framework where operators can describe high-level tasks, verify the feasibility of their designed tasks, and synthesize admissible control sets for satisfying the designed tasks. With this framework, we are able to integrate human operators into the operation of connected vehicles in a manner that provides the operators with a large amount of operating freedom while guaranteeing the vehicle's safety.
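As a much-simplified illustration of reachability-based control-set synthesis (a 1-D stand-in for the Hamilton-Jacobi analysis and temporal logic trees used in the talk; all numbers are made up), the sketch below grows a backward-reachable set for a single integrator and extracts the admissible inputs at a given state.

```python
import numpy as np

# Minimal 1-D backward-reachability sketch: states from which a target
# interval can be reached within N steps under bounded inputs, plus the
# admissible inputs at each state (a toy stand-in for HJ reachability).
dt, umax, N = 0.1, 1.0, 20
xs = np.linspace(-3, 3, 601)                 # state grid
reach = (xs >= 0.8) & (xs <= 1.2)            # target set: [0.8, 1.2]

for _ in range(N):
    # x can reach the current set in one step if some |u| <= umax maps
    # x + u*dt into it; for an interval this just inflates it by umax*dt.
    lo, hi = xs[reach].min(), xs[reach].max()
    reach = (xs >= lo - umax * dt) & (xs <= hi + umax * dt)

def admissible_inputs(x, lo, hi, umax=umax, dt=dt):
    """Interval of inputs keeping x inside the backward-reachable tube."""
    u_lo = max(-umax, (lo - x) / dt)
    u_hi = min(umax, (hi - x) / dt)
    return (u_lo, u_hi) if u_lo <= u_hi else None

lo, hi = xs[reach].min(), xs[reach].max()
print(f"reachable set after {N} steps: [{lo:.2f}, {hi:.2f}]")
print("admissible inputs at x = -0.5:", admissible_inputs(-0.5, lo, hi))
```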
Bio:
Frank J. Jiang is a final-year doctoral student with the School of Electrical Engineering and Computer Science at the KTH Royal Institute of Technology in Sweden. He is affiliated with the Integrated Transport Research Lab (ITRL) and Digital Futures. He received his B.S. degree in Electrical Engineering and Computer Science from the University of California, Berkeley in 2016, and his M.S. degree in Systems, Control and Robotics from the KTH Royal Institute of Technology in 2019. He is the CEO and Co-Founder of FleetMQ, a company spun out of his PhD research work. His research interests are in formal verification, machine learning, and control, and their applications in robotics and intelligent transportation systems. He received the Best Student Paper Award at the 2020 IFAC Conference on Cyber-Physical-Human Systems (CPHS 2020) and his PhD project was included on the 2022 Royal Swedish Academy of Engineering Sciences 100 list of research projects with potential to create value.
Title: Control and Navigation of an Autonomous Bicycle
Speaker: Niklas Persson, Mälardalen University
Time: 15:00 on Friday the 29th of November, 2024
Link: (YouTube)
Abstract:
Autonomous control of mobile robots is a research topic that has received a lot of interest. There are several challenging problems associated with autonomous mobile robots, including low-level control, localization, and navigation. Most research in the past has focused on developing algorithms for three- or four-wheeled mobile robots, such as autonomous cars and differential-drive robots, which are statically stable systems. In this seminar, control of an autonomous bicycle is addressed. The bicycle is a naturally unstable system, and without proper actuation, it will lose balance and fall over. Thus, before developing algorithms for higher-level functionality, such as localization and navigation of an autonomous bicycle, the balance of the bicycle needs to be addressed. This is an interesting research problem: the bicycle is a statically unstable system that has proven difficult to control, but given adequate forward velocity, it is possible to balance a bicycle using only steering actuation.
In this seminar, several controllers for stabilizing an autonomous bicycle are presented, ranging from traditional control methods such as PID and LQR controllers designed on a linear bicycle model to more recently proposed data-driven control algorithms. Data-Enabled Policy Optimization (DeePO) is a direct data-driven adaptive control method that learns the LQR policy from online closed-loop data; the control matrix is updated by computing the gradient based on persistently exciting data. The different control methods are evaluated in realistic simulations and in experiments on an instrumented bicycle.
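For readers unfamiliar with LQR balancing, here is a minimal sketch with toy coefficients (an inverted-pendulum-style linearization of the roll dynamics, not a fitted bicycle model), stabilized by an LQR gain computed with SciPy.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized roll dynamics of a bicycle around the upright equilibrium,
# with steering action as input (toy coefficients, not a fitted model):
#   state x = [roll angle, roll rate], dynamics xdot = A x + B u
g, h, k = 9.81, 1.0, 4.0           # gravity, CoM height, steering gain
A = np.array([[0.0, 1.0],
              [g / h, 0.0]])        # unstable: pole at +sqrt(g/h)
B = np.array([[0.0], [k]])

# LQR: solve the continuous-time algebraic Riccati equation.
Q = np.diag([10.0, 1.0])            # penalize roll angle most
R = np.array([[0.1]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # optimal gain, u = -K x

# Quick check: the closed-loop poles should have negative real parts.
poles = np.linalg.eigvals(A - B @ K)
print("LQR gain:", K.ravel(), "closed-loop poles:", poles)
```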
Bio:
Niklas Persson received an M.Sc. in Robotics from Mälardalen University in 2019. Since 2020, he has been pursuing a PhD degree in electronics at the Intelligent Future Technologies division of Mälardalen University, working on the control and navigation of autonomous bicycles, with a particular focus on data-driven control approaches. In 2023, he received a Licentiate degree at Mälardalen University. His research interests include autonomous robots and vehicles, control theory, and embedded systems.
Title: Explainable and Interpretable Decision-Making for Robots
Speaker: Maximilian Diehl, Chalmers
Time: 15:00 on Friday the 22nd of November, 2024
Link: (YouTube)
Abstract:
Future robots are expected to aid humans in daily chores such as setting the table. Unfortunately, robots that act in human environments are prone to mistakes. When robots rely on black-box decision-making methods, it is challenging for humans to understand why these failures have occurred, which reduces trust and effectiveness in human-robot interactions and limits humans' ability to assist robots in recovering from failures. In this talk, we therefore present several of our interpretable and explainable methods that aim to improve the human's understanding of the robot's decision-making, so that people can better react and assist robots, in particular when the robot commits failures.
To improve explainability, cognitive science emphasizes that effective explanations should be contrastive, selective, and expressed through human-understandable abstractions. Additionally, causal models play a key role in providing actionable explanations. We first present our work on enabling robots to build causal models in three ways: learning from simulations, transferring knowledge from semantically similar tasks, or acquiring causal models from human experts. Using these models, we propose techniques for robots to generate contrastive failure explanations and prevent future errors. In the second part of the talk, we will focus on failures that occur during robot task planning when the robot does not know how to perform the task, in whole or in part. To address this issue, we will present our method that allows robots to learn new tasks from human demonstrations by automatically transforming the demonstrations into symbolic planning operators based on interpretable decision trees, for both single- and multi-agent setups.
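As a small, hypothetical illustration of turning an interpretable decision tree into symbolic preconditions (not the presented method itself; the action, predicates, and data are made up), the sketch below fits a tree to toy action-outcome data and prints the predicate conjunction along every success path.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy demonstrations of a "stack(block, tower)" action: two boolean
# predicates and whether the action succeeded (hypothetical data).
features = ["block_is_clear", "tower_is_stable"]
X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [1, 0]]
y = [1, 0, 0, 0, 1, 0]  # succeeds only if both predicates hold

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

def extract_preconditions(tree, names, node=0, path=()):
    """Walk the fitted tree and print the predicate conjunction along
    every path that ends in a leaf predicting success."""
    t = tree.tree_
    if t.children_left[node] == -1:            # leaf node
        if t.value[node][0].argmax() == 1:     # majority class: success
            print(" AND ".join(path) or "(always)")
        return
    feat = names[t.feature[node]]
    extract_preconditions(tree, names, t.children_left[node],
                          path + (f"NOT {feat}",))  # boolean: left means 0
    extract_preconditions(tree, names, t.children_right[node],
                          path + (feat,))

extract_preconditions(tree, features)  # prints the conjunction of both predicates
```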
Bio:
Maximilian Diehl is a final-year Ph.D. candidate in the Department of Electrical Engineering at Chalmers University of Technology, under the supervision of Associate Professor Karinne Ramirez-Amaro. He previously earned his Bachelor's and Master's degrees in Electrical Engineering from the Technical University of Munich. His research centers on developing explainable and interpretable methods for robotic decision-making, with a focus on handling task failures in an explainable manner using approaches such as causality and automated planning.
Title: Reliable Active-Set Solvers for Real-Time MPC
Speaker: Daniel Arnström, Linköping University
Time: 15:00 on Friday the 10th of March
Link: https://youtu.be/VYuqE9JWK7o
Abstract:
In Model Predictive Control (MPC), control problems are formulated as optimization problems, allowing for constraints on actuators and system states to be directly accounted for. Implicitly defining a control law through an optimization problem does, however, make the evaluation of the control law more complex compared with classical PID and LQ controllers. As a result, determining the worst-case computational time for evaluating the control law becomes non-trivial, yet such worst-case bounds are essential for applying MPC to control safety-critical systems in real time, especially when the controller is implemented on limited hardware.
The optimization problems that need to be solved in linear MPC are often quadratic programs (QPs), and the corresponding optimization method that is used is often an active-set method.
In this talk, we will present a recently developed complexity-certification framework for active-set QP solvers; this framework determines the exact worst-case computational complexity for a family of active-set solvers, which includes the recently developed active-set solver DAQP. In addition to being real-time certifiable, DAQP is efficient, can easily be warm-started, and is numerically stable, all of which are important properties for a solver used in real-time MPC applications.
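To make the MPC-to-QP connection concrete, here is a toy sketch that condenses a double-integrator MPC problem into a box-constrained QP; a simple projected-gradient loop stands in for the active-set iterations a solver like DAQP would perform (the system, weights, and horizon are arbitrary).

```python
import numpy as np

# Double integrator, horizon-N MPC condensed into a box-constrained QP:
#   min_u 0.5 u'Hu + f'u   s.t.  |u_k| <= umax
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N, umax = 10, 1.0
Q, R = np.eye(2), 0.1 * np.eye(1)
x0 = np.array([1.0, 0.0])

# Prediction matrices: x_{k+1} = A^(k+1) x0 + sum_j A^(k-j) B u_j
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        G[2*k:2*k+2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()

Qbar = np.kron(np.eye(N), Q)
H = G.T @ Qbar @ G + np.kron(np.eye(N), R)
f = G.T @ Qbar @ Phi @ x0

# Projected gradient on the box QP -- a simple stand-in for the
# active-set iterations a certified solver would perform.
u = np.zeros(N)
step = 1.0 / np.linalg.eigvalsh(H).max()
for _ in range(500):
    u = np.clip(u - step * (H @ u + f), -umax, umax)

print("first control move:", u[0])
```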
Bio:
Daniel Arnström is a final-year Ph.D. candidate at the Division of Automatic Control at Linköping University. His main research interests are in Model Predictive Control (MPC) and embedded optimization. An overarching objective of his Ph.D. is to ensure that optimization solvers employed in real-time MPC applications can reliably find a solution within a limited time frame.
Title: Skill-based Reinforcement Learning with Behavior Trees
Speaker: Matthias Mayr, Lund University
Time: 13:15 on Wednesday the 14th of December
Link: YouTube
Abstract:
Using skills that can be parameterized for the task at hand can be part of the answer to adapting robotic systems to the challenges of Industry 4.0. There are tools for planning skill sequences for long-term tasks, as well as for incorporating known, explicit knowledge. But skill sequences for contact-rich tasks in particular often contain tacit, implicit knowledge that is difficult to write down explicitly. By combining classical AI techniques such as symbolic planning and reasoning with reinforcement learning, this gap can be addressed. Learning with the robot system and collecting data from the executions can not only make certain tasks possible, but also speed up execution or minimize interaction forces. The presented work makes it possible to learn robot tasks in simulation and directly on the real robot system. It is integrated in a task planning and reasoning pipeline to leverage existing knowledge and learn only the missing aspects. The learning formulation allows multiple objectives of a task to be formulated and learned concurrently. User priors or past experiences can be injected into the learning process, and the implementation with behavior trees allows for interpretable executions. Demonstrated on real robot tasks, the work shows a way for robot systems to efficiently learn behaviors that are robust, efficient, and interpretable.
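As a toy, hypothetical illustration of learning a skill parameter against multiple objectives (not the presented framework; the skill, reward model, and search method are made up), the sketch below tunes the parameter of a single behavior-tree leaf by random search over simulated episodes.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_episode(push_force):
    """Toy execution of a parameterized behavior-tree leaf: success
    probability rises with force, but so do the interaction forces we
    want to keep low (hypothetical model with two objectives)."""
    success = rng.random() < min(1.0, push_force / 8.0)
    return (10.0 if success else 0.0) - 0.5 * push_force

# Random-search policy optimization over the skill parameter, a simple
# stand-in for the policy-search RL used in the presented work.
best_param, best_return = None, -np.inf
for _ in range(200):
    param = rng.uniform(0.0, 10.0)
    ret = np.mean([run_episode(param) for _ in range(20)])
    if ret > best_return:
        best_param, best_return = param, ret

print(f"learned push force: {best_param:.1f} N, mean return {best_return:.1f}")
```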
Bio:
Matthias Mayr studied electrical engineering and information technology at the Karlsruhe Institute of Technology (KIT) in Germany. Early in his studies, he became affiliated with the robotics institute and wrote his bachelor's thesis using a Turtlebot in Halmstad. During his time at Siemens in Berkeley, he learned about knowledge representation and implemented AR and VR applications. In 2018, he started his PhD in the field of industrial robots and skill-based systems at the Robotics and Semantic Systems group at Lund University and in the WASP research program. In his PhD, he focuses on combining AI techniques such as symbolic planning and knowledge representation with reinforcement learning.
Title: Computationally efficient navigation in dynamic environments
Speaker: Albin Dahlin, Chalmers
Time: 15:00 on Monday the 12th of December
Link: YouTube
Abstract:
Navigating autonomous agents to a goal position in a dynamic environment, with both moving obstacles, such as humans and other autonomous systems, and static obstacles, is a common problem in robotics. A popular paradigm in the field of motion planning is potential field (PF) methods, which are computationally lightweight compared to most other existing methods. Several PF variants can provide guarantees for obstacle avoidance combined with motion that converges to a goal position from any initial state. The convergence properties when moving in a world cluttered with obstacles commonly rely on two main assumptions: all obstacles are disjoint and have an appropriate (star-shaped) form. Closely positioned obstacles may, however, have intersecting regions, since obstacles are typically inflated by the robot radius and possibly an extra safety margin. To preserve both collision avoidance and convergence properties in practice, the obstacle representations must therefore be adjusted online to fit the world assumptions.
An alternative approach for online collision avoidance, which has become popular with the increase in computational power, is Model Predictive Control (MPC). Compared to PF methods, MPC allows for easy encoding of the system constraints and of "preferred motion behaviors" through a cost function.
This talk addresses how to properly adjust the workspace representation and looks further into how PF approaches can be combined with MPC to leverage both the lightweight convergence properties of PF and MPC's simple encoding of preferred motion behaviors.
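For intuition, here is a minimal classic potential-field sketch (Khatib-style attractive plus repulsive terms, with toy gains and an already-inflated obstacle); it illustrates the PF paradigm in general, not the methods presented in the talk.

```python
import numpy as np

def pf_step(pos, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=1.0, lr=0.05):
    """One descent step on the classic attractive + repulsive potential
    field; obstacles are (center, radius) pairs, already inflated."""
    grad = k_att * (pos - goal)                  # attractive term
    for c, r in obstacles:
        to_obs = pos - c
        d = np.linalg.norm(to_obs) - r           # distance to the surface
        if 0 < d < rho0:                         # repulsion only acts nearby
            n = np.linalg.norm(to_obs)
            grad += k_rep * (1/rho0 - 1/d) / d**2 * to_obs / n
    step = -lr * grad
    norm = np.linalg.norm(step)
    if norm > 0.1:                               # clip step length for stability
        step *= 0.1 / norm
    return pos + step

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [(np.array([2.5, 2.4]), 0.5)]        # slightly off the direct path
for _ in range(600):
    pos = pf_step(pos, goal, obstacles)
print("final position:", pos.round(2))           # close to the goal
```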
Bio:
Albin Dahlin is a PhD student with the Division of Systems and Control, Department of Electrical Engineering at Chalmers University of Technology, working under the supervision of Associate Professor Yiannis Karayiannidis. His research is mainly focused on online motion planning. His other major research interests include programming by demonstration and multi-agent robotic systems.
Title: Real-Time Simulation and Control of Autonomous Underwater Vehicles for Hydrobatics
Speaker: Sriharsha Bhat, KTH
Time: 14:00, Friday, 2nd of December
Link: YouTube
Abstract:
The term hydrobatics refers to the agile maneuvering of underwater vehicles. Hydrobatic capabilities can enable underwater robots to balance energy efficiency and precision maneuvering. This can open the door to exciting new use cases for autonomous underwater vehicles (AUVs) in inspecting infrastructure, under-ice sensing, adaptive sampling, docking, and manipulation. These ideas are being explored at KTH in Stockholm within the Swedish Maritime Robotics Centre (SMaRC), and Sriharsha will present his ongoing PhD work on hydrobatics in this talk. Modeling the flight dynamics of hydrobatic AUVs at high angles of attack is a key challenge; Simulink and Stonefish are used to perform real-time simulations of hydrobatic maneuvers. Furthermore, these robots are underactuated systems, making it more difficult to obtain elegant control strategies; we can use model predictive control (MPC) and reinforcement learning to generate optimal control actions. The controllers and simulation models developed are tightly linked to SMaRC's AUVs through ROS, enabling field deployment and experimental validation. Currently, the focus is on deploying hydrobatic AUVs in the use cases described above.
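As a toy illustration of real-time AUV simulation (with made-up coefficients and far simpler than the Simulink/Stonefish models in the talk), the sketch below integrates a 1-DOF surge model with forward Euler until drag balances thrust.

```python
import numpy as np

# Minimal 1-DOF surge model of an AUV (toy coefficients, not SMaRC's
# vehicles):  m * vdot = thrust - d1*v - d2*v*|v|
m, d1, d2, dt = 30.0, 5.0, 20.0, 0.01  # mass, linear/quadratic drag, time step

def simulate(thrust, t_end=10.0):
    v, log = 0.0, []
    for _ in range(int(t_end / dt)):
        vdot = (thrust - d1 * v - d2 * v * abs(v)) / m
        v += dt * vdot                  # forward-Euler integration
        log.append(v)
    return np.array(log)

v = simulate(thrust=40.0)
print(f"terminal surge speed: {v[-1]:.2f} m/s")  # drag balances thrust
```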
Bio:
Sriharsha Bhat is a PhD student in Marine Robotics at KTH Royal Institute of Technology since 2018. He obtained his bachelor’s degree in Mechanical Engineering from the National University of Singapore in 2013 and his master’s degree in Vehicle Engineering from KTH in 2016. He has prior work experience as a Research Engineer at the Singapore MIT Alliance for Research and Technology (Singapore) and as a Technology Development Engineer at Continental in Hannover, Germany. His research interests lie in simulation, planning and control of underwater robots in challenging applications including adaptive sampling, infrastructure inspections, seafloor imaging and glacier front mapping.
Title: Leveraging Non-Expert Feedback to Correct Robot Behaviors
Speaker: Sanne van Waveren
Time: November 23rd, at 15:00
Link: YouTube
Abstract: Robots that operate in human environments need the capability to adapt their behavior to new situations and people’s preferences while ensuring the safety of the robot and its environment. Most robots so far rely on pre-programmed behavior or machine learning algorithms trained offline. Due to the large number of possible situations robots might encounter, it becomes impractical to define or learn all behaviors prior to deployment, causing them to inevitably fail at some point in time.
Typically, experts are called in to correct the robot's behavior, and existing correction approaches often do not provide formal guarantees on the system's behavior to ensure safety. However, in many everyday situations, we can leverage feedback from people who do not necessarily have programming or robotics experience, i.e., non-experts, to synthesize correction mechanisms that constrain the robot's behavior to avoid failures and encode people's preferences on the robot's behavior. My research explores how we can incorporate non-expert feedback in ways that ensure the robot will do what we tell it to do, e.g., through formal synthesis.
In this talk, I will describe how we can correct robot behaviors using non-expert feedback that describes 1) how the robot should achieve its task (preferences and decision-making) and 2) what the robot should do (task goals and constraints). We show the promise of non-expert feedback for synthesizing correction mechanisms that shield robots from executing high-level actions that lead to failure states. Furthermore, we demonstrate how we can encode driving styles into motion planning for autonomous vehicles using temporal logics, and present a framework that allows non-expert users to quickly define new task and safety specifications for robotic manipulation using spatio-temporal logics, e.g., for table-setting tasks.
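For intuition, here is a minimal, hypothetical monitor for the kind of specification a non-expert might state, combining an "always avoid" and a bounded "eventually reach" condition over a discrete trajectory; the talk's framework uses full spatio-temporal logics and formal synthesis rather than this toy check.

```python
import numpy as np

def always_avoid(traj, center, radius):
    """G(dist > radius): the whole trajectory stays outside a keep-out disc."""
    return all(np.linalg.norm(p - center) > radius for p in traj)

def eventually_reach(traj, goal, tol, deadline):
    """F_[0,deadline](near goal): some state within the first `deadline`
    steps comes within `tol` of the goal."""
    return any(np.linalg.norm(p - goal) < tol for p in traj[:deadline])

# A toy straight-line trajectory checked against a spec a non-expert
# might state: "reach the cup within 50 steps and never cross the vase".
traj = np.linspace([0.0, 0.0], [1.0, 1.0], 50)
spec_ok = (always_avoid(traj, center=np.array([0.5, 0.8]), radius=0.2)
           and eventually_reach(traj, goal=np.array([1.0, 1.0]),
                                tol=0.05, deadline=50))
print("specification satisfied:", spec_ok)   # True for this trajectory
```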
Bio: Sanne van Waveren is a final-year Ph.D. candidate at KTH Royal Institute of Technology in Stockholm. In her Ph.D., she explores how non-experts can correct robots and how people's preferences can be encoded into the robot's behavior while ensuring safety, e.g., through formal synthesis. Her research combines concepts and techniques from human-robot interaction, formal methods, and learning to develop robots that can automatically correct their behavior using human feedback.