
Open Master Theses

Learning Fruit Handling Skills from Human Demonstrations

Manipulation skills are ubiquitous in agricultural activities, e.g. fruit transporting, harvesting and plant pruning. Machine learning methods, such as learning from demonstrations, offer the possibility of intuitively transferring these skills from human operators to robots and adapting them to handle environment variations. The goal of this thesis is to implement a learning and adaptation algorithm to teach skillful trajectories to a mobile robot manipulator in a grape-harvesting scene. The algorithm is based on a probabilistic model, such as the task-parameterized Gaussian Mixture Model [1] or other generative models based on deep neural networks [2], so as to cope with the uncertainty and diversity of human motions. The following steps are expected to be carried out:

1. Use ROS and inverse kinematics (IK) libraries to control a simulated dual-arm robot in our VR environment featuring a grape field. Collect a dataset of motion demonstrations of moving the robot toward goals such as reaching the pruning point and placing the fruit on the logistics robot.

2. Implement one of the above-mentioned algorithms and train the model on the collected data.

3. Validate the trained model in a series of experiments, such as motion reproduction and adaptation to shifted locations of the grape bunches or the logistics robot. Depending on the progress, validation on real hardware in our collaborative lab is also possible.
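As a rough illustration of the probabilistic modeling involved, the sketch below fits a plain (not task-parameterized) Gaussian mixture model to synthetic 1-D demonstrations and reproduces a mean trajectory via Gaussian mixture regression. All data, shapes and parameters are illustrative stand-ins, not part of the project itself; the task-parameterized variant in [1] would additionally transform mixture components into task frames.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical stand-in data: five noisy demonstrations of a 1-D reach
# trajectory, stacked as (time, position) samples.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
demos = [np.sin(np.pi * t) + 0.02 * rng.standard_normal(t.size) for _ in range(5)]
data = np.column_stack([np.tile(t, 5), np.concatenate(demos)])  # (500, 2)

# Fit a joint GMM over (t, x).
gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(data)

def gmr(tq):
    """Gaussian mixture regression: E[x | t = tq] under the fitted GMM."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for the query time (marginal over t).
    h = np.array([w[k] * np.exp(-0.5 * (tq - means[k, 0])**2 / covs[k, 0, 0])
                  / np.sqrt(2.0 * np.pi * covs[k, 0, 0])
                  for k in range(len(w))])
    h /= h.sum()
    # Per-component conditional mean: x_k + Sigma_xt / Sigma_tt * (tq - t_k).
    xc = np.array([means[k, 1] + covs[k, 1, 0] / covs[k, 0, 0] * (tq - means[k, 0])
                   for k in range(len(w))])
    return float(h @ xc)

# Reproduce a mean trajectory by sweeping the time input.
traj = np.array([gmr(tq) for tq in np.linspace(0.0, 1.0, 20)])
```

Adaptation to shifted goals is what the task-parameterized extension adds on top of this: the same components are re-expressed in frames attached to, e.g., the grape bunch and the logistics robot, so the regression output follows when those frames move.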

Required qualifications: proficiency in Robotics, Machine Learning and Data Science. Applicants are expected to have passed KTH courses such as Introduction to Robotics, Machine Learning, and Project Course in Robotics/Data Science. Experience with projects involving manipulator kinematics, control and ROS is a plus.

Contact persons: Hang Yin (KTH) hyin AT kth.se and Alfredo Rechlin (KTH) alfrei AT kth.se

References

[1] Calinon, Robot Learning with Task-Parameterized Generative Models, ISRR 2018

[2] Bütepage et al, Imitating by generating: Deep generative models for imitation of interactive tasks, Frontiers in Robotics 2020


Exploring Learning-based Koopman Control in Robotics

Problems involving robotic systems often exhibit significant nonlinearity that is hard and expensive to handle. Recent advances in Koopman theory and its control applications show the possibility of transforming the original formulation into a linear or convex form, which is expected to be more amenable to analysis and control [1][2]. Machine learning methods provide neural models to build rich Koopman operators [3][4] and to parameterize structured deep reinforcement learning policies [5]. This thesis aims to investigate best practices for learning Koopman representations in the context of robotics tasks, where the effectiveness of standard neural networks remains elusive [6]. We expect to explore a direct comparison between selected and learned basis functions in the context of controlling robotic systems, as well as the effects of various practical considerations, such as the multi-step prediction loss, the size of the Koopman approximation and the horizon of model predictive control. The following steps are planned in this thesis:

1. Collect data from Gym robotics tasks (HalfCheetah, Ant, Humanoid, etc.) and fit Koopman dynamics with both selected basis functions and deep learning methods.

2. Investigate the prediction accuracy and control performance of these two choices, as well as the impact of decoders, the dimension of the Koopman representation and the design of the prediction loss.

3. Depending on the progress, explore the effects of using structured neural networks instead of standard deep neural networks.
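As a minimal illustration of the "selected basis" branch of step 1, the sketch below fits Koopman dynamics with control inputs via EDMD-style least squares on a toy damped pendulum, rather than the actual Gym tasks. The system, the hand-picked basis and all names are illustrative assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.05

def step(s, u):
    """Toy damped pendulum, explicit Euler (illustrative stand-in dynamics)."""
    th, om = s
    return np.array([th + dt * om,
                     om + dt * (-9.8 * np.sin(th) - 0.1 * om + u)])

def lift(s):
    """Hand-selected observables: z = [theta, omega, sin(theta), cos(theta)]."""
    th, om = s
    return np.array([th, om, np.sin(th), np.cos(th)])

# Collect random-input rollouts as (lifted state, input, lifted next state).
X, U, Xn = [], [], []
for _ in range(50):
    s = rng.uniform(-1.0, 1.0, 2)
    for _ in range(40):
        u = rng.uniform(-1.0, 1.0)
        sn = step(s, u)
        X.append(lift(s)); U.append([u]); Xn.append(lift(sn))
        s = sn
X, U, Xn = map(np.array, (X, U, Xn))

# Least-squares fit of linear lifted dynamics z' ~= A z + B u,
# i.e. solve min ||[X U] W - Xn|| and split W into A and B.
G = np.hstack([X, U])                       # (N, 5)
W, *_ = np.linalg.lstsq(G, Xn, rcond=None)  # (5, 4)
A, B = W[:4].T, W[4:].T                     # (4, 4) and (4, 1)

# One-step prediction error in lifted space on a fresh state.
s, u = np.array([0.5, 0.0]), 0.3
pred = A @ lift(s) + B.flatten() * u
err = np.linalg.norm(pred - lift(step(s, u)))
```

The multi-step prediction loss mentioned above would instead roll `A`/`B` forward several steps before computing the error, and the "learned basis" branch replaces `lift` with a trained encoder network as in [3][4].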

Required qualifications: proficiency in Robotics, Machine Learning and Control. Applicants are expected to have passed KTH courses such as Introduction to Robotics, Machine Learning, and Project Course in Automatic Control.

Contact persons: Hang Yin (KTH) hyin AT kth.se and Michael Welle (KTH) mwelle AT kth.se

References

[1] Proctor et al, Generalizing Koopman Theory to Allow for Inputs and Control, SIAM Applied Dynamical Systems 2018

[2] Abraham and Murphey, Active Learning of Dynamics for Data-Driven Control Using Koopman Operators, IEEE Transactions on Robotics 2019

[3] Lusch et al, Deep learning for universal linear embeddings of nonlinear dynamics, Nature Communications 2018

[4] Li et al, Learning Compositional Koopman Operators for Model-Based Control, ICLR 2020

[5] Yin et al, Embedding Koopman Optimal Control in Robot Policy Learning, IROS 2022

[6] Han et al, DeSKO: Stability-Assured Robust Control with a Deep Stochastic Koopman Operator, ICLR 2022

