Period: 2016-2020

PI: Danica Kragic Jensfelt, Patric Jensfelt

In envisioned future factory setups, humans and robots will share the same workspace and jointly perform object manipulation tasks. Classical robot task programming requires an experienced programmer and a great deal of tedious work. Programming robots through human demonstration has been promoted as a flexible alternative that reduces the complexity of programming robot tasks and lets end-users control robots in a natural and easy way, without explicit programming. A key enabling technology is therefore a framework that allows the robot to cooperate smoothly with the human towards a common goal, one that may not be explicitly communicated to the robot before the task is initiated.

The traditional approach is to let the robot act as a passive agent in the interaction while the human controls the motion of the object. Unlike in conventional program development, a user programming a robot through demonstration may not be familiar with the syntax and semantics of a programming language. The main objective here is to study novel methods for learning and encoding tasks from multiple demonstrations, using both explicit communication (natural language) and implicit communication (motion).

When two or more humans perform an object manipulation task together, the roles of leader and follower typically alternate between the agents, depending on task geometry, load distribution, limited observability, etc. For human-robot collaboration to become as efficient as human-human collaboration, a robot must be able to perform both the active and the passive part of the interaction, just as a human would. To take the active part and plan and execute trajectories for the object, the robot must have knowledge of the passive agent: its internal state and the constraints the human imposes on the object.
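The role alternation described above can be sketched in code. The following is a minimal, hypothetical illustration (not the project's actual method): the robot switches between leader and follower based on the magnitude of the force the human exerts on the shared object, with hysteresis so the roles do not chatter around a single threshold. The thresholds and the force signal are illustrative assumptions.

```python
# Hypothetical sketch of leader/follower role arbitration for a shared
# object-carrying task. All thresholds are illustrative assumptions.

FOLLOW_THRESHOLD = 8.0  # N: human pushes hard -> robot yields and follows
LEAD_THRESHOLD = 2.0    # N: human is passive  -> robot may take the lead

def arbitrate_role(current_role: str, human_force: float) -> str:
    """Return the robot's next role ('leader' or 'follower').

    Hysteresis: the switch-to-follower threshold is well above the
    switch-to-leader threshold, so small force fluctuations near a
    single boundary do not cause rapid role oscillation.
    """
    if current_role == "leader" and human_force > FOLLOW_THRESHOLD:
        return "follower"
    if current_role == "follower" and human_force < LEAD_THRESHOLD:
        return "leader"
    return current_role
```

For example, a robot currently leading that senses a 10 N human force would yield (`arbitrate_role("leader", 10.0)` returns `"follower"`), while intermediate forces leave the current role unchanged. A real system would of course base this decision on richer cues, such as the task geometry and an estimate of the human's intent.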

Building a system with these capabilities requires research beyond the state of the art in object handling and manipulation, programming by demonstration, natural and embodied interaction, and human-aware navigation.

For more information, have a look at Robot systems for future factories.

Belongs to: Robotics, Perception and Learning
Last changed: Jan 21, 2019