RoboGround - Robot learning of symbol grounding in multiple contexts through dialog

The aim of this project is to study computational models that allow a robot to ground language in the physical world through human-robot interaction. Within this general aim, the study addresses two challenging specific objectives: (i) learning grounding through active human-robot dialog, and (ii) learning how grounding depends on context.

A fundamental requirement for any intelligent system that is situated in a physical environment – and that should also be able to reason symbolically about this environment or communicate about it using symbolic language – is that it can understand the relationship between these symbols, the objects or phenomena they denote, and their properties.

As humans, we typically learn these relationships through dialog with other humans who share the same symbol system. For a robot or intelligent system to interact with humans in their language, it must learn to adopt their symbol system and understand how it is grounded. Studies of children's language learning have shown that children typically learn language implicitly, by observing other agents communicate in a situated environment, or by taking part in interactions with other agents and gradually adopting their language use. Learning can, however, be made more effective by combining implicit learning with explicit teaching acts, such as pointing at an object and stating its name, or providing a linguistic explanation of an object or concept.

It is also important to stress that it is not only children who do this. Language is not a fixed product: it evolves continuously as humans interact in new environments and need to talk about new objects and phenomena. Gradually, language changes between generations and diverges between cultures. In general, the meaning associated with a term may depend strongly on context, including the linguistic context, the cultural context, or the context of the current task.

We claim that learning the meaning of language in different contexts is essential for any robot or intelligent system meant to collaborate with humans.


Alessandro Saffiotti (Örebro University)

Funding: WASP

Duration: 2019 - 2024

Page responsible: Web editors at EECS
Belongs to: Speech, Music and Hearing
Last changed: Feb 14, 2020