Embodied cognition, 3 hp

Given by: Hedvig Kjellström

The way humans learn is very much affected by the fact that we have an embodiment - a physical location in the world, and the ability to change the world (both through physical interaction and through spoken and written communication with others).

Ideas about the effect of human embodiment can be used to improve the functionality and learning strategies of artificial embodied systems, such as autonomous cars, humanoid robots, exoskeletons, search and rescue robots, etc.

Embodiment affects our learning in three related ways, which correspond to our three research avenues Robotics, Perception and Learning:

  • Robotics: The embodied perspective is prevalent in the behavior-based robotics approach, where robots perform only computations relevant to the task at hand and reason as little as possible. Moreover, there are many embodied approaches to human-robot interaction that make use of findings in cognitive neuroscience.
  • Perception: We are able to alter the state of the scene we are observing so as to learn aspects of it that are not apparent from a first look. For example, we can move our head to look from a different angle, or squeeze, push or shake an object to investigate it.
  • Learning: Humans have a very limited communication bandwidth compared to the internal computation capacity of the brain. This means that we cannot easily perform reasoning together with other humans in the way a computer cluster can share computations. It also means that communication between humans is heavily under-determined and error-prone. This limited bandwidth also means that we are forced to learn from quite few examples, and are extremely good at transfer learning and abstraction of knowledge. For example, it has been shown that a child can learn to recognize a previously unseen animal, e.g., an elephant, from a single simple drawing. This indicates that humans make efficient use of structural knowledge, as well as of abstraction and generalization abilities, in their learning (see the sketch after this list).
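
As a loose illustration of this one-shot setting, the following minimal sketch classifies a new input by comparing it, in a shared embedding space, to a single stored example per class. Everything here is a stand-in: the "pretrained" embedding is faked with a fixed random projection, and the inputs are random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 16))          # stand-in for a pretrained embedding

def embed(x):
    """Map a raw input to a shared embedding space (hypothetical features)."""
    return np.tanh(x @ W)

# One labelled example per class, e.g. a single drawing of each animal.
prototypes = {c: embed(rng.normal(size=64)) for c in ("elephant", "giraffe")}

def classify(x):
    """Nearest-prototype rule: pick the class whose single example is closest."""
    z = embed(x)
    return min(prototypes, key=lambda c: np.linalg.norm(z - prototypes[c]))

print(classify(rng.normal(size=64)))   # -> "elephant" or "giraffe"
```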

This has implications for how to design artificial embodied systems, especially systems that are to collaborate with, learn from, and solve problems together with humans. During the course we will explore different aspects of embodied cognition, and implement them in artificial robotic, perceiving, and learning systems.

Schedule

The course consists of a series of lectures during April, May and June, and a short project during May and June. Lecture 1 is given by the course leader and introduces the subject. The following six lectures are each given by one of the students. Each student is responsible for one 60-minute lecture (including definition of the lecture scope, preparation and presentation of the lecture, and selection of the reading assignment).

See the project page for more info on the project, which is to be presented in June.

Lecture 1: Introduction (April 28, 13:15-14:45 ca)

What is embodiment, and what implications does it have on cognition? (Hedvig Kjellström)

Reading:

  • N. D. Lawrence. Living Together: Mind and Machine Intelligence. arXiv preprint arXiv:1705.07996v1, 2017.

  • M. V. Butz. Toward a unified sub-symbolic computational theory of cognition. Frontiers in Psychology 7:925, 2016.

Lecture 2: Cognitive and neuroscientific theories (May 5, 15:15-16:45 ca)

In this lecture, human cognitive processes are introduced. We talk about cognitive and neuroscientific discoveries that help us understand how humans explain, understand, and model the world. The second half of the lecture introduces some work on the implementation and integration of these phenomena and processes in (cognitive developmental) robotics (e.g., basic 'theory of mind' in a causal reasoning agent). (Sanne van Waveren)

Reading:

  • Krämer, N. C., von der Pütten, A., & Eimler, S. (2012). Human-agent and human-robot interaction theory: similarities to and differences from human-human interaction. In Human-computer interaction: The agency perspective (pp. 215-240). Springer, Berlin, Heidelberg.

  • Sandini, G., Metta, G., & Vernon, D. (2007). The iCub cognitive humanoid robot: An open-system research platform for enactive cognition. In 50 years of artificial intelligence (pp. 358-369). Springer, Berlin, Heidelberg.

  • Goodman, N. D., Baker, C. L., & Tenenbaum, J. B. (2009). Cause and intent: Social reasoning in causal learning. In Proceedings of the 31st annual conference of the cognitive science society (pp. 2759-2764). Austin, TX: Cognitive Science Society.

Lecture 3: Robot-human interaction (May 12, 13:15-14:45 ca)

This lecture focuses on the role of embodiment in human-robot interaction, and on how embodiment can help increase the competence of agents. Further, different methods for generating human-like/adapted behaviors are introduced, and their impact on human-robot interaction is discussed. (Fethiye Irmak Dogan)

Reading:

  • Wainer, J., Feil-Seifer, D.J., Shell, D.A. and Mataric, M.J. (2006). The role of physical embodiment in human-robot interaction. ROMAN.

  • Kozima, H. and Zlatev, J. (2000). An epigenetic approach to human-robot communication. ROMAN.

  • Huang, C.M. and Mutlu, B. (2014). Learning-based modeling of multimodal behaviors for humanlike robots. International Conference on Human-Robot Interaction (HRI).

Lecture 4: Active perception 1 (May 19, 10:00-11:30 ca)

This lecture gives an overview of the main concepts of Active and Interactive Perception. The first half of the lecture introduces the early definition of Active Perception as an intelligent data acquisition process, and outlines its basic elements. The second half introduces the idea of Interactive Perception, in which agents actively interact with their environment, e.g., by manipulating objects, in order to reach their goals. (Marcus Klasson)
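
To make the "intelligent data acquisition" view concrete, here is a minimal sketch, with entirely made-up numbers, of an agent that maintains a belief over a discrete hidden state and picks the sensing action (e.g., a viewpoint) whose observation is expected to reduce its uncertainty the most. The two actions and their observation likelihoods are purely illustrative.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

belief = np.array([0.5, 0.3, 0.2])            # current P(state)

# likelihoods[a][o, s] = P(observation o | state s, sensing action a)
likelihoods = {
    "view_front": np.array([[0.9, 0.5, 0.5],
                            [0.1, 0.5, 0.5]]),
    "view_side":  np.array([[0.6, 0.9, 0.1],
                            [0.4, 0.1, 0.9]]),
}

def expected_posterior_entropy(a):
    """Average uncertainty that remains after observing the outcome of a."""
    L = likelihoods[a]
    p_obs = L @ belief                         # P(o | a)
    h = 0.0
    for o, po in enumerate(p_obs):
        post = L[o] * belief / po              # Bayes update for outcome o
        h += po * entropy(post)
    return h

# Choose the sensing action with the largest expected information gain.
best = min(likelihoods, key=expected_posterior_entropy)
print(best, entropy(belief) - expected_posterior_entropy(best))
```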

Reading:

  • Bajcsy, Ruzena. "Active perception." Proceedings of the IEEE 76.8 (1988): 966-1005.

  • Bajcsy, Ruzena, Yiannis Aloimonos, and John K. Tsotsos. "Revisiting active perception." Autonomous Robots 42.2 (2018): 177-196.

  • Bohg, Jeannette, et al. "Interactive perception: Leveraging action in perception and perception in action." IEEE Transactions on Robotics 33.6 (2017): 1273-1291.

Lecture 5: Active perception 2 (May 26, 10:00-11:30 ca)

In this lecture I will focus mainly on the perception-action loop. The first part of the lecture will discuss the coupling of perception and action in cognitive agents, and how this affects external behaviour. The second part of the lecture will discuss information-theoretic analyses of the perception-action loop, in particular with respect to multi-agent scenarios and self-organization. (Jesper Karlsson)
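
As a tiny, purely illustrative example of the quantities this literature works with, the sketch below computes the mutual information I(A; S') between an agent's action and its next sensor state for a made-up two-action, two-observation loop; quantities of this kind underlie, e.g., empowerment-style measures of the perception-action channel.

```python
import numpy as np

# Toy perception-action loop with hypothetical numbers.
p_a = np.array([0.5, 0.5])                    # action distribution P(a)
# p_s_given_a[a, s'] = P(next sensor state s' | action a): the "channel"
p_s_given_a = np.array([[0.8, 0.2],
                        [0.3, 0.7]])

p_joint = p_a[:, None] * p_s_given_a          # joint P(a, s')
p_s = p_joint.sum(axis=0)                     # marginal P(s')

# I(A; S') = sum_{a,s'} P(a,s') log( P(s'|a) / P(s') )
mi = (p_joint * np.log(p_s_given_a / p_s)).sum()
print(f"I(A; S') = {mi:.3f} nats")
```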

Reading:

  • Riener, C. and Stefanucci, J. (2014). Perception and/for/with/as action. In The Routledge Handbook of Embodied Cognition, Chapter 10. Routledge.

  • Vernon, David, et al. "Embodied cognition and circular causality: on the role of constitutive autonomy in the reciprocal coupling of perception and action." Frontiers in psychology 6 (2015): 1660.

  • Capdepuy, Philippe, Daniel Polani, and Chrystopher L. Nehaniv. "Perception–action loops of multiple agents: Informational aspects and the impact of coordination." Theory in Biosciences 131.3 (2012): 149-159.

  • Ay, Nihat, Ralf Der, and Mikhail Prokopenko. "Guided self-organization: perception–action loops of embodied systems." Theory in Biosciences 131.3 (2012): 125-127.

Lecture 6: Learning iteratively and from sparse data 1 (June 2, 10:00-11:30 ca)

This lecture will discuss the topic "Learning iteratively and from sparse data" from two different viewpoints. First, we will discuss the cognitive science viewpoint by shedding light on the key ingredients of human intelligence that could be central to human-like machine intelligence. We will review Bayesian program learning and structured representation learning as computational approaches that aim at verifying the need for these key ingredients. The second part of the lecture will discuss approaches from the reinforcement learning field: batch/safe RL and model-based RL, chosen as representative approaches, will cover the second viewpoint on this topic. (Sarah Gillet)
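
As a minimal, entirely toy illustration of the model-based idea (data efficiency through learning a model), the sketch below estimates a tabular transition model from a handful of logged transitions and then plans on that learned model with value iteration, rather than learning values by further trial and error. All states, actions, and rewards are made up.

```python
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
counts = np.ones((n_actions, n_states, n_states))   # smoothed transition counts

# A small batch of (s, a, s') transitions, e.g. logged offline.
batch = [(0, 1, 1), (1, 1, 2), (2, 1, 3), (3, 0, 2), (2, 0, 1)]
for s, a, s2 in batch:
    counts[a, s, s2] += 1
model = counts / counts.sum(axis=2, keepdims=True)  # estimated P(s' | s, a)

reward = np.array([0.0, 0.0, 0.0, 1.0])             # goal at the last state

# Value iteration on the *learned* model, i.e. planning instead of trial and error.
V = np.zeros(n_states)
for _ in range(100):
    V = reward + gamma * (model @ V).max(axis=0)
policy = (model @ V).argmax(axis=0)
print("V =", V.round(2), "policy =", policy)
```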

Reading:

  • B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman, “Building machines that learn and think like people,” Behav. Brain Sci., vol. 40, no. 2012, pp. 1–58, 2017.

  • B. M. Lake, N. D. Lawrence, and J. B. Tenenbaum, “The Emergence of Organizing Structure in Conceptual Representation,” Cogn. Sci., vol. 42, pp. 809–832, 2018.

  • B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum, “The Omniglot challenge: a 3-year progress report,” Curr. Opin. Behav. Sci., vol. 29, pp. 97–104, Oct. 2019.

  • B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum, “Human-level concept learning through probabilistic program induction,” Science, vol. 350, no. 6266, pp. 1332–1338, Dec. 2015. 

Lecture 7: Learning iteratively and from sparse data 2 (June 9, 10:00-11:30 ca)

This lecture covers two main themes: 1) data-efficient active learning with Bayesian optimization and 2) the pros and cons of incorporating prior knowledge vs learning from scratch. First, the lecture explains the Bayesian optimization (BO) algorithm. A short exercise is given to help you get familiar with the algorithm. Then, examples of using BO to optimize robot controllers are illustrated. The second part of the lecture considers the debate of using prior knowledge vs learning from scratch. It takes a perspective that is broader than the question of only incorporating embodiment-related knowledge. Nonetheless, this debate is still closely related to embodiment themes. (Rika Antonova)
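
For orientation before the readings, here is a minimal sketch of the BO loop on a 1-D toy objective. All choices, the RBF-kernel Gaussian process, the upper-confidence-bound acquisition, and the objective itself, are illustrative stand-ins, not the lecture's exercise.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x        # "unknown" toy objective

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between two sets of 1-D points."""
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)

X = np.array([-0.9, 1.1])                            # initial evaluations
y = f(X)
grid = np.linspace(-2, 2, 200)                       # candidate points

for _ in range(8):
    # GP posterior mean/variance on the grid (noise-free obs, small jitter).
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    k = rbf(grid, X)
    Kinv = np.linalg.inv(K)
    mu = k @ Kinv @ y
    var = 1.0 - np.einsum('ij,jk,ik->i', k, Kinv, k)
    # Acquisition: upper confidence bound; evaluate where it is maximal.
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0))
    x_next = grid[ucb.argmax()]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

print(f"best x = {X[y.argmax()]:.3f}, f(x) = {y.max():.3f}")
```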

Reading part 1: Introduction to Bayesian Optimization

  • Taking the Human Out of the Loop: A Review of Bayesian Optimization. Shahriari B, Swersky K, Wang Z, Adams RP, De Freitas N. Proceedings of the IEEE. 2015 Dec 10; 104(1):148-75. 
  • The above is the standard option with formal mathematical notation. Here is an accessible alternative intro to Bayesian Optimization - a video lecture from Gaussian Process Summer School 2019: https://www.youtube.com/watch?v=EnXxO3BAgYk (GPSS19 website: http://gpss.cc/gpss19/program).

Reading part 2: Limits and potential of unsupervised representation learning

  • The Limits and Potentials of Deep Learning for Robotics. Sünderhauf N, Brock O, Scheirer W, Hadsell R, Fox D, Leitner J, Upcroft B, Abbeel P, Burgard W, Milford M, Corke P. International Journal of Robotics Research. 2018 Apr; 37(4-5):405-20. 
  • The above summarizes robotics-specific challenges, but I would like to also reflect on the broader implications. So during the lecture I will also refer to the main point of this debate: Artificial Intelligence Debate - Yann LeCun vs. Gary Marcus - Does AI Need More Innate Machinery? https://www.youtube.com/watch?v=aCCotxqxFsk [ Please watch the intro part; watching the whole video is not compulsory, but if you find the intro intriguing, you will likely be compelled to watch the whole debate. I think it is quite interesting and would be worth your time :-] The arguments for adding more domain knowledge are usually very easy to understand. But if you feel that the other side of the argument is less clear - please feel free to watch this part of an interview with David Silver: https://www.youtube.com/watch?v=uPUEq8d73JI&t=4533.

Seminar: Project presentations (June 22, 10:00-13:00 ca)

See the page Project for more info.
