CORALL - COllaborative Robot-Assisted Language Learning
The societal purpose of CORALL is to contribute to more effective education of Swedish for Immigrants by combining pedagogy of collaborative learning with technology for computer-assisted language learning and social robotics.
The scientific aims of the CORALL project are to:
- introduce robot tutors in spoken communication training
- explore collaborative learning with two learners and a robot tutor
- adapt practice to the learners' engagement and understanding of the interaction
The collaborative learning set-up focuses on functional social communication. The practice targets robot-assisted human-human communication, in which the robot tutor can initiate, support and monitor the interaction between the two learners, and the learners can support each other in their learning, on both communicative and linguistic aspects.
The technological part
The Furhat robotic head consists of a 3D-printed mask onto which a computer-animated face is back-projected. This allows for natural facial gestures and, most importantly for L2 learning, appropriate lip movements. Thanks to a motor in the neck, Furhat can rotate its head to face the interlocutor, and tilt it towards the task interaction board on which the collaborative tasks are presented. Speech technology components (automatic speech recognition, text-to-speech synthesis and a spoken dialogue system framework) enable Furhat to talk with the learners, while a Kinect 3D camera and computer vision software allow the robot to see them, their interaction and their reactions to it.
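As a minimal sketch of how such a perception-action loop might be wired, the snippet below decides where the robot should face and what dialogue move it should make, given a fused observation of the two learners. All names (`Percept`, the learner labels, the dialogue acts) and the thresholds are hypothetical illustrations, not the actual Furhat SDK or the project's policy:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Percept:
    """One fused ASR + vision observation of the two learners (hypothetical schema)."""
    speaker: Optional[str]        # "learner_a", "learner_b", or None (silence)
    transcript: str               # current ASR hypothesis
    both_looking_at_board: bool   # vision estimate of joint attention on the task board

def choose_action(p: Percept, silence_turns: int) -> Tuple[str, str]:
    """Return (gaze_target, dialogue_act) for the robot's next move."""
    if p.speaker is not None:
        # Someone is talking: face the speaker and let the learners keep the floor.
        return (p.speaker, "listen")
    if silence_turns >= 2:
        # Prolonged silence: tilt down to the task board and re-seed the task.
        return ("task_board", "prompt_next_task_item")
    # Brief pause: give a backchannel, facing whatever the learners attend to.
    return ("task_board" if p.both_looking_at_board else "learner_a",
            "backchannel")
```

The point of the sketch is the division of labour the section describes: the robot mainly monitors the learners' interaction and only intervenes (via the task board) when the conversation stalls.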
The pedagogical part
The pedagogical aspects of the research project consist of:
- designing relevant, engaging conversation practice.
- defining robot interaction strategies, to determine how the robot should act to support the learners’ interaction.
- modelling and tracking the learners' motivational state, using speech recognition of the learners' verbal and non-verbal acoustic output as well as computer vision analysis of facial expressions, head and body posture, etc., in order to adapt the practice to the learners' feelings about it.
- evaluating the learners' view of the practice.
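A toy sketch of the kind of multimodal engagement tracking and adaptation listed above: acoustic and visual cues are combined into a single engagement estimate, which then selects a tutoring move. The feature names, weights, thresholds and moves are invented for illustration; the project's actual models would be learned from annotated interaction data:

```python
def engagement_score(voice_activity: float, speech_rate: float,
                     smile: float, gaze_on_partner: float) -> float:
    """Combine normalized (0-1) acoustic and visual cues into a 0-1
    engagement estimate. Weights are hand-picked for illustration."""
    score = (0.35 * voice_activity   # how much the learner speaks
             + 0.15 * speech_rate    # fluency of the speech
             + 0.20 * smile          # facial-expression cue from vision
             + 0.30 * gaze_on_partner)  # attention towards the other learner
    return max(0.0, min(1.0, score))

def adapt_practice(score: float) -> str:
    """Map the engagement estimate to a tutoring move (illustrative policy)."""
    if score < 0.3:
        return "switch_topic_and_address_learner_directly"
    if score < 0.6:
        return "ask_open_follow_up_question"
    return "stay_out_of_the_conversation"
```

The design choice mirrored here is that the robot's interventions scale inversely with engagement: highly engaged pairs are left to talk to each other, while a low-engaged learner is addressed directly.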
Publications

Robot interaction styles for conversation practice in second language learning
O Engwall, J Lopes, A Åhlund (2020)
International Journal of Social Robotics 13 (2), 251-276

Interaction and collaboration in robot-assisted language learning for adults
O Engwall, J Lopes (2020)
Computer Assisted Language Learning, 1-37

Learner and teacher perspectives on robot-led L2 conversation practice
O Engwall, J Lopes, R Cumbal, G Berndtson, R Lindström, P Ekman, E Hartmanis, E Jin, E Johnston, G Tahir, M Mekonnen
Cambridge University Press

Identification of low-engaged learners in robot-led second language conversations with adults
O Engwall, R Cumbal, J Lopes, M Ljung, L Månsson (2022)
ACM Transactions on Human-Robot Interaction

Is a wizard-of-Oz required for robot-led conversations in a second language?
O Engwall, J Lopes, R Cumbal (2022)
International Journal of Social Robotics

Robot Gaze Can Mediate Participation Imbalance in Groups with Different Skill Levels
S Gillet, R Cumbal, A Pereira, J Lopes, O Engwall, I Leite (2021)
HRI '21: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 303-311

"You don’t understand me!": Comparing ASR results for L1 and L2 speakers of Swedish
R Cumbal, B Moell, J Lopes, O Engwall (2021)
Proc. Interspeech 2021, 4463-4467

Uncertainty in robot assisted second language conversation practice
R Cumbal, J Lopes, O Engwall (2020)
HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 171-173

Detection of Listener Uncertainty in Robot-Led Second Language Conversation Practice
R Cumbal, J Lopes, O Engwall (2020)
Proceedings of the 2020 International Conference on Multimodal Interaction, 625-629

A First Visit to the Robot Language Café
J Lopes, O Engwall, G Skantze (2017)
Proc. 7th ISCA Workshop on Speech and Language Technology in Education (SLaTE 2017), 7-12