Augmented Reality Dialog Interface for Multimodal Teleoperation

Time: Fri 2017-09-29 15.00 - 16.00

Lecturer: André Pereira, Furhat Robotics

Location: Fantum, Lindstedsvägen 24, 5th floor

We designed an augmented reality interface for dialog that enables the control of multimodal behaviors in telepresence robot applications. This interface, when paired with a telepresence robot, enables a single operator to accurately control and coordinate the robot's verbal and nonverbal behaviors. Depending on the complexity of the desired interaction, however, some applications might benefit from having multiple operators control different interaction modalities. As such, our interface can be used by either a single operator or a pair of operators. In the paired-operator setup, one operator controls verbal behaviors while the other controls nonverbal behaviors. A within-subjects user study was conducted to assess the usefulness and validity of our interface in both single- and paired-operator setups. When faced with hard tasks, coordination between verbal and nonverbal behavior improves in the single-operator condition. Although single operators were slower to produce verbal responses, verbal error rates were unaffected by condition. Finally, single operators who controlled both the verbal and nonverbal behaviors of the robot reported significantly higher presence measures, including mental immersion, sensory engagement, ability to view and understand the dialog partner, and degree of emotion.

Work done at Disney Research, Pittsburgh.
