
MuMi - Multi-agent Multimodal Interaction

Little is known about the role that haptic and auditory feedback can play in collaborative situations and in mediated communication between people. Multi-agent multimodal interaction systems that provide advanced interaction possibilities in manufacturing, medical applications, and teaching will fundamentally alter the way people work together in the future.

The project proposed here is multidisciplinary, combining research in haptic cognition, perception, and analysis with sonification, front-line haptic technology, computer science, and computer-supported collaborative work. Based on our previous research, two hypotheses have been formulated that will substantially advance this research field.

First, we argue that it is the superior temporal resolving capacity of the audio and haptic modalities together that significantly improves performance in a multi-user setting. Second, multiple modalities increase cognitive capacity by supporting divided attention. The two situations investigated will be joint 3D assembly of a complex structure and tracing of particles in a turbulent flow.

Comparative experiments will investigate the effects of haptic, auditory, and visual feedback on spatial attention, collaborative performance, and communication. Visual spatial attention will be measured with an eye-tracking system and correlated with haptic spatial attention data.
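As a purely illustrative sketch (not code from the project), such a correlation analysis could be done in Python roughly as follows, assuming both data streams have been resampled to a common clock; the function name and the synthetic data are hypothetical.

    import numpy as np
    from scipy.stats import pearsonr

    def attention_correlation(gaze_xy, haptic_xy):
        """Per-axis Pearson correlation between visual and haptic attention traces.

        Both inputs are (N, 2) arrays of screen-space coordinates sampled at
        the same timestamps (resample beforehand if the eye tracker and the
        haptic device run at different rates).
        """
        r_x, p_x = pearsonr(gaze_xy[:, 0], haptic_xy[:, 0])
        r_y, p_y = pearsonr(gaze_xy[:, 1], haptic_xy[:, 1])
        return (r_x, p_x), (r_y, p_y)

    # Synthetic example: gaze loosely follows the haptic proxy with noise
    # and a small lag, as might happen when a user visually tracks the tool.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 600)                       # 60 Hz for 10 s
    haptic = np.column_stack([np.sin(t), np.cos(t)])  # circular proxy path
    gaze = np.roll(haptic, 6, axis=0) + rng.normal(0, 0.1, haptic.shape)
    print(attention_correlation(gaze, haptic))

In practice the analysis would also need to handle the temporal lag between the modalities (e.g., via cross-correlation) and missing eye-tracking samples, but the core idea is correlating two time-aligned spatial traces.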

Team: Multisensory Interaction
Funding: VR (2013-5113)
Duration: 2014-01-01 - 2016-12-31
