Guest Talk: Nikolina Mitev

Public Event
11 July 2018

Towards efficient human-machine collaboration: Effects of gaze-driven feedback and engagement on performance.

Collaborative task solving in humans takes place in a shared environment and requires referential success, which involves different modalities. We investigate how users engage with an artificial instruction-giving system and how they exploit gaze-based feedback. Specifically, we implement a system that generates spoken instructions on the fly, describing objects in the real world to a human listener. The system also monitors and evaluates listener gaze in order to produce verbal feedback based on which objects the listener inspects. We study unambiguous vs. ambiguous instructions supplemented by two levels of feedback specificity: the system's responses were either underspecified ("No, not that one!") or contrastive, i.e., informative relative to the position of the currently inspected object ("Further left!").

Results showed that ambiguous instructions complemented with only underspecified feedback were not beneficial for task performance. In contrast, consistently providing contrastive feedback after ambiguous instructions resulted in faster interactions and even outperformed the unambiguous strategy (manipulation between subjects). However, when the system alternated underspecified and contrastive responses in an interleaved manner (within subjects), task performance and gaze behavior became similar in both conditions. This suggests that listeners engage more intensely with the system when expecting and (sometimes) receiving helpful information, and that this engagement, rather than the actual informativity of the spoken feedback, determines efficient information uptake and performance.
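The two feedback strategies contrasted in the abstract can be illustrated with a minimal sketch. This is not the speaker's implementation; the function name, the one-dimensional position model, and the threshold are assumptions made purely for illustration:

```python
def select_feedback(target_x, inspected_x, contrastive=True, threshold=0.05):
    """Hypothetical sketch of gaze-driven feedback selection (names assumed).

    Compares the horizontal position of the gaze-inspected object with the
    target object and returns either underspecified or contrastive feedback,
    mirroring the two specificity levels described in the abstract.
    """
    if abs(target_x - inspected_x) <= threshold:
        return None  # listener is inspecting the target: no correction needed
    if not contrastive:
        return "No, not that one!"  # underspecified: rejects without guidance
    # Contrastive: directs the listener relative to the inspected object.
    return "Further left!" if target_x < inspected_x else "Further right!"
```

For example, with a target at x = 0.2 and the listener's gaze on an object at x = 0.8, the contrastive mode returns "Further left!", while the underspecified mode returns only "No, not that one!".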