Alignment in AR-based cooperation

For successful communication, interlocutors have to establish co-orientation to the ongoing task and to features of their environment. In psychology-driven approaches, the ability to coordinate attention with one another – termed "joint attention" in this context – has predominantly been investigated using the example of gaze-following (Scaife & Bruner 1975). Much less is known, however, about the contribution of multimodal resources and about the fine-grained interactional procedures participants use to establish co-orientation in both stable and unstable interactional environments.

In order to investigate the participants' orienting procedures beyond gaze coordination, we propose to study the phenomenon under the conditions of Augmented Reality (AR). Using our AR-based Interception and Manipulation System (ARbInI) with co-presently interacting participants (Dierker et al. 2009; Neumann 2011) (cf. Fig. 1), we are able to remove, manipulate, or add communicatively relevant multimodal information in real time. Taking the system as a tool for linguistic research (Pitsch et al. 2013), we build on the psycholinguistic tradition of experimenting with communicational parameters, e.g. limiting visual access to the co-participant's head, hands, or material resources; displaying different information to each participant; or creating unstable interactional environments.