Action representation and focus of attention in the perception of intentions, events and objects

2008-03 till 2012-10

This project investigates the influence of movement expertise on visual perception during the observation of, and interaction with, objects and ongoing events in the environment. The focus is on the interplay between specific eye-movement parameters (measured with a modern mobile eye-tracker) and the quality of mental representation structures during the selective perception of action-relevant information. In a series of recent experiments, we were able to show expertise-dependent effects of perceptual resonance during the observation of complex motor actions: the strength of motor resonance depends directly on the quality of the observer's own mental skill representation. The expected insights into the cognitive functions of action-based perception are of paradigmatic importance for the design of human-machine interfaces and the development of humanoid robots and intelligent systems.

Methods and Research Questions: 

There is a growing awareness that perceptual-cognitive skills, such as anticipation and decision making, are crucial to high-level performance across a range of domains. The question is how the building blocks of cognitive representation, action-based intelligence, and intelligent interaction are established and then used by the human motor system, and what role perception plays in this process. To better understand such interactions, it is important not only to examine the perception-based induction of actions (i.e., motor resonance phenomena), but also to investigate the action-based perception of events (i.e., perceptual resonance phenomena). Our paradigmatic approach, as well as the (interdisciplinary) methods developed specifically for this project, goes beyond the classical view of the perceptual machinery as simply processing incoming information, and beyond the associated experimental paradigms demonstrating the perception-based induction of actions. In further lines of research, we follow a multimodal approach by developing interfaces between various technical systems in order to investigate how humans manage to select action-relevant information from the steady flow of ongoing events and how the brain makes sense of the signals coming from different sensory modalities. We combine a mobile eye-tracking system with a body-tracking system (VICON) to study natural perceptual and sensorimotor behavior in complex tasks that typically involve coordinated eye, hand, and body movements (e.g., grasping different objects). Furthermore, we combine eye-tracking and EEG to investigate the neurophysiological and perceptual processes involved in perceiving sports scenes or reading comics. Additionally, we develop new software for eye-tracking systems, such as fully automatic annotation software for gaze videos.
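Eye-movement measures such as fixation counts and durations must first be extracted from raw gaze samples. A common approach for this (not necessarily the one implemented in the project's annotation software) is dispersion-threshold identification (I-DT); the following is a minimal sketch in which the sample format, thresholds, and function name are illustrative assumptions:

```python
def detect_fixations(samples, max_dispersion=1.0, min_duration=5):
    """Dispersion-threshold (I-DT) fixation detection.

    samples: list of (x, y) gaze positions, e.g. in degrees of visual angle.
    max_dispersion: dispersion threshold (max_x - min_x) + (max_y - min_y).
    min_duration: minimum fixation length in samples.
    Returns a list of (start_index, end_index, centroid_x, centroid_y).
    """
    fixations = []
    start = 0
    while start + min_duration <= len(samples):
        window = samples[start:start + min_duration]
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # Grow the window while dispersion stays below threshold.
            end = start + min_duration
            while end < len(samples):
                xs.append(samples[end][0])
                ys.append(samples[end][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    xs.pop()
                    ys.pop()
                    break
                end += 1
            fixations.append((start, end, sum(xs) / len(xs), sum(ys) / len(ys)))
            start = end
        else:
            start += 1
    return fixations
```

Fixation durations then follow directly from the start/end indices and the tracker's sampling rate.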
We use mobile and stationary eye-tracking devices to investigate the perceptual processes of experts and novices in various sport-related scenarios, record motor actions (e.g., motor resonance) via pedal, key, or button presses, and apply the split procedure of the structural dimensional analysis of mental representations (SDA-M) to assess mental action representation structures in long-term memory.
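In SDA-M, the split procedure yields pairwise distances between basic action concepts (BACs), which are then grouped hierarchically to reveal the representation structure. The sketch below shows generic average-linkage agglomerative clustering over such a distance matrix; the concept names and matrix values are invented for illustration, and this is not the project's actual analysis code:

```python
def cluster_bacs(names, dist):
    """Average-linkage agglomerative clustering over a symmetric distance
    matrix between basic action concepts (BACs).

    Returns the merge sequence as (concepts_a, concepts_b, distance),
    which corresponds to the levels of a dendrogram.
    """
    clusters = [frozenset([i]) for i in range(len(names))]
    merges = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Average pairwise distance between the two clusters.
                d = sum(dist[i][j] for i in clusters[a] for j in clusters[b]) / (
                    len(clusters[a]) * len(clusters[b]))
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((sorted(names[i] for i in clusters[a]),
                       sorted(names[i] for i in clusters[b]), d))
        clusters = [c for k, c in enumerate(clusters)
                    if k not in (a, b)] + [clusters[a] | clusters[b]]
    return merges
```

Expertise-dependent differences then show up as differently structured trees, e.g. experts' clusters aligning with the functional phases of the movement.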

Over the course of this project, we analyzed how different levels of movement expertise affect the relative influence of top-down and bottom-up mechanisms during the perception of complex actions, while measuring the visual attention patterns of experts and novices. Our recent experiments in soccer and handball revealed expertise-dependent effects of perceptual resonance during the observation of complex motor actions. The analysis of eye movements additionally revealed group-specific differences (experts versus novices) in visual perception strategy, as well as in eye-movement parameters such as the number of fixations and fixation durations. Furthermore, experts apply a spatial gaze strategy, i.e., they focus on relevant image regions, whereas novices also consider task-irrelevant information and social cues in their decision processes (functional gaze strategy). In order to model how different levels of movement expertise affect the influence of top-down on bottom-up mechanisms during the perception of complex actions, we extend a saliency-based visual attention model with an expertise-based component. Knowledge about the cognitive function of action-based perception and the selection of action-relevant information from the steady flow of ongoing events is of paradigmatic importance for the establishment of natural and intuitive human-machine interfaces and intelligent systems.
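One simple way to add an expertise-based component to a bottom-up saliency model is to blend the bottom-up map with a top-down prior, with the blending weight governed by expertise. This is a hedged sketch of that idea, not the project's actual model; map format and function name are assumptions:

```python
def combine_saliency(bottom_up, top_down, expertise):
    """Blend a bottom-up saliency map with a top-down, task/expertise prior.

    bottom_up, top_down: 2D lists of equal shape with non-negative values.
    expertise: weight in [0, 1]; higher values shift attention toward the
    top-down map, mirroring the finding that experts' gaze is driven more
    by task knowledge than by low-level image features.
    Returns the combined map, renormalised to sum to 1.
    """
    rows, cols = len(bottom_up), len(bottom_up[0])
    combined = [[(1 - expertise) * bottom_up[r][c] + expertise * top_down[r][c]
                 for c in range(cols)] for r in range(rows)]
    total = sum(v for row in combined for v in row)
    return [[v / total for v in row] for row in combined]
```

With `expertise = 0` the model reduces to pure bottom-up saliency (the novice end of the scale); increasing the weight concentrates predicted attention on task-relevant regions, as observed in experts' spatial gaze strategy.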