The end of the road for feature map theories? Attention is guided by relative features

Colloquium
Date: 18 June 2013
Begin time: 18:15
Room: UHG C01-148

Abstract

The visual search literature has been dominated by feature-map theories such as Feature Integration Theory (FIT) and Guided Search, which essentially assume that attention is guided by sensory neurons that respond to specific feature values. A central assumption of these theories is that different feature values, such as red and green, are encoded independently of one another by different neuronal populations. By contrast, Becker (2010) proposed a relational theory of attention and eye movements, which centrally posits that elementary features are encoded relative to the features in the context, and that attention is guided by these relative properties (e.g., larger/redder/darker). A series of visual search experiments supports the relational theory, showing that attention is indeed biased towards the relative attributes of the target in a range of different search tasks. Importantly, feature relationships determine both automatic feature priming effects and contingent capture. First, with regard to feature priming, intertrial switch costs occur only when the target-nontarget relationship reverses (e.g., lighter/darker; smaller/larger), not when the features of the target or the nontargets themselves change. Second, feature relationships also determine capture: irrelevant distractors capture attention when they match the relative colour or shape of the target, whereas they fail to capture when they mismatch the target's relative attributes, even when the distractor's feature is identical to the target's. Taken together, the available evidence invalidates the prevalent feature-map theories and demonstrates that stimuli are encoded in a context-dependent manner across a wide range of stimulus conditions and tasks.