Behavioural, neural, and computational considerations suggest that the visual system may use (at least) two approaches to binding an object's features and/or parts into a coherent representation of shape. Dynamically bound representations of part attributes and spatial relations (e.g., bound by synchrony of firing) form a structural description of an object's shape, while units representing shape attributes at specific locations (i.e., a static binding of attributes to locations) form an analogue (image-like) representation of that shape. I will present a computational model of object recognition based on this proposal, along with empirical tests of the model. The model accounts for a large body of findings in human object recognition and makes several novel and counterintuitive predictions. In brief, it predicts that visual priming for attended objects will be invariant with translation, scale, and left-right reflection, whereas priming for unattended objects will be invariant with translation and scale but sensitive to left-right reflection. Five experiments demonstrated the predicted relationships between visual attention and patterns of visual priming as a function of variations in viewpoint. The implications of these findings for theories of visual binding and shape perception will be discussed.