Predicting search performance in heterogeneous visual search scenes with real-world objects

Research output: Contribution to journal › Article

Abstract

Previous work in our lab has demonstrated that efficient visual search with a fixed target has a reaction time (RT) by set size function that is best characterized by logarithmic curves. Further, the steepness of these logarithmic curves is determined by the similarity between target and distractor items (Buetti et al., 2016). A theoretical account of these findings was proposed, namely that a parallel, unlimited-capacity, exhaustive processing architecture underlies such data. Here, we conducted two experiments to extend these findings to a set of real-world stimuli, in both homogeneous and heterogeneous search displays. We used computational simulations of this architecture to identify a way to predict RT performance in heterogeneous search using parameters estimated from homogeneous search data. Further, by examining the systematic deviations of the observed data from our predictions, we found evidence that early visual processing of individual items is not independent. Instead, items in homogeneous displays appeared to facilitate each other’s processing by a multiplicative factor. These results challenge previous accounts of heterogeneity effects in visual search, and demonstrate the explanatory and predictive power of an approach that combines computational simulations and behavioral data to better understand performance in visual search.
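The logarithmic set-size functions described above arise naturally from a parallel, unlimited-capacity, exhaustive architecture: if every item is evaluated independently and the display is resolved only when the slowest item finishes, mean completion time grows roughly logarithmically with set size. The sketch below is a minimal illustration of that idea, not the authors' actual simulation; the exponential completion times and the `rate` parameter (standing in for target-distractor similarity, with more dissimilar distractors rejected at a higher rate) are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mean_rt(set_size, rate, n_trials=20000):
    """Exhaustive parallel model (illustrative assumption): each of
    `set_size` items finishes after an independent exponential time
    with the given rejection rate; the display is resolved when the
    slowest item finishes, i.e. at the maximum completion time."""
    times = rng.exponential(1.0 / rate, size=(n_trials, set_size))
    return times.max(axis=1).mean()

# The expected maximum of n iid Exp(rate) variables is H_n / rate
# (H_n = n-th harmonic number), which grows like log(n) -- hence a
# logarithmic RT-by-set-size function whose steepness depends on rate.
for n in (1, 4, 16, 32):
    sim = simulate_mean_rt(n, rate=2.0)
    pred = sum(1.0 / k for k in range(1, n + 1)) / 2.0
    print(f"set size {n:2d}: simulated {sim:.3f}, predicted {pred:.3f}")
```

In this toy model, a higher `rate` (a less target-similar distractor) shallows the logarithmic slope, mirroring the similarity effect reported in the abstract.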

Original language: English (US)
Article number: 6
Journal: Collabra: Psychology
Volume: 3
Issue number: 1
DOIs
State: Published - Jan 1 2017

Keywords

  • Computational modeling
  • Heterogeneity
  • Parallel processing
  • Real-world objects
  • Visual attention
  • Visual search

ASJC Scopus subject areas

  • Psychology (all)
