Listeners use speaker identity to access representations of spatial perspective during online language comprehension

Rachel A. Ryskin, Ranxiao Frances Wang, Sarah Brown-Schmidt

Research output: Contribution to journal › Article


Little is known about how listeners represent another person's spatial perspective during language processing (e.g., two people looking at a map from different angles). Can listeners use contextual cues such as speaker identity to access a representation of the interlocutor's spatial perspective? In two eye-tracking experiments, participants received auditory instructions to move objects around a screen from two randomly alternating spatial perspectives (45° vs. 315° or 135° vs. 225° rotations from the participant's viewpoint). Instructions were spoken either by one voice, where the speaker's perspective switched at random, or by two voices, where each speaker maintained one perspective. Analysis of participant eye-gaze showed that interpretation of the instructions improved when each viewpoint was associated with a different voice. These findings demonstrate that listeners can learn mappings between individual talkers and viewpoints, and use these mappings to guide online language processing.

Original language: English (US)
Pages (from-to): 75-84
Number of pages: 10
State: Published - Feb 1 2016



Keywords

  • Eye-tracking
  • Language comprehension
  • Partner-specific encoding
  • Spatial perspective-taking

ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Language and Linguistics
  • Developmental and Educational Psychology
  • Linguistics and Language
  • Cognitive Neuroscience
