TY - JOUR
T1 - Listeners use speaker identity to access representations of spatial perspective during online language comprehension
AU - Ryskin, Rachel A.
AU - Wang, Ranxiao Frances
AU - Brown-Schmidt, Sarah
N1 - This material is based upon work supported by the National Science Foundation under Grant No. NSF 12-57029 to S. Brown-Schmidt. Thanks to Ariel N. James and Daniel H. Katz for recording the auditory stimuli, and to Phoebe Bauer for help with data collection.
PY - 2016/2/1
Y1 - 2016/2/1
AB - Little is known about how listeners represent another person's spatial perspective during language processing (e.g., two people looking at a map from different angles). Can listeners use contextual cues such as speaker identity to access a representation of the interlocutor's spatial perspective? In two eye-tracking experiments, participants received auditory instructions to move objects around a screen from two randomly alternating spatial perspectives (45° vs. 315° or 135° vs. 225° rotations from the participant's viewpoint). Instructions were spoken either by one voice, where the speaker's perspective switched at random, or by two voices, where each speaker maintained one perspective. Analysis of participant eye-gaze showed that interpretation of the instructions improved when each viewpoint was associated with a different voice. These findings demonstrate that listeners can learn mappings between individual talkers and viewpoints, and use these mappings to guide online language processing.
KW - Eye-tracking
KW - Language comprehension
KW - Partner-specific encoding
KW - Spatial perspective-taking
UR - http://www.scopus.com/inward/record.url?scp=84948784111&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84948784111&partnerID=8YFLogxK
DO - 10.1016/j.cognition.2015.11.011
M3 - Article
C2 - 26638050
AN - SCOPUS:84948784111
SN - 0010-0277
VL - 147
SP - 75
EP - 84
JO - Cognition
JF - Cognition
ER -