Abstract
Each view of our environment captures only a subset of our immersive surroundings. Yet, our visual experience feels seamless. A puzzle for human neuroscience is to determine what cognitive mechanisms enable us to overcome our limited field of view and efficiently anticipate new views as we sample our visual surroundings. Here, we tested whether memory-based predictions of upcoming scene views facilitate efficient perceptual judgments across head turns. We addressed this question using immersive, head-mounted virtual reality (VR). After learning a set of immersive real-world environments, participants (n = 101 across 4 experiments) were briefly primed with a single view from a studied environment and then turned left or right to make a perceptual judgment about an adjacent scene view. We found that participants’ perceptual judgments were faster when they were primed with images from the same (vs. neutral or different) environments. Importantly, priming required memory: it only occurred in learned (vs. novel) environments, where the link between adjacent scene views was known. Further, consistent with a role in supporting active vision, priming only occurred in the direction of planned head turns and only benefited judgments for scene views presented in their learned spatiotopic positions. Taken together, we propose that memory-based predictions facilitate rapid perception across large-scale visual actions, such as head and body movements, and may be critical for efficient behavior in complex immersive environments.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 121–130.e6 |
| Journal | Current Biology |
| Volume | 35 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 6 2025 |
| Externally published | Yes |
Keywords
- naturalistic
- panoramic memory
- prediction
- scene memory
- scene perception
- virtual reality
- visual action
ASJC Scopus subject areas
- General Biochemistry, Genetics and Molecular Biology
- General Agricultural and Biological Sciences