Human-object interactions are more than the sum of their parts

Christopher Baldassano, Diane M. Beck, Li Fei-Fei

Research output: Contribution to journal › Article › peer-review


Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted two fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2), are well predicted by a simple linear combination of the responses to object and pose information. Other regions, however, especially pSTS, exhibit representations for human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.
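The linearity test described above can be illustrated with a toy sketch: fit each region's interaction response pattern as a least-squares linear combination of its component (object-alone and pose-alone) patterns, and ask how well that combination predicts the observed pattern. The data, weights, and region names below are entirely hypothetical and synthetic, not the study's fMRI data or analysis code; this is only a minimal illustration of the logic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# Hypothetical voxel response patterns (synthetic, for illustration only)
object_resp = rng.normal(size=n_voxels)  # response to the isolated object
pose_resp = rng.normal(size=n_voxels)    # response to the isolated human pose

# A "linear" region: interaction response is a weighted sum plus small noise
linear_region = 0.5 * object_resp + 0.5 * pose_resp \
    + 0.1 * rng.normal(size=n_voxels)

# An "emergent" region: dominated by a component unrelated to either part
emergent = rng.normal(size=n_voxels)
nonlinear_region = 0.3 * object_resp + 0.3 * pose_resp + emergent

def linear_fit_r(interaction, obj, pose):
    """Correlate an observed interaction pattern with its best
    least-squares linear combination of the component patterns."""
    X = np.column_stack([obj, pose, np.ones_like(obj)])
    coef, *_ = np.linalg.lstsq(X, interaction, rcond=None)
    return np.corrcoef(interaction, X @ coef)[0, 1]

r_linear = linear_fit_r(linear_region, object_resp, pose_resp)
r_nonlinear = linear_fit_r(nonlinear_region, object_resp, pose_resp)
print(r_linear > r_nonlinear)
```

In this sketch the "linear" region is predicted almost perfectly by the component combination, while the "emergent" region is not, mirroring the contrast the abstract draws between posterior PPA and pSTS.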

Original language: English (US)
Pages (from-to): 2276-2288
Number of pages: 13
Journal: Cerebral Cortex
Issue number: 3
State: Published - Mar 1, 2017


Keywords

  • Action perception
  • Cross-decoding
  • fMRI
  • MVPA
  • Scene perception

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Cellular and Molecular Neuroscience


