Multimodality and Language Learning

Research output: Chapter in Book/Report/Conference proceeding › Chapter


The term multimodality refers to the combination of multiple sensory and communicative modes, such as sight, sound, print, images, video, music, and so on, that produce meaning in any given message. In a sense, all communication is multimodal in that even in pre‐digital times meaning was produced not solely through writing but through choice of font, illustrations, page design, and so on, and in spoken communications through both linguistic and paralinguistic means. In the digital age, multimodality has become even more central to communication, and this is especially true for language learners, who depend on the multiplicity of channels available on a screen to help them “pick up” meaning in a target language. The question this chapter addresses is, how does this happen? Is it that the different modes function redundantly as their own type of “language,” or is it that the different modes contribute through the coordination of different types of signals governed by different principles of signification? In this chapter I argue for the latter explanation over the former, and in conclusion propose four principles of multimodality and informal language learning based on the work of C. S. Peirce and Paul Grice.
Original language: English (US)
Title of host publication: The Handbook of Informal Language Learning
Editors: Mark Dressman, Randall William Sadler
ISBN (Electronic): 9781119472384
State: Published - Feb 2020


  • multimodality
  • semiotics
  • icon
  • index
  • symbol
  • pragmatism
  • cooperative principle


