Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem

Michael McCloskey, Neal J. Cohen

Research output: Contribution to journal › Article

Abstract

Connectionist networks in which information is stored in weights on connections among simple processing units have attracted considerable interest in cognitive science. Much of the interest centers on two characteristics of these networks. First, the weights on connections between units need not be prewired by the model builder but rather may be established through training, in which items to be learned are presented repeatedly to the network and the connection weights are adjusted in small increments according to a learning algorithm. Second, the networks may represent information in a distributed fashion. This chapter discusses catastrophic interference in connectionist networks. Distributed representations established through the application of learning algorithms have several properties that are claimed to be desirable from the standpoint of modeling human cognition. These properties include content-addressable memory and so-called automatic generalization, in which a network trained on a set of items responds correctly to other untrained items within the same domain. However, new learning may interfere catastrophically with old learning when networks are trained sequentially. The analysis of the causes of interference implies that at least some interference will occur whenever new learning may alter weights involved in representing old learning, and the simulation results demonstrate only that interference is catastrophic in some specific networks.
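The mechanism the abstract describes can be illustrated with a minimal sketch (not the chapter's own simulations, which used multilayer backpropagation networks): a single linear layer trained with incremental delta-rule updates on one set of input-target associations, then trained sequentially on a second set that shares the same weights. Because the new items' updates alter weights involved in representing the old items, error on the old items climbs after the second phase. All names and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two sets of distributed input->target associations sharing one weight
# matrix. Overlapping (non-orthogonal) input patterns mean new learning
# must alter weights that also carry the old associations.
X_old = rng.normal(size=(5, 20))
Y_old = rng.normal(size=(5, 3))
X_new = rng.normal(size=(5, 20))
Y_new = rng.normal(size=(5, 3))

W = np.zeros((20, 3))  # shared connection weights

def train(X, Y, W, epochs=500, lr=0.01):
    """Incremental delta-rule (LMS) training on one item set."""
    for _ in range(epochs):
        err = X @ W - Y
        W = W - lr * (X.T @ err)  # small weight adjustments each pass
    return W

def mse(X, Y, W):
    return float(np.mean((X @ W - Y) ** 2))

W = train(X_old, Y_old, W)
err_before = mse(X_old, Y_old, W)   # near zero: old items learned

W = train(X_new, Y_new, W)          # sequential training on new items only
err_after = mse(X_old, Y_old, W)    # old-item error rises sharply
```

Running the second training phase without interleaving any old items is what produces the interference; rehearsing old and new items together during the second phase largely prevents it, which is one reason sequential training is singled out in the title.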

Original language: English (US)
Pages (from-to): 109-165
Number of pages: 57
Journal: Psychology of Learning and Motivation - Advances in Research and Theory
Volume: 24
Issue number: C
DOIs
State: Published - Jan 1 1989

ASJC Scopus subject areas

  • Social Psychology
  • Developmental and Educational Psychology

