Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem

Michael McCloskey, Neal J Cohen

Research output: Contribution to journal › Article

Abstract

Connectionist networks in which information is stored in weights on connections among simple processing units have attracted considerable interest in cognitive science. Much of the interest centers on two characteristics of these networks. First, the weights on connections between units need not be prewired by the model builder but rather may be established through training in which items to be learned are presented repeatedly to the network and the connection weights are adjusted in small increments according to a learning algorithm. Second, the networks may represent information in a distributed fashion. This chapter discusses catastrophic interference in connectionist networks. Distributed representations established through the application of learning algorithms have several properties that are claimed to be desirable from the standpoint of modeling human cognition. These properties include content-addressable memory and so-called automatic generalization, in which a network trained on a set of items responds correctly to other untrained items within the same domain. New learning may interfere catastrophically with old learning when networks are trained sequentially. The analysis of the causes of interference implies that at least some interference will occur whenever new learning may alter weights involved in representing old learning, and the simulation results demonstrate only that interference is catastrophic in some specific networks.
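
The phenomenon the abstract describes is easy to reproduce in a toy setting. The following is a minimal sketch, not the paper's own simulations: it assumes a small sigmoid network trained by back-propagation on made-up binary associations, trains it first on a set A of items, then sequentially on a disjoint set B, and measures how performance on A degrades. The network size, data, and hyperparameters are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Tiny fully connected network: 8 inputs -> 16 hidden -> 8 outputs.
    # All knowledge lives in the two weight matrices, adjusted in small
    # increments by gradient descent, as the abstract describes.
    W1 = rng.normal(0.0, 0.5, (8, 16))
    W2 = rng.normal(0.0, 0.5, (16, 8))

    def forward(x):
        h = sigmoid(x @ W1)
        return h, sigmoid(h @ W2)

    def train(xs, ts, epochs=2000, lr=0.5):
        global W1, W2
        for _ in range(epochs):
            for x, t in zip(xs, ts):
                h, y = forward(x)
                dy = (y - t) * y * (1.0 - y)      # output-layer error signal
                dh = (dy @ W2.T) * h * (1.0 - h)  # backpropagated to hidden layer
                W2 -= lr * np.outer(h, dy)
                W1 -= lr * np.outer(x, dh)

    def error(xs, ts):
        return float(np.mean([(forward(x)[1] - t) ** 2 for x, t in zip(xs, ts)]))

    # Two disjoint sets of random binary associations stand in for
    # "old" and "new" items (arbitrary made-up data, not the paper's stimuli).
    A_in = rng.integers(0, 2, (5, 8)).astype(float)
    A_out = rng.integers(0, 2, (5, 8)).astype(float)
    B_in = rng.integers(0, 2, (5, 8)).astype(float)
    B_out = rng.integers(0, 2, (5, 8)).astype(float)

    train(A_in, A_out)
    print("error on A after learning A:", error(A_in, A_out))  # typically near zero
    train(B_in, B_out)  # sequential training: B items only, A never re-presented
    print("error on A after learning B:", error(A_in, A_out))  # typically much larger

Because the B items adjust the same shared weights that encode the A items, the error on A typically jumps after the second training phase; this is the interference whose causes the chapter analyzes.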

Original language: English (US)
Pages (from-to): 109-165
Number of pages: 57
Journal: Psychology of Learning and Motivation - Advances in Research and Theory
Volume: 24
Issue number: C
DOIs: 10.1016/S0079-7421(08)60536-8
State: Published - Jan 1 1989
Externally published: Yes

ASJC Scopus subject areas

  • Social Psychology
  • Developmental and Educational Psychology

Cite this

Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem. / McCloskey, Michael; Cohen, Neal J.

In: Psychology of Learning and Motivation - Advances in Research and Theory, Vol. 24, No. C, 01.01.1989, p. 109-165.

@article{ffe0793b43f842d2a50467d736a80c83,
title = "Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem",
abstract = "Connectionist networks in which information is stored in weights on connections among simple processing units have attracted considerable interest in cognitive science. Much of the interest centers around two characteristics of these networks. First, the weights on connections between units need not be prewired by the model builder but rather may be established through training in which items to be learned are presented repeatedly to the network and the connection weights are adjusted in small increments according to a learning algorithm. Second, the networks may represent information in a distributed fashion. This chapter discusses the catastrophic interference in connectionist networks. Distributed representations established through the application of learning algorithms have several properties that are claimed to be desirable from the standpoint of modeling human cognition. These properties include content-addressable memory and so-called automatic generalization in which a network trained on a set of items responds correctly to other untrained items within the same domain. New learning may interfere catastrophically with old learning when networks are trained sequentially. The analysis of the causes of interference implies that at least some interference will occur whenever new learning may alter weights involved in representing old learning, and the simulation results demonstrate only that interference is catastrophic in some specific networks.",
author = "Michael McCloskey and Cohen, {Neal J}",
year = "1989",
month = "1",
day = "1",
doi = "10.1016/S0079-7421(08)60536-8",
language = "English (US)",
volume = "24",
pages = "109--165",
journal = "Psychology of Learning and Motivation - Advances in Research and Theory",
issn = "0079-7421",
publisher = "Academic Press Inc.",
number = "C",

}

TY - JOUR

T1 - Catastrophic Interference in Connectionist Networks

T2 - The Sequential Learning Problem

AU - McCloskey, Michael

AU - Cohen, Neal J

PY - 1989/1/1

Y1 - 1989/1/1

UR - http://www.scopus.com/inward/record.url?scp=77957064197&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77957064197&partnerID=8YFLogxK

U2 - 10.1016/S0079-7421(08)60536-8

DO - 10.1016/S0079-7421(08)60536-8

M3 - Article

AN - SCOPUS:77957064197

VL - 24

SP - 109

EP - 165

JO - Psychology of Learning and Motivation - Advances in Research and Theory

JF - Psychology of Learning and Motivation - Advances in Research and Theory

SN - 0079-7421

IS - C

ER -