TY - GEN
T1 - Learning nonadjacent dependencies in thought, language, and action
T2 - 35th Annual Meeting of the Cognitive Science Society - Cooperative Minds: Social Interaction and Group Dynamics, CogSci 2013
AU - Willits, Jon A.
N1 - Funding Information:
This work was funded by NIDCD F31-DC00936-02 to Jon Willits. The work received much useful input from Mark Seidenberg, Jenny Saffran, Maryellen MacDonald, Timothy Rogers, and Jessica Montag.
Publisher Copyright:
© CogSci 2013. All rights reserved.
PY - 2013
Y1 - 2013
N2 - Learning to represent hierarchical structure and its nonadjacent dependencies (NDs) is thought to be difficult. I present three simulations of ND learning using a simple recurrent network (SRN). In Simulation 1, I show that the model can learn distance-invariant representations of nonadjacent dependencies. In Simulation 2, I show that purely localist SRNs can learn abstract rule-like relationships. In Simulation 3, I show that SRNs exhibit facilitated learning when there are correlated perceptual and semantic cues to the structure (just as people do). Together, these simulations show that (contrary to previous claims) SRNs are capable of learning abstract and rule-like nonadjacent dependencies, and show critical perceptual- and semantics-syntax interactions during learning. The studies refute the claim that neural networks and other associative models are fundamentally incapable of representing hierarchical structure, and show how recurrent networks can provide insight about principles underlying human learning and the representation of hierarchical structure.
AB - Learning to represent hierarchical structure and its nonadjacent dependencies (NDs) is thought to be difficult. I present three simulations of ND learning using a simple recurrent network (SRN). In Simulation 1, I show that the model can learn distance-invariant representations of nonadjacent dependencies. In Simulation 2, I show that purely localist SRNs can learn abstract rule-like relationships. In Simulation 3, I show that SRNs exhibit facilitated learning when there are correlated perceptual and semantic cues to the structure (just as people do). Together, these simulations show that (contrary to previous claims) SRNs are capable of learning abstract and rule-like nonadjacent dependencies, and show critical perceptual- and semantics-syntax interactions during learning. The studies refute the claim that neural networks and other associative models are fundamentally incapable of representing hierarchical structure, and show how recurrent networks can provide insight about principles underlying human learning and the representation of hierarchical structure.
KW - hierarchical structure
KW - nonadjacent dependencies
KW - recurrent connectionist networks
UR - http://www.scopus.com/inward/record.url?scp=85139464309&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139464309&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85139464309
T3 - Cooperative Minds: Social Interaction and Group Dynamics - Proceedings of the 35th Annual Meeting of the Cognitive Science Society, CogSci 2013
SP - 1605
EP - 1610
BT - Cooperative Minds
A2 - Knauff, Markus
A2 - Sebanz, Natalie
A2 - Pauen, Michael
A2 - Wachsmuth, Ipke
PB - The Cognitive Science Society
Y2 - 31 July 2013 through 3 August 2013
ER -