Minimally supervised model of early language acquisition

Michael Connor, Yael Gertner, Cynthia Fisher, Dan Roth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Theories of human language acquisition assume that learning to understand sentences is a partially-supervised task (at best). Instead of using 'gold-standard' feedback, we train a simplified "Baby" Semantic Role Labeling system by combining world knowledge and simple grammatical constraints to form a potentially noisy training signal. This combination of knowledge sources is vital for learning; a training signal derived from a single component leads the learner astray. When this largely unsupervised training approach is applied to a corpus of child directed speech, the BabySRL learns shallow structural cues that allow it to mimic striking behaviors found in experiments with children and begin to correctly identify agents in a sentence.
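
To make the abstract's central idea concrete: the training signal arises from requiring two individually unreliable cues to agree before a label is used. The sketch below is a hypothetical illustration under assumed details; the ANIMATE list, the noisy_agent_label function, and the exact agreement rule are inventions for exposition, not the authors' implementation.

```python
# Hypothetical sketch: intersect two weak cues to form a noisy
# training label for agent identification. All names and data are
# invented for illustration; this is not the paper's actual code.

ANIMATE = {"girl", "boy", "dog", "mommy"}  # toy world-knowledge list


def noisy_agent_label(nouns):
    """Return the index of the likely agent noun, or None.

    Cue 1 (world knowledge): agents tend to be animate.
    Cue 2 (shallow grammar): the first noun tends to be the agent.
    A label is emitted only when both cues pick the same noun,
    so each cue filters the other's noise.
    """
    animate_idxs = [i for i, n in enumerate(nouns) if n in ANIMATE]
    if not animate_idxs:
        return None            # world knowledge offers no candidate
    if animate_idxs[0] == 0:   # grammatical cue points at position 0
        return 0               # cues agree -> usable (noisy) label
    return None                # cues conflict -> skip this example


# Toy usage: each "sentence" is reduced to its noun sequence.
for nouns in [["girl", "ball"], ["ball", "girl"], ["truck", "box"]]:
    print(nouns, "->", noisy_agent_label(nouns))
```

Sentences on which the cues conflict contribute no label at all, which is one way to read the abstract's claim that a training signal derived from a single component would lead the learner astray.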

Original language: English (US)
Title of host publication: CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning
Publisher: Association for Computational Linguistics (ACL)
Pages: 84-92
Number of pages: 9
ISBN (Print): 1932432299, 9781932432299
DOIs
State: Published - 2009
Event: 13th Conference on Computational Natural Language Learning, CoNLL 2009 - Boulder, CO, United States
Duration: Jun 4, 2009 - Jun 5, 2009

Publication series

Name: CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning

Other

Other: 13th Conference on Computational Natural Language Learning, CoNLL 2009
Country/Territory: United States
City: Boulder, CO
Period: 6/4/09 - 6/5/09

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Language and Linguistics
  • Computational Theory and Mathematics
  • Linguistics and Language
