Minimally supervised model of early language acquisition

Michael Connor, Yael Gertner, Cynthia Fisher, Dan Roth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Theories of human language acquisition assume that learning to understand sentences is, at best, a partially supervised task. Instead of using 'gold-standard' feedback, we train a simplified "Baby" Semantic Role Labeling system by combining world knowledge and simple grammatical constraints to form a potentially noisy training signal. This combination of knowledge sources is vital for learning; a training signal derived from a single component leads the learner astray. When this largely unsupervised training approach is applied to a corpus of child-directed speech, the BabySRL learns shallow structural cues that allow it to mimic striking behaviors found in experiments with children and to begin correctly identifying agents in a sentence.
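The abstract's key idea, combining two weak knowledge sources into one noisy training signal, can be illustrated with a minimal sketch. This is not the authors' code: the animacy lexicon (a stand-in for "world knowledge"), the first-of-two-nouns constraint, the feature set, and the perceptron learner are all illustrative assumptions in the spirit of the BabySRL setup.

```python
# Hypothetical sketch (not the paper's implementation): a perceptron-style
# agent classifier trained on a noisy signal that fires only when two weak
# cues agree -- world knowledge (animacy) and a shallow structural
# constraint (first of the sentence's nouns).
import random

ANIMATE = {"girl", "boy", "dog", "cat"}        # toy "world knowledge" lexicon
INANIMATE = {"ball", "truck", "book", "cup"}

def noisy_label(nouns, i):
    """Guess whether noun i is the agent; positive only when both cues agree."""
    animacy_vote = nouns[i] in ANIMATE          # world-knowledge component
    position_vote = (i == 0)                    # grammatical-constraint component
    return animacy_vote and position_vote

def features(nouns, i):
    # Shallow structural cues of the kind the abstract describes.
    return {f"pos={i}": 1.0,
            f"nnouns={len(nouns)}": 1.0,
            "animate": 1.0 if nouns[i] in ANIMATE else 0.0}

def score(w, nouns, i):
    return sum(w.get(f, 0.0) * v for f, v in features(nouns, i).items())

def train(data, epochs=5):
    w = {}
    for _ in range(epochs):
        for nouns in data:
            for i in range(len(nouns)):
                y = 1 if noisy_label(nouns, i) else -1
                if y * score(w, nouns, i) <= 0:         # perceptron mistake
                    for f, v in features(nouns, i).items():
                        w[f] = w.get(f, 0.0) + y * v    # update toward y
    return w

random.seed(0)
# Toy two-noun "sentences": animate noun first, inanimate noun second.
data = [[random.choice(sorted(ANIMATE)), random.choice(sorted(INANIMATE))]
        for _ in range(200)]
w = train(data)
agent_score = score(w, ["girl", "ball"], 0)
patient_score = score(w, ["girl", "ball"], 1)
```

After training, the learner scores the first, animate noun as agent-like and the second noun as not, even though it never saw gold role labels, only the agreement of two noisy cues.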

Original language: English (US)
Title of host publication: CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning
Pages: 84-92
Number of pages: 9
State: Published - Dec 1 2009
Event: 13th Conference on Computational Natural Language Learning, CoNLL 2009 - Boulder, CO, United States
Duration: Jun 4 2009 - Jun 5 2009

Publication series

Name: CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning

Other

Other: 13th Conference on Computational Natural Language Learning, CoNLL 2009
Country: United States
City: Boulder, CO
Period: 6/4/09 - 6/5/09

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Linguistics and Language

Cite this

Connor, M., Gertner, Y., Fisher, C., & Roth, D. (2009). Minimally supervised model of early language acquisition. In CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning (pp. 84-92). (CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning).