Minimally supervised model of early language acquisition

Michael Connor, Yael Gertner, Cynthia L. Fisher, Dan Roth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Theories of human language acquisition assume that learning to understand sentences is a partially supervised task (at best). Instead of using 'gold-standard' feedback, we train a simplified "Baby" Semantic Role Labeling system (BabySRL) by combining world knowledge and simple grammatical constraints to form a potentially noisy training signal. This combination of knowledge sources is vital for learning; a training signal derived from either component alone leads the learner astray. When this largely unsupervised training approach is applied to a corpus of child-directed speech, the BabySRL learns shallow structural cues that allow it to mimic striking behaviors found in experiments with children and to begin correctly identifying agents in a sentence.
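
The abstract describes a concrete learning recipe: derive a noisy training signal by combining world knowledge with a simple grammatical constraint, then learn shallow structural cues from child-directed speech. As a rough illustration of that kind of combination, here is a minimal Python sketch in which a small animacy list stands in for world knowledge, a "keep at most one agent, the earliest candidate" rule stands in for the grammatical constraint, and a perceptron learns over noun-order features. All names, features, and toy data below are assumptions for illustration, not the authors' actual BabySRL implementation.

# Hypothetical sketch of noisy-signal training in the spirit of the abstract.
# World knowledge = a toy animacy lexicon; grammatical constraint = at most
# one agent per sentence; learner = a perceptron over shallow noun-order cues.
from collections import defaultdict

ANIMATE = {"i", "you", "mommy", "daddy", "baby", "dog", "kitty"}  # toy lexicon (assumed)

def noisy_agent_labels(nouns):
    """World knowledge: animate nouns look like agents.
    Grammatical constraint: keep at most one agent, the earliest candidate."""
    candidates = [i for i, n in enumerate(nouns) if n in ANIMATE]
    return {candidates[0]} if candidates else set()

def features(i, nouns):
    """Shallow structural cues: the noun's position among the sentence's nouns."""
    return [f"noun_{i+1}_of_{len(nouns)}", f"word={nouns[i]}"]

weights = defaultdict(float)

def score(feats):
    return sum(weights[f] for f in feats)

def train(sentences, epochs=5, lr=0.1):
    """Perceptron-style updates against the noisy (not gold-standard) signal."""
    for _ in range(epochs):
        for nouns in sentences:
            agents = noisy_agent_labels(nouns)
            for i in range(len(nouns)):
                target = 1.0 if i in agents else -1.0
                feats = features(i, nouns)
                if score(feats) * target <= 0:  # misclassified: update weights
                    for f in feats:
                        weights[f] += lr * target

# Toy "child-directed speech": each sentence reduced to its noun sequence.
train([["mommy", "ball"], ["you", "cup"], ["dog", "bone"], ["daddy", "book"]])

# The first-of-two-nouns cue should now carry more weight than the second,
# mirroring the shallow "first noun = agent" cue discussed in the paper.
print(weights["noun_1_of_2"] > weights["noun_2_of_2"])  # expect True

The combination matters in the sketch just as in the paper: the animacy heuristic supplies noisy targets but cannot generalize beyond its lexicon, while the learned noun-order weights transfer to sentences containing unseen nouns.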

Original language: English (US)
Title of host publication: CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning
Pages: 84-92
Number of pages: 9
State: Published - Dec 1, 2009
Event: 13th Conference on Computational Natural Language Learning, CoNLL 2009 - Boulder, CO, United States
Duration: Jun 4, 2009 – Jun 5, 2009

Publication series

Name: CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning

Other

Other: 13th Conference on Computational Natural Language Learning, CoNLL 2009
Country: United States
City: Boulder, CO
Period: 6/4/09 – 6/5/09

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Linguistics and Language

Cite this

Connor, M., Gertner, Y., Fisher, C. L., & Roth, D. (2009). Minimally supervised model of early language acquisition. In CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning (pp. 84-92). (CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning).

@inproceedings{5367f9c906454b4aaafb64d66bca09e9,
  title     = {Minimally supervised model of early language acquisition},
  abstract  = {Theories of human language acquisition assume that learning to understand sentences is a partially supervised task (at best). Instead of using 'gold-standard' feedback, we train a simplified "Baby" Semantic Role Labeling system (BabySRL) by combining world knowledge and simple grammatical constraints to form a potentially noisy training signal. This combination of knowledge sources is vital for learning; a training signal derived from either component alone leads the learner astray. When this largely unsupervised training approach is applied to a corpus of child-directed speech, the BabySRL learns shallow structural cues that allow it to mimic striking behaviors found in experiments with children and to begin correctly identifying agents in a sentence.},
  author    = {Michael Connor and Yael Gertner and Cynthia L. Fisher and Dan Roth},
  year      = {2009},
  month     = dec,
  day       = {1},
  language  = {English (US)},
  isbn      = {1932432299},
  series    = {CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning},
  pages     = {84--92},
  booktitle = {CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning},
}

TY  - GEN
T1  - Minimally supervised model of early language acquisition
AU  - Connor, Michael
AU  - Gertner, Yael
AU  - Fisher, Cynthia L.
AU  - Roth, Dan
PY  - 2009/12/1
Y1  - 2009/12/1
AB  - Theories of human language acquisition assume that learning to understand sentences is a partially supervised task (at best). Instead of using 'gold-standard' feedback, we train a simplified "Baby" Semantic Role Labeling system (BabySRL) by combining world knowledge and simple grammatical constraints to form a potentially noisy training signal. This combination of knowledge sources is vital for learning; a training signal derived from either component alone leads the learner astray. When this largely unsupervised training approach is applied to a corpus of child-directed speech, the BabySRL learns shallow structural cues that allow it to mimic striking behaviors found in experiments with children and to begin correctly identifying agents in a sentence.
UR  - http://www.scopus.com/inward/record.url?scp=84862283547&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=84862283547&partnerID=8YFLogxK
M3  - Conference contribution
SN  - 1932432299
SN  - 9781932432299
T3  - CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning
SP  - 84
EP  - 92
BT  - CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning
ER  -