Learning in natural language

Dan Roth

Research output: Contribution to journal › Conference article › peer-review

Abstract

Statistics-based classifiers in natural language are typically developed by assuming a generative model for the data, estimating its parameters from training data, and then using Bayes' rule to obtain a classifier. For many problems, the assumptions made by the generative models are evidently wrong, leaving open the question of why these approaches work. This paper presents a learning-theory account of the major statistical approaches to learning in natural language. A class of Linear Statistical Queries (LSQ) hypotheses is defined, and learning with it is shown to exhibit some robustness properties. Many statistical learners used in natural language, including naive Bayes, Markov models, and maximum entropy models, are shown to be LSQ hypotheses, explaining the robustness of these predictors even when the underlying probabilistic assumptions do not hold. This coherent view of when and why learning approaches work in this context may help in developing better learning methods and an understanding of the role of learning in natural language inferences.
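The LSQ framing can be made concrete with naive Bayes: for binary features, the Bayes-optimal decision under the naive independence assumption reduces to a linear threshold function whose coefficients are built from a handful of simple expectations (statistical queries) over the training data. The sketch below illustrates this reduction; the function names, the toy data generator, and the Laplace smoothing constant are illustrative choices, not taken from the paper.

```python
# A minimal sketch (not from the paper) of the sense in which naive Bayes is
# a Linear Statistical Queries hypothesis: its decision rule is a linear
# function over binary features, with coefficients derived from simple
# expectations (statistical queries) estimated on the training data.
import numpy as np

def fit_nb_linear(X, y, alpha=1.0):
    """Estimate the statistics P(x_i = 1 | class) and the class prior, then
    fold them into a single linear decision rule w.x + b > 0.
    X: (n, d) binary feature matrix; y: (n,) labels in {0, 1}."""
    pos, neg = X[y == 1], X[y == 0]
    prior_pos = (len(pos) + alpha) / (len(X) + 2 * alpha)
    # Smoothed conditional feature probabilities: these expectations are the
    # only statistics of the data the learner ever uses.
    p = (pos.sum(axis=0) + alpha) / (len(pos) + 2 * alpha)  # P(x_i=1 | y=1)
    q = (neg.sum(axis=0) + alpha) / (len(neg) + 2 * alpha)  # P(x_i=1 | y=0)
    # The naive Bayes log-odds is linear in x:
    #   log P(1|x)/P(0|x) = b + sum_i w_i * x_i
    w = np.log(p / q) - np.log((1 - p) / (1 - q))
    b = np.log(prior_pos / (1 - prior_pos)) + np.log((1 - p) / (1 - q)).sum()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data with class-conditionally independent binary features; here the
    # naive Bayes assumption holds, but the linear predictor itself is
    # well-defined whether or not it does.
    d, n = 20, 2000
    y = rng.integers(0, 2, n)
    feat_probs = np.where(y[:, None] == 1, 0.7, 0.3)
    X = (rng.random((n, d)) < feat_probs).astype(int)
    w, b = fit_nb_linear(X, y)
    print("train accuracy:", (predict(w, b, X) == y).mean())
```

The point of the reduction is that the learner interacts with the data only through these expectations rather than through individual examples, which is the property the paper uses to explain why such predictors remain robust when the underlying independence assumptions fail.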

Original language: English (US)
Pages (from-to): 898-904
Number of pages: 7
Journal: IJCAI International Joint Conference on Artificial Intelligence
Volume: 2
State: Published - 1999
Externally published: Yes
Event: 16th International Joint Conference on Artificial Intelligence, IJCAI 1999 - Stockholm, Sweden
Duration: Jul 31, 1999 – Aug 6, 1999

ASJC Scopus subject areas

  • Artificial Intelligence
