Generative models for statistical parsing with combinatory categorial grammar

Julia Hockenmaier, Mark Steedman

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper compares a number of generative probability models for a wide-coverage Combinatory Categorial Grammar (CCG) parser. These models are trained and tested on a corpus obtained by translating the Penn Treebank trees into CCG normal-form derivations. According to an evaluation of unlabeled word-word dependencies, our best model achieves a performance of 89.9%, comparable to the figures given by Collins (1999) for a linguistically less expressive grammar. In contrast to Gildea (2001), we find a significant improvement from modeling word-word dependencies.
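The evaluation metric named in the abstract, unlabeled word-word dependency recovery, can be illustrated with a minimal sketch. This is not the paper's code; the representation of a parse as a set of (head, dependent) index pairs and the example sentence are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' implementation): unlabeled word-word
# dependency accuracy, where each parse is treated as a set of
# (head_index, dependent_index) pairs over the words of a sentence.

def unlabeled_dependency_accuracy(gold, predicted):
    """Fraction of gold word-word dependencies recovered by the parser."""
    gold, predicted = set(gold), set(predicted)
    if not gold:
        return 0.0
    return len(gold & predicted) / len(gold)

# Hypothetical example: a 4-word sentence where the parser gets 2 of 3
# dependencies right (it attaches word 3 to word 1 instead of word 2).
gold_deps = {(1, 0), (1, 2), (2, 3)}
pred_deps = {(1, 0), (1, 2), (1, 3)}
print(unlabeled_dependency_accuracy(gold_deps, pred_deps))
```

Scores such as the 89.9% reported in the abstract are this ratio aggregated over all dependencies in the test corpus rather than per sentence.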

Original language: English (US)
Pages (from-to): 335-342
Number of pages: 8
Journal: Proceedings of the Annual Meeting of the Association for Computational Linguistics
Volume: 2002-July
State: Published - 2002
Externally published: Yes
Event: 40th Annual Meeting of the Association for Computational Linguistics, ACL 2002 - Philadelphia, United States
Duration: Jul 7, 2002 - Jul 12, 2002

ASJC Scopus subject areas

  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

