Generative models for statistical parsing with combinatory categorial grammar

Abstract
This paper compares a number of generative probability models for a wide-coverage Combinatory Categorial Grammar (CCG) parser. These models are trained and tested on a corpus obtained by translating the Penn Treebank trees into CCG normal-form derivations. According to an evaluation of unlabeled word-word dependencies, our best model achieves a performance of 89.9%, comparable to the figures given by Collins (1999) for a linguistically less expressive grammar. In contrast to Gildea (2001), we find a significant improvement from modeling word-word dependencies.
| Original language | English (US) |
|---|---|
| Pages (from-to) | 335-342 |
| Number of pages | 8 |
| Journal | Proceedings of the Annual Meeting of the Association for Computational Linguistics |
| Volume | 2002-July |
| State | Published - 2002 |
| Externally published | Yes |
| Event | 40th Annual Meeting of the Association for Computational Linguistics, ACL 2002 - Philadelphia, United States |
| Duration | Jul 7 2002 → Jul 12 2002 |
ASJC Scopus subject areas
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics