Abstract
Relation extraction is the task of finding semantic relations between entities from text. The state-of-the-art methods for relation extraction are mostly based on statistical learning, and thus all have to deal with feature selection, which can significantly affect the classification performance. In this paper, we systematically explore a large space of features for relation extraction and evaluate the effectiveness of different feature subspaces. We present a general definition of feature spaces based on a graphic representation of relation instances, and explore three different representations of relation instances and features of different complexities within this framework. Our experiments show that using only basic unit features is generally sufficient to achieve state-of-the-art performance, while over-inclusion of complex features may hurt the performance. A combination of features of different levels of complexity and from different sentence representations, coupled with task-oriented feature pruning, gives the best performance.
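To make the feature-space idea in the abstract concrete, the sketch below illustrates one way a relation instance could be represented as a labeled graph over tokens and how "basic unit" features (attributes of single nodes) might be read off it. This is a hypothetical illustration under assumed node attributes (word, POS tag, entity type) and an assumed sequence representation; it is not the paper's actual implementation, and the class and function names are invented for this example.

```python
# Hypothetical sketch: a relation instance as a labeled token graph,
# and extraction of "basic unit" (single-node) features from it.
# Node attributes and the graph layout are illustrative assumptions,
# not the paper's exact formulation.

from dataclasses import dataclass, field

@dataclass
class Node:
    word: str                # surface token
    pos: str                 # part-of-speech tag
    entity_type: str = "O"   # entity type; "O" if not a relation argument

@dataclass
class RelationInstance:
    nodes: list                                  # tokens in sentence order (sequence representation)
    edges: list = field(default_factory=list)    # (head, dependent) pairs for a parse-based representation

def unit_features(instance: RelationInstance) -> set:
    """Basic unit features: attributes of individual nodes only."""
    feats = set()
    for n in instance.nodes:
        feats.add(f"word={n.word}")
        feats.add(f"pos={n.pos}")
        if n.entity_type != "O":
            feats.add(f"etype={n.entity_type}")
    return feats

# Example: "Smith works for Acme" with arguments Smith (PER) and Acme (ORG)
inst = RelationInstance(nodes=[
    Node("Smith", "NNP", "PER"),
    Node("works", "VBZ"),
    Node("for", "IN"),
    Node("Acme", "NNP", "ORG"),
])
print(sorted(unit_features(inst)))
```

More complex features (e.g., bigrams or subgraphs spanning several nodes and edges) would be built from the same graph; the paper's finding is that these simple single-node features already carry most of the signal.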
Original language | English (US) |
---|---|
Pages | 113-120 |
Number of pages | 8 |
State | Published - 2007 |
Event | Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, NAACL HLT 2007 - Rochester, NY, United States |
Duration | Apr 22, 2007 → Apr 27, 2007 |
Other

Other | Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, NAACL HLT 2007 |
---|---|
Country/Territory | United States |
City | Rochester, NY |
Period | 4/22/07 → 4/27/07 |
ASJC Scopus subject areas
- Language and Linguistics
- Linguistics and Language