TY - GEN
T1 - EA2E
T2 - Findings of the Association for Computational Linguistics: NAACL 2022
AU - Zeng, Qi
AU - Zhan, Qiusi
AU - Ji, Heng
N1 - Publisher Copyright:
© Findings of the Association for Computational Linguistics: NAACL 2022 - Findings.
PY - 2022
Y1 - 2022
N2 - Events are inter-related in documents. Motivated by the one-sense-per-discourse theory, we hypothesize that a participant tends to play consistent roles across multiple events in the same document. However, recent work on document-level event argument extraction models each event in isolation, causing inconsistency among the arguments extracted across events, which in turn causes discrepancies in downstream applications such as event knowledge base population, question answering, and hypothesis generation. In this work, we formulate event argument consistency as constraints derived from event-event relations in the document-level setting. To improve consistency, we introduce the Event-Aware Argument Extraction (EA2E) model, which uses augmented context for training and inference. Experimental results on the WIKIEVENTS and ACE2005 datasets demonstrate the effectiveness of EA2E compared to baseline methods.
UR - http://www.scopus.com/inward/record.url?scp=85137380022&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137380022&partnerID=8YFLogxK
U2 - 10.18653/v1/2022.findings-naacl.202
DO - 10.18653/v1/2022.findings-naacl.202
M3 - Conference contribution
AN - SCOPUS:85137380022
T3 - Findings of the Association for Computational Linguistics: NAACL 2022 - Findings
SP - 2649
EP - 2655
BT - Findings of the Association for Computational Linguistics: NAACL 2022
PB - Association for Computational Linguistics (ACL)
Y2 - 10 July 2022 through 15 July 2022
ER -