TY - GEN
T1 - Ask To The Point: Open-Domain Entity-Centric Question Generation
T2 - Findings of the Association for Computational Linguistics: EMNLP 2023
AU - Liu, Yuxiang
AU - Huang, Jie
AU - Chang, Kevin Chen-Chuan
N1 - This material is based upon work supported by the National Science Foundation IIS 16-19302 and IIS 16-33755, Zhejiang University ZJU Research 083650, IBM-Illinois Center for Cognitive Computing Systems Research (C3SR) and IBM-Illinois Discovery Accelerator Institute (IIDAI), grants from eBay and Microsoft Azure, UIUC OVCR CCIL Planning Grant 434S34, UIUC CSBS Small Grant 434C8U, and UIUC New Frontiers Initiative. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the funding agencies.
PY - 2023
Y1 - 2023
N2 - We introduce a new task called entity-centric question generation (ECQG), motivated by real-world applications such as topic-specific learning, assisted reading, and fact-checking. The task aims to generate questions from an entity perspective. To solve ECQG, we propose a coherent PLM-based framework GenCONE with two novel modules: content focusing and question verification. The content focusing module first identifies a focus as “what to ask” to form draft questions, and the question verification module refines the questions afterwards by verifying the answerability. We also construct a large-scale open-domain dataset from SQuAD to support this task. Our extensive experiments demonstrate that GenCONE significantly and consistently outperforms various baselines, and two modules are effective and complementary in generating high-quality questions.
AB - We introduce a new task called entity-centric question generation (ECQG), motivated by real-world applications such as topic-specific learning, assisted reading, and fact-checking. The task aims to generate questions from an entity perspective. To solve ECQG, we propose a coherent PLM-based framework GenCONE with two novel modules: content focusing and question verification. The content focusing module first identifies a focus as “what to ask” to form draft questions, and the question verification module refines the questions afterwards by verifying the answerability. We also construct a large-scale open-domain dataset from SQuAD to support this task. Our extensive experiments demonstrate that GenCONE significantly and consistently outperforms various baselines, and two modules are effective and complementary in generating high-quality questions.
UR - http://www.scopus.com/inward/record.url?scp=85183290305&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85183290305&partnerID=8YFLogxK
U2 - 10.18653/v1/2023.findings-emnlp.178
DO - 10.18653/v1/2023.findings-emnlp.178
M3 - Conference contribution
AN - SCOPUS:85183290305
T3 - Findings of the Association for Computational Linguistics: EMNLP 2023
SP - 2703
EP - 2716
BT - Findings of the Association for Computational Linguistics: EMNLP 2023
PB - Association for Computational Linguistics (ACL)
Y2 - 6 December 2023 through 10 December 2023
ER -