TY - GEN
T1 - Enhanced Knowledge Selection for Grounded Dialogues via Document Semantic Graphs
AU - Li, Sha
AU - Namazifar, Mahdi
AU - Jin, Di
AU - Bansal, Mohit
AU - Ji, Heng
AU - Liu, Yang
AU - Hakkani-Tur, Dilek
N1 - Publisher Copyright:
© 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
N2 - Providing conversation models with background knowledge has been shown to make open-domain dialogues more informative and engaging. Existing models treat knowledge selection as a sentence ranking or classification problem in which each sentence is handled individually, ignoring the internal semantic connections among sentences in the background document. In this work, we propose to automatically convert background knowledge documents into document semantic graphs and then perform knowledge selection over such graphs. Our document semantic graphs preserve sentence-level information through the use of sentence nodes and provide concept connections between sentences. We apply multitask learning to sentence-level and concept-level knowledge selection jointly, and show that it improves sentence-level selection. Our experiments show that our semantic graph-based knowledge selection improves over sentence selection baselines on both the knowledge selection task and the end-to-end response generation task on HollE (Moghe et al., 2018), and improves generalization to unseen topics in WoW (Dinan et al., 2019).
AB - Providing conversation models with background knowledge has been shown to make open-domain dialogues more informative and engaging. Existing models treat knowledge selection as a sentence ranking or classification problem in which each sentence is handled individually, ignoring the internal semantic connections among sentences in the background document. In this work, we propose to automatically convert background knowledge documents into document semantic graphs and then perform knowledge selection over such graphs. Our document semantic graphs preserve sentence-level information through the use of sentence nodes and provide concept connections between sentences. We apply multitask learning to sentence-level and concept-level knowledge selection jointly, and show that it improves sentence-level selection. Our experiments show that our semantic graph-based knowledge selection improves over sentence selection baselines on both the knowledge selection task and the end-to-end response generation task on HollE (Moghe et al., 2018), and improves generalization to unseen topics in WoW (Dinan et al., 2019).
UR - http://www.scopus.com/inward/record.url?scp=85138327856&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85138327856&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85138327856
T3 - NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference
SP - 2810
EP - 2823
BT - NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics
PB - Association for Computational Linguistics (ACL)
T2 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022
Y2 - 10 July 2022 through 15 July 2022
ER -