TY - GEN
T1 - Style Control for Schema-Guided Natural Language Generation
AU - Tsai, Alicia Y.
AU - Oraby, Shereen
AU - Perera, Vittorio
AU - Kao, Jiun-Yu
AU - Du, Yuheng
AU - Narayan-Chen, Anjali
AU - Chung, Tagyoung
AU - Hakkani-Tur, Dilek
N1 - Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021
Y1 - 2021
AB - Natural Language Generation (NLG) for task-oriented dialogue systems focuses on communicating specific content accurately, fluently, and coherently. While these attributes are crucial for a successful dialogue, it is also desirable to simultaneously accomplish specific stylistic goals, such as response length, point of view, descriptiveness, sentiment, formality, and empathy. In this work, we focus on stylistic control and evaluation for schema-guided NLG, with joint goals of achieving both semantic and stylistic control. We experiment in detail with various controlled generation methods for large pretrained language models: specifically, conditional training, guided fine-tuning, and guided decoding. We discuss their advantages and limitations, and evaluate them with a broad range of automatic and human evaluation metrics. Our results show that while high style accuracy and semantic correctness are easier to achieve for more lexically defined styles with conditional training, stylistic control is also achievable for more semantically complex styles using discriminator-based guided decoding methods. The results also suggest that methods that are more scalable (with less hyper-parameter tuning) and that disentangle content generation from stylistic variations are more effective at achieving semantic correctness and style accuracy.
UR - http://www.scopus.com/inward/record.url?scp=85138001333&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85138001333&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85138001333
T3 - NLP for Conversational AI, NLP4ConvAI 2021 - Proceedings of the 3rd Workshop
SP - 228
EP - 242
BT - NLP for Conversational AI, NLP4ConvAI 2021 - Proceedings of the 3rd Workshop
A2 - Papangelis, Alexandros
A2 - Budzianowski, Pawel
A2 - Liu, Bing
A2 - Nouri, Elnaz
A2 - Rastogi, Abhinav
A2 - Chen, Yun-Nung
PB - Association for Computational Linguistics (ACL)
T2 - 3rd Workshop on Natural Language Processing for Conversational AI, NLP4ConvAI 2021
Y2 - 10 November 2021
ER -