TY - CONF
T1 - Twist Decoding
T2 - 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022
AU - Kasai, Jungo
AU - Sakaguchi, Keisuke
AU - Le Bras, Ronan
AU - Peng, Hao
AU - Lu, Ximing
AU - Radev, Dragomir
AU - Choi, Yejin
AU - Smith, Noah A.
N1 - This work was done while Keisuke Sakaguchi was at the Allen Institute for AI and Hao Peng was at the University of Washington. We thank Hila Gonen, Phillip Keung, the ARK group at the UW, and the Mosaic team at the Allen Institute for AI for their helpful feedback on this work. This work was supported in part by the DARPA MCS program through NIWC Pacific (N66001-19-2-4031) and Google Cloud Compute. Hao Peng was supported by a Google Ph.D. Fellowship.
PY - 2022
Y1 - 2022
N2 - Many language generation models are now available for a wide range of generation tasks, including machine translation and summarization. Combining such diverse models may lead to further progress, but ensembling generation models is challenging during inference: conventional ensembling methods (e.g., shallow fusion) require that the models share vocabulary/tokenization schemes. We introduce TWIST decoding, a simple and general text generation algorithm that benefits from diverse models at inference time. Our method does not assume the vocabulary, tokenization or even generation order is shared. Our extensive evaluations on machine translation and scientific paper summarization demonstrate that TWIST decoding substantially outperforms each model decoded in isolation over various scenarios, including cases where domain-specific and general-purpose models are both available. TWIST decoding also consistently outperforms the popular reranking heuristic where output candidates from one model are rescored by another. We hope that our work will encourage researchers and practitioners to examine generation models collectively, not just independently, and to seek out models with complementary strengths to the currently available models.
AB - Many language generation models are now available for a wide range of generation tasks, including machine translation and summarization. Combining such diverse models may lead to further progress, but ensembling generation models is challenging during inference: conventional ensembling methods (e.g., shallow fusion) require that the models share vocabulary/tokenization schemes. We introduce TWIST decoding, a simple and general text generation algorithm that benefits from diverse models at inference time. Our method does not assume the vocabulary, tokenization or even generation order is shared. Our extensive evaluations on machine translation and scientific paper summarization demonstrate that TWIST decoding substantially outperforms each model decoded in isolation over various scenarios, including cases where domain-specific and general-purpose models are both available. TWIST decoding also consistently outperforms the popular reranking heuristic where output candidates from one model are rescored by another. We hope that our work will encourage researchers and practitioners to examine generation models collectively, not just independently, and to seek out models with complementary strengths to the currently available models.
UR - http://www.scopus.com/inward/record.url?scp=85149442469&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85149442469&partnerID=8YFLogxK
U2 - 10.18653/v1/2022.emnlp-main.326
DO - 10.18653/v1/2022.emnlp-main.326
M3 - Paper
AN - SCOPUS:85149442469
SP - 4909
EP - 4923
Y2 - 7 December 2022 through 11 December 2022
ER -