TY - GEN
T1 - A mixture of h - 1 heads is better than h heads
AU - Peng, Hao
AU - Schwartz, Roy
AU - Li, Dianqi
AU - Smith, Noah A.
N1 - Funding Information:
We thank the anonymous reviewers, Yoav Artzi, Mandar Joshi, Jungo Kasai, Lingpeng Kong, Kenton Lee, Kelvin Luu, Will Merrill, Phoebe Mulcaire, Mark Neumann, Nikos Pappas, Ofir Press, Lianhui Qin, Swabha Swayamdipta, Vivek Srikumar, Sam Thomson, and Dani Yogatama for their helpful feedback. This work was supported in part by NSF grant 1562364, a Google Fellowship, and NVIDIA Corporation through the donation of a Tesla GPU.
Publisher Copyright:
© 2020 Association for Computational Linguistics
PY - 2020
Y1 - 2020
N2 - Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks. Evidence has shown that they are overparameterized; attention heads can be pruned without significant performance loss. In this work, we instead “reallocate” them: the model learns to activate different heads on different inputs. Drawing connections between multi-head attention and mixture of experts, we propose the mixture of attentive experts model (MAE). MAE is trained using a block coordinate descent algorithm that alternates between updating (1) the responsibilities of the experts and (2) their parameters. Experiments on machine translation and language modeling show that MAE outperforms strong baselines on both tasks. In particular, on the WMT14 English to German translation dataset, MAE improves over “transformer-base” by 0.8 BLEU, with a comparable number of parameters. Our analysis shows that our model learns to specialize different experts to different inputs.
AB - Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks. Evidence has shown that they are overparameterized; attention heads can be pruned without significant performance loss. In this work, we instead “reallocate” them: the model learns to activate different heads on different inputs. Drawing connections between multi-head attention and mixture of experts, we propose the mixture of attentive experts model (MAE). MAE is trained using a block coordinate descent algorithm that alternates between updating (1) the responsibilities of the experts and (2) their parameters. Experiments on machine translation and language modeling show that MAE outperforms strong baselines on both tasks. In particular, on the WMT14 English to German translation dataset, MAE improves over “transformer-base” by 0.8 BLEU, with a comparable number of parameters. Our analysis shows that our model learns to specialize different experts to different inputs.
UR - https://www.scopus.com/pages/publications/85102666095
U2 - 10.18653/v1/2020.acl-main.587
DO - 10.18653/v1/2020.acl-main.587
M3 - Conference contribution
AN - SCOPUS:85102666095
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 6566
EP - 6577
BT - ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference
PB - Association for Computational Linguistics (ACL)
T2 - 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020
Y2 - 5 July 2020 through 10 July 2020
ER -