TY - GEN
T1 - Empowering Parameter-Efficient Transfer Learning by Recognizing the Kernel Structure in Attention
AU - Chen, Yifan
AU - Hazarika, Devamanyu
AU - Namazifar, Mahdi
AU - Liu, Yang
AU - Jin, Di
AU - Hakkani-Tur, Dilek
N1 - Publisher Copyright:
© Findings of the Association for Computational Linguistics: NAACL 2022 - Findings.
PY - 2022
Y1 - 2022
N2 - The massive number of trainable parameters in pre-trained language models (PLMs) makes them hard to deploy to multiple downstream tasks. To address this issue, parameter-efficient transfer learning methods have been proposed to tune only a few parameters during fine-tuning while freezing the rest. This paper examines existing methods along this line through the kernel lens. Motivated by the connection between self-attention in transformer-based PLMs and kernel learning, we propose kernel-wise adapters, namely Kernel-mix, that utilize the kernel structure in self-attention to guide the assignment of the tunable parameters. These adapters use guidelines found in classical kernel learning and enable separate parameter tuning for each attention head. Our empirical results, over a diverse set of natural language generation and understanding tasks, show that our proposed adapters can match or improve upon the strong performance of existing baselines.
UR - http://www.scopus.com/inward/record.url?scp=85137328053&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137328053&partnerID=8YFLogxK
U2 - 10.18653/v1/2022.findings-naacl.102
DO - 10.18653/v1/2022.findings-naacl.102
M3 - Conference contribution
AN - SCOPUS:85137328053
T3 - Findings of the Association for Computational Linguistics: NAACL 2022 - Findings
SP - 1375
EP - 1388
BT - Findings of the Association for Computational Linguistics: NAACL 2022
PB - Association for Computational Linguistics (ACL)
T2 - 2022 Findings of the Association for Computational Linguistics: NAACL 2022
Y2 - 10 July 2022 through 15 July 2022
ER -