TY - CONF
T1 - PALT: Parameter-Lite Transfer of Language Models for Knowledge Graph Completion
T2 - Findings of the Association for Computational Linguistics: EMNLP 2022
AU - Shen, Jianhao
AU - Wang, Chenguang
AU - Yuan, Ye
AU - Han, Jiawei
AU - Ji, Heng
AU - Sen, Koushik
AU - Zhang, Ming
AU - Song, Dawn
N1 - We would like to thank the anonymous reviewers for their suggestions and comments. This paper is partially supported by the National Key Research and Development Program of China (Grant No. 2018AAA0101902) and the National Natural Science Foundation of China (NSFC Grant Nos. 62106008 and 62276002). This material is in part based upon work supported by Berkeley DeepDrive and Berkeley Artificial Intelligence Research. The research was also supported in part by the US DARPA KAIROS Program No. FA8750-19-2-1004 and INCAS Program No. HR001121C0165, and National Science Foundation grants IIS-19-56151, IIS-17-41317, and IIS-17-04532.
PY - 2022
Y1 - 2022
AB - This paper presents a parameter-lite transfer learning approach of pretrained language models (LM) for knowledge graph (KG) completion. Instead of finetuning, which modifies all LM parameters, we only tune a few new parameters while keeping the original LM parameters fixed. We establish this via reformulating KG completion as a “fill-in-the-blank” task, and introducing a parameter-lite encoder on top of the original LMs. We show that, by tuning far fewer parameters than finetuning, LMs transfer non-trivially to most tasks and reach competitiveness with prior state-of-the-art approaches. For instance, we outperform the fully finetuning approaches on a KG completion benchmark by tuning only 1% of the parameters.
UR - http://www.scopus.com/inward/record.url?scp=85149889015&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85149889015&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85149889015
SP - 3862
EP - 3876
Y2 - 7 December 2022 through 11 December 2022
ER -
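
The abstract above describes the general recipe: keep the pretrained LM's parameters frozen, verbalize KG completion as a "fill-in-the-blank" text task, and train only a small new encoder on top. Below is a minimal, hypothetical sketch of that recipe in PyTorch with Hugging Face Transformers; the model name (bert-base-uncased), the adapter/classifier design, and the triple-to-text template are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ParameterLiteHead(nn.Module):
    # Small trainable encoder placed on top of the frozen LM (design is assumed).
    def __init__(self, hidden_size, bottleneck=64, num_labels=2):
        super().__init__()
        self.adapter = nn.Sequential(
            nn.Linear(hidden_size, bottleneck),
            nn.GELU(),
            nn.Linear(bottleneck, hidden_size),
        )
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, cls_hidden):
        # Residual adapter followed by a triple-plausibility classifier.
        return self.classifier(cls_hidden + self.adapter(cls_hidden))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased")
for p in lm.parameters():
    p.requires_grad = False        # original LM parameters stay fixed

head = ParameterLiteHead(lm.config.hidden_size)

# Verbalize a KG triple as text (template is hypothetical), encode it with the
# frozen LM, and score it with the small tuned head; only `head` is updated.
inputs = tokenizer("Paris is the capital of France.", return_tensors="pt")
with torch.no_grad():
    cls_hidden = lm(**inputs).last_hidden_state[:, 0]

logits = head(cls_hidden)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

In this sketch, sum(p.numel() for p in head.parameters()) is a tiny fraction of the roughly 110M parameters in bert-base-uncased, in the spirit of the "tuning only 1% of the parameters" figure quoted in the abstract.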