TY - JOUR
T1 - Compressing large-scale Transformer-based models
T2 - A case study on BERT
AU - Ganesh, Prakhar
AU - Chen, Yao
AU - Lou, Xin
AU - Khan, Mohammad Ali
AU - Yang, Yin
AU - Sajjad, Hassan
AU - Nakov, Preslav
AU - Chen, Deming
AU - Winslett, Marianne
N1 - Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021/9/21
Y1 - 2021/9/21
AB - Pre-trained Transformer-based models have achieved state-of-the-art performance for various Natural Language Processing (NLP) tasks. However, these models often have billions of parameters, and thus are too resource-hungry and computation-intensive to suit low-capability devices or applications with strict latency requirements. One potential remedy for this is model compression, which has attracted considerable research attention. Here, we summarize the research in compressing Transformers, focusing on the especially popular BERT model. In particular, we survey the state of the art in compression for BERT, we clarify the current best practices for compressing large-scale Transformer models, and we provide insights into the workings of various methods. Our categorization and analysis also shed light on promising future research directions for achieving lightweight, accurate, and generic NLP models.
UR - http://www.scopus.com/inward/record.url?scp=85119483414&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85119483414&partnerID=8YFLogxK
U2 - 10.1162/tacl_a_00413
DO - 10.1162/tacl_a_00413
M3 - Article
AN - SCOPUS:85119483414
SN - 2307-387X
VL - 9
SP - 1061
EP - 1080
JO - Transactions of the Association for Computational Linguistics
JF - Transactions of the Association for Computational Linguistics
ER -