TY - GEN
T1 - Q-Diffusion: Quantizing Diffusion Models
AU - Li, Xiuyu
AU - Liu, Yijiang
AU - Lian, Long
AU - Yang, Huanrui
AU - Dong, Zhen
AU - Kang, Daniel
AU - Zhang, Shanghang
AU - Keutzer, Kurt
PY - 2023/10
Y1 - 2023/10
AB - Diffusion models have achieved great success in image synthesis through iterative noise estimation using deep neural networks. However, the slow inference, high memory consumption, and computation intensity of the noise estimation model hinder the efficient adoption of diffusion models. Although post-training quantization (PTQ) is considered a go-to compression method for other tasks, it does not work out-of-the-box on diffusion models. We propose a novel PTQ method specifically tailored to the unique multi-timestep pipeline and model architecture of diffusion models, which compresses the noise estimation network to accelerate the generation process. We identify the key difficulty of diffusion model quantization as the changing output distributions of the noise estimation network over multiple time steps and the bimodal activation distribution of the shortcut layers within the noise estimation network. We tackle these challenges with timestep-aware calibration and split shortcut quantization. Experimental results show that our proposed method quantizes full-precision unconditional diffusion models to 4-bit weights while maintaining comparable performance (an FID change of at most 2.34, versus >100 for traditional PTQ) in a training-free manner. Our approach can also be applied to text-guided image generation, where we run Stable Diffusion with 4-bit weights at high generation quality for the first time.
UR - http://www.scopus.com/inward/record.url?scp=85178113191&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85178113191&partnerID=8YFLogxK
DO - 10.1109/ICCV51070.2023.01608
M3 - Conference contribution
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 17489
EP - 17499
BT - Proceedings - 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023
ER -