TY - GEN
T1 - Invited Paper
T2 - 29th Asia and South Pacific Design Automation Conference, ASP-DAC 2024
AU - Wan, Lily Jiaxin
AU - Huang, Yingbing
AU - Li, Yuhong
AU - Ye, Hanchen
AU - Wang, Jinghua
AU - Zhang, Xiaofan
AU - Chen, Deming
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The widespread adoption of Large Language Models (LLMs) is impeded by their demanding compute and memory resources. The first task of this paper is to explore optimization strategies to expedite LLMs, including quantization, pruning, and operation-level optimizations. One unique direction is to optimize LLM inference through novel software/hardware co-design methods. Given the accelerated LLMs, the second task of this paper is to study LLMs' performance in the usage scenario of circuit design and verification. Specifically, we place a particular emphasis on functional verification. Through automated prompt engineering, we harness the capabilities of the established LLM, GPT-4, to generate High-Level Synthesis (HLS) designs with predefined errors based on 11 open-source synthesizable HLS benchmark suites. This dataset is a comprehensive collection of over 1000 function-level designs, each of which is afflicted with up to 45 distinct combinations of defects injected into the source code. This dataset, named Chrysalis, expands upon what is available in current HLS error models, offering a rich resource for training LLMs to better debug code. The dataset can be accessed at: https://github.com/UIUC-ChenLab/Chrysalis-HLS.
AB - The widespread adoption of Large Language Models (LLMs) is impeded by their demanding compute and memory resources. The first task of this paper is to explore optimization strategies to expedite LLMs, including quantization, pruning, and operation-level optimizations. One unique direction is to optimize LLM inference through novel software/hardware co-design methods. Given the accelerated LLMs, the second task of this paper is to study LLMs' performance in the usage scenario of circuit design and verification. Specifically, we place a particular emphasis on functional verification. Through automated prompt engineering, we harness the capabilities of the established LLM, GPT-4, to generate High-Level Synthesis (HLS) designs with predefined errors based on 11 open-source synthesizable HLS benchmark suites. This dataset is a comprehensive collection of over 1000 function-level designs, each of which is afflicted with up to 45 distinct combinations of defects injected into the source code. This dataset, named Chrysalis, expands upon what is available in current HLS error models, offering a rich resource for training LLMs to better debug code. The dataset can be accessed at: https://github.com/UIUC-ChenLab/Chrysalis-HLS.
KW - Large Language Models
KW - functional verification
KW - software/hardware co-design
UR - http://www.scopus.com/inward/record.url?scp=85189337069&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85189337069&partnerID=8YFLogxK
U2 - 10.1109/ASP-DAC58780.2024.10473893
DO - 10.1109/ASP-DAC58780.2024.10473893
M3 - Conference contribution
AN - SCOPUS:85189337069
T3 - Proceedings of the Asia and South Pacific Design Automation Conference, ASP-DAC
SP - 435
EP - 441
BT - ASP-DAC 2024 - 29th Asia and South Pacific Design Automation Conference, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 22 January 2024 through 25 January 2024
ER -