Compilation and Optimizations for Efficient Machine Learning on Embedded Systems

Xiaofan Zhang, Yao Chen, Cong Hao, Sitao Huang, Yuhong Li, Deming Chen

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Deep Neural Networks (DNNs) have achieved great success in a variety of machine learning (ML) applications, delivering high-quality inference solutions in computer vision, natural language processing, virtual reality, etc. However, DNN-based ML applications also bring significantly increased computational and storage demands, which are particularly challenging for embedded systems with limited compute/storage resources, tight power budgets, and small form factors. Further challenges arise from diverse application-specific requirements, including real-time responses, high-throughput performance, and reliable inference accuracy. To address these challenges, we introduce a series of effective design methodologies, including efficient ML model designs, customized hardware accelerator designs, and hardware/software co-design strategies, to enable efficient ML applications on embedded systems.

Original language: English (US)
Title of host publication: Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing
Subtitle of host publication: Software Optimizations and Hardware/Software Codesign
Publisher: Springer
Pages: 37-74
Number of pages: 38
ISBN (Electronic): 9783031399329
ISBN (Print): 9783031399312
DOIs
State: Published - Jan 1 2023

Keywords

  • Compilation
  • Deep Neural Networks
  • Efficient ML model
  • Embedded systems
  • Hardware accelerator
  • Hardware/software co-design
  • Machine learning
  • Optimization

ASJC Scopus subject areas

  • General Computer Science
  • General Engineering
  • General Social Sciences
