Power-Efficient Deep Neural Networks with Noisy Memristor Implementation

Abstract

This paper considers Deep Neural Network (DNN) linear-nonlinear computations implemented on memristor crossbar substrates. To address the case where the true memristor conductance values may differ from their target values, it introduces a theoretical framework that characterizes the effect of conductance variations on the final inference computation. Under only second-order moment assumptions, theoretical results are given for tracking the mean, variance, and covariance of the layer-by-layer noisy computations. By allowing certain signals within the DNN to be amplified, power consumption is characterized and then optimized via KKT conditions. Simulation results verify the accuracy of the proposed analysis and demonstrate the significant power-efficiency gains that are possible when optimizing for a target mean squared error.
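
To make the moment-tracking idea concrete, the sketch below simulates a single noisy crossbar layer in NumPy. It is not the paper's implementation: the layer sizes, the noise level `sigma`, and the i.i.d. zero-mean perturbation model are all illustrative assumptions. Under those assumptions, each output of y = (W + N)x has mean (Wx)_i and error variance sigma^2 * ||x||^2, which the simulation checks empirically.

```python
import numpy as np

# Hypothetical parameters, not taken from the paper: layer widths,
# noise level, and trial count are illustrative only.
rng = np.random.default_rng(0)
n_in, n_out, sigma, trials = 32, 16, 0.05, 5_000

W = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)  # target conductances
x = rng.standard_normal(n_in)                           # layer input

# Draw many noisy crossbar instantiations G = W + N, with an i.i.d.
# zero-mean perturbation of standard deviation sigma on each cell.
noise = sigma * rng.standard_normal((trials, n_out, n_in))
y_noisy = (W + noise) @ x  # shape (trials, n_out)

# Second-order moment predictions: the noise is zero-mean, so the output
# mean is the noiseless product Wx; each output error is sum_j N_ij x_j,
# so its variance is sigma^2 * ||x||^2.
emp_mean_err = np.abs(y_noisy.mean(axis=0) - W @ x).max()
emp_var = y_noisy.var(axis=0).mean()
pred_var = sigma**2 * np.dot(x, x)
print(f"max mean deviation: {emp_mean_err:.4f}")
print(f"empirical variance: {emp_var:.4f}, predicted: {pred_var:.4f}")
```

In the paper, this kind of layer-by-layer moment tracking feeds a power model in which selected signals are amplified and the gains are optimized via KKT conditions for a target mean squared error; that optimization is not reproduced in this sketch.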

Original language: English (US)
Title of host publication: 2021 IEEE Information Theory Workshop, ITW 2021 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665403122
State: Published - 2021
Event: 2021 IEEE Information Theory Workshop, ITW 2021 - Virtual, Online, Japan
Duration: Oct 17, 2021 - Oct 21, 2021

Publication series

Name: 2021 IEEE Information Theory Workshop, ITW 2021 - Proceedings

Conference

Conference: 2021 IEEE Information Theory Workshop, ITW 2021
Country/Territory: Japan
City: Virtual, Online
Period: 10/17/21 - 10/21/21

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Computer Networks and Communications
  • Information Systems
  • Software
