Psychoacoustic Calibration of Loss Functions for Efficient End-to-End Neural Audio Coding

Kai Zhen, Mi Suk Lee, Jongmo Sung, Seungkwon Beack, Minje Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Conventional audio coding technologies commonly leverage human perception of sound, or psychoacoustics, to reduce the bitrate while preserving the perceptual quality of the decoded audio signals. For neural audio codecs, however, the objective nature of the loss function usually leads to suboptimal sound quality as well as high run-time complexity due to the large model size. In this work, we present a psychoacoustic calibration scheme that re-defines the loss functions of neural audio coding systems so that they can decode signals more perceptually similar to the reference, yet with much lower model complexity. The proposed loss function incorporates the global masking threshold, tolerating reconstruction errors that correspond to inaudible artifacts. Experimental results show that the proposed model outperforms a baseline neural codec that is twice as large and consumes 23.4% more bits per second. With the proposed method, a lightweight neural codec, with only 0.9 million parameters, performs near-transparent audio coding comparable with the commercial MPEG-1 Audio Layer III codec at 112 kbps.
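One plausible reading of the abstract's core idea is a spectral reconstruction loss weighted by the global masking threshold: error in frequency bins where the masking threshold is high (and distortion is therefore less audible) is penalized less. The sketch below is illustrative only, assuming magnitude spectra and an inverse-threshold weighting; the function name, the weighting form, and the floor constant are assumptions, not the paper's exact formulation.

```python
import numpy as np

def masked_spectral_loss(ref_spec, dec_spec, mask_thresh, eps=1e-12):
    """Illustrative psychoacoustically weighted loss (not the paper's exact form).

    ref_spec, dec_spec: magnitude spectra of reference and decoded frames.
    mask_thresh: per-bin global masking threshold; error below it is
                 assumed inaudible, so those bins are down-weighted.
    """
    err = (np.asarray(ref_spec) - np.asarray(dec_spec)) ** 2
    # Inverse-threshold weighting: a high masking threshold means the
    # same error energy contributes less to the loss.
    weights = 1.0 / (np.asarray(mask_thresh) + eps)
    return float(np.mean(weights * err))
```

With this weighting, the same reconstruction error costs less in heavily masked bins than in unmasked ones, which is one way a codec can spend its capacity on audible detail.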

Original language: English (US)
Article number: 9265269
Pages (from-to): 2159-2163
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 27
DOIs
State: Published - 2020
Externally published: Yes

Keywords

  • Audio coding
  • deep neural networks
  • network compression
  • psychoacoustics

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics
