Communication algorithms via deep learning

Hyeji Kim, Yihan Jiang, Ranvir Rana, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

Research output: Contribution to conference › Paper

Abstract

Coding theory is a central discipline underpinning wireline and wireless modems that are the workhorses of the information age. Progress in coding theory is largely driven by individual human ingenuity with sporadic breakthroughs over the past century. In this paper we study whether it is possible to automate the discovery of decoding algorithms via deep learning. We study a family of sequential codes parametrized by recurrent neural network (RNN) architectures. We show that creatively designed and trained RNN architectures can decode well-known sequential codes, such as convolutional and turbo codes, with close to optimal performance on the additive white Gaussian noise (AWGN) channel, which itself is achieved by breakthrough algorithms of our times (Viterbi and BCJR decoders, representing dynamic programming and forward-backward algorithms). We show strong generalization, i.e., we train at a specific signal-to-noise ratio and block length but test at a wide range of these quantities, as well as robustness and adaptivity to deviations from the AWGN setting.
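The near-optimal baseline the abstract refers to, the Viterbi decoder, is a dynamic-programming algorithm over the encoder's state trellis. As a hypothetical illustration (not code from the paper, whose decoders are learned RNNs), here is a minimal hard-decision Viterbi decoder for the standard rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5):

```python
# Rate-1/2, constraint-length-3 convolutional code, generators (7, 5) octal.
# Illustrative sketch of the classical dynamic-programming decoder; the
# paper's contribution is replacing such decoders with trained RNNs.

G = [0b111, 0b101]  # generator polynomial taps

def encode(bits):
    """Encode a bit list, appending 2 zero bits to flush the register."""
    state = 0
    out = []
    for b in bits + [0, 0]:
        reg = (b << 2) | state  # register = [new bit, two previous bits]
        out.extend(bin(reg & g).count("1") % 2 for g in G)
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Minimum-Hamming-distance sequence decoding via dynamic programming."""
    n_steps = len(received) // 2
    INF = float("inf")
    metrics = [0] + [INF] * 3       # path metric per 2-bit encoder state
    paths = [[] for _ in range(4)]  # surviving input sequence per state
    for t in range(n_steps):
        r = received[2 * t:2 * t + 2]
        new_metrics = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue  # state not yet reachable
            for b in (0, 1):
                reg = (b << 2) | s
                expect = [bin(reg & g).count("1") % 2 for g in G]
                ns = reg >> 1
                m = metrics[s] + sum(x != y for x, y in zip(expect, r))
                if m < new_metrics[ns]:  # keep the better of the two merging paths
                    new_metrics[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(range(4), key=lambda s: metrics[s])
    return paths[best][:-2]  # drop the 2 flush bits
```

This code has free distance 5, so the decoder corrects any single bit flip in the received stream; the paper's RNN decoders are trained to match this kind of maximum-likelihood performance on noisy (AWGN) channel outputs rather than hard bits.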

Original language: English (US)
State: Published - Jan 1, 2018
Event: 6th International Conference on Learning Representations, ICLR 2018 - Vancouver, Canada
Duration: Apr 30, 2018 - May 3, 2018




Cite this

Kim, H., Jiang, Y., Rana, R., Kannan, S., Oh, S., & Viswanath, P. (2018). Communication algorithms via deep learning. Paper presented at 6th International Conference on Learning Representations, ICLR 2018, Vancouver, Canada.

