Learning deep ℓ0 encoders

Zhangyang Wang, Qing Ling, Thomas S. Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Despite its nonconvex nature, ℓ0 sparse approximation is desirable in many theoretical and application cases. We study the ℓ0 sparse approximation problem with the tool of deep learning, by proposing Deep ℓ0 Encoders. Two typical forms, the ℓ0-regularized problem and the M-sparse problem, are investigated. Based on solid iterative algorithms, we model them as feed-forward neural networks by introducing novel neurons and pooling functions. Enforcing such structural priors acts as an effective network regularization. The deep encoders also enjoy faster inference, larger learning capacity, and better scalability compared to conventional sparse coding solutions. Furthermore, under task-driven losses, the models can be conveniently optimized from end to end. Numerical results demonstrate the impressive performance of the proposed encoders.
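The abstract's core idea, unrolling an iterative ℓ0 solver into the layers of a feed-forward network, can be illustrated with a short sketch. The following is a minimal illustration, not the authors' released code: it unrolls a few hard-thresholding updates of the form z ← H_θ(Wx + Sz) into a fixed-depth forward pass. The function names, dimensions, and random weights are hypothetical; in the paper the corresponding weights are learned end to end under task-driven losses.

    import numpy as np

    def hard_threshold(z, theta):
        # zero out entries with magnitude <= theta (the ell_0 proximal step)
        return z * (np.abs(z) > theta)

    def deep_l0_encoder(x, W, S, theta, n_layers=3):
        # unroll n_layers hard-thresholding updates z <- H_theta(W x + S z)
        # as a feed-forward pass; W, S, theta are fixed here but would be
        # trainable parameters in the learned encoder
        z = hard_threshold(W @ x, theta)
        for _ in range(n_layers - 1):
            z = hard_threshold(W @ x + S @ z, theta)
        return z

    # toy usage with random (untrained) weights and hypothetical dimensions
    rng = np.random.default_rng(0)
    n, m = 64, 128
    x = rng.standard_normal(n)
    W = 0.1 * rng.standard_normal((m, n))
    S = 0.1 * rng.standard_normal((m, m))
    z = deep_l0_encoder(x, W, S, theta=0.05)
    print("nonzero codes:", np.count_nonzero(z), "of", m)

For the M-sparse variant, each layer would instead keep only the M largest-magnitude entries of the update (a max-M pooling step) rather than thresholding at a fixed θ.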

Original language: English (US)
Title of host publication: 30th AAAI Conference on Artificial Intelligence, AAAI 2016
Publisher: American Association for Artificial Intelligence (AAAI) Press
Pages: 2194-2200
Number of pages: 7
ISBN (Electronic): 9781577357605
State: Published - 2016
Event: 30th AAAI Conference on Artificial Intelligence, AAAI 2016 - Phoenix, United States
Duration: Feb 12, 2016 - Feb 17, 2016

Publication series

Name: 30th AAAI Conference on Artificial Intelligence, AAAI 2016

Other

Other: 30th AAAI Conference on Artificial Intelligence, AAAI 2016
Country/Territory: United States
City: Phoenix
Period: 2/12/16 - 2/17/16

ASJC Scopus subject areas

  • Artificial Intelligence
