An approach to one-bit compressed sensing based on probably approximately correct learning theory

Mehmet Eren Ahsen, Mathukumalli Vidyasagar

Research output: Contribution to journal › Article

Abstract

In this paper, the problem of one-bit compressed sensing (OBCS) is formulated as a problem in probably approximately correct (PAC) learning. It is shown that the Vapnik-Chervonenkis (VC-) dimension of the set of half-spaces in R^n generated by k-sparse vectors is bounded below by k(⌊lg(n/k)⌋ + 1) and above by ⌊2k lg(en)⌋. By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a k-sparse vector with O(k lg n) measurements, given only the signs of the measurement vector. This result holds for all probability measures on R^n. The theory is also applicable to the case of noisy labels, where the signs of the measurements are flipped with some unknown probability.
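
For intuition, the OBCS measurement model y = sign(Ax) described in the abstract can be simulated in a few lines of Python. The sketch below assumes a Gaussian measurement matrix and uses a simple hard-thresholding baseline estimator purely for illustration; it is not the consistent algorithm analyzed in the paper, and the factor 10 in the sample size is an arbitrary constant standing in for the O(k lg n) bound.

    import numpy as np

    rng = np.random.default_rng(0)

    n, k = 1000, 5                    # ambient dimension and sparsity
    m = int(10 * k * np.log2(n))      # O(k lg n) one-bit measurements

    # Unknown k-sparse unit vector (the signs determine it only up to
    # scale, so we normalize and compare directions).
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    x /= np.linalg.norm(x)

    A = rng.standard_normal((m, n))   # Gaussian measurement matrix (assumption)
    y = np.sign(A @ x)                # one-bit data: only the signs are kept

    # Baseline estimator: correlate the signs back through A and keep the
    # k largest entries. Illustrative only, not the paper's algorithm.
    z = A.T @ y
    xhat = np.zeros(n)
    top = np.argsort(np.abs(z))[-k:]
    xhat[top] = z[top]
    xhat /= np.linalg.norm(xhat)

    print("support recovered:", set(top) == set(support))
    print("direction error  :", round(float(np.linalg.norm(x - xhat)), 3))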

Original language: English (US)
Journal: Journal of Machine Learning Research
Volume: 20
State: Published - Jan 1 2019
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence

Cite this

An approach to one-bit compressed sensing based on probably approximately correct learning theory. / Ahsen, Mehmet Eren; Vidyasagar, Mathukumalli.

In: Journal of Machine Learning Research, Vol. 20, 01.01.2019.

Research output: Contribution to journal › Article

@article{658decb459b2484298f56ee54ef22cbe,
title = "An approach to one-bit compressed sensing based on probably approximately correct learning theory",
abstract = "In this paper, the problem of one-bit compressed sensing (OBCS) is formulated as a problem in probably approximately correct (PAC) learning. It is shown that the Vapnik-Chervonenkis (VC-) dimension of the set of half-spaces in R^n generated by k-sparse vectors is bounded below by k(⌊lg(n/k)⌋ + 1) and above by ⌊2k lg(en)⌋. By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a k-sparse vector with O(k lg n) measurements, given only the signs of the measurement vector. This result holds for all probability measures on R^n. The theory is also applicable to the case of noisy labels, where the signs of the measurements are flipped with some unknown probability.",
author = "Ahsen, {Mehmet Eren} and Mathukumalli Vidyasagar",
year = "2019",
month = "1",
day = "1",
language = "English (US)",
volume = "20",
journal = "Journal of Machine Learning Research",
issn = "1532-4435",
publisher = "Microtome Publishing",

}

TY - JOUR

T1 - An approach to one-bit compressed sensing based on probably approximately correct learning theory

AU - Ahsen, Mehmet Eren

AU - Vidyasagar, Mathukumalli

PY - 2019/1/1

Y1 - 2019/1/1

N2 - In this paper, the problem of one-bit compressed sensing (OBCS) is formulated as a problem in probably approximately correct (PAC) learning. It is shown that the Vapnik-Chervonenkis (VC-) dimension of the set of half-spaces in R^n generated by k-sparse vectors is bounded below by k(⌊lg(n/k)⌋ + 1) and above by ⌊2k lg(en)⌋. By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a k-sparse vector with O(k lg n) measurements, given only the signs of the measurement vector. This result holds for all probability measures on R^n. The theory is also applicable to the case of noisy labels, where the signs of the measurements are flipped with some unknown probability.

UR - http://www.scopus.com/inward/record.url?scp=85072513935&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85072513935&partnerID=8YFLogxK

M3 - Article

AN - SCOPUS:85072513935

VL - 20

JO - Journal of Machine Learning Research

JF - Journal of Machine Learning Research

SN - 1532-4435

ER -