Tight bounds for collaborative PAC learning via multiplicative weights

Jiecao Chen, Qin Zhang, Yuan Zhou

Research output: Contribution to journal › Conference article

Abstract

We study the collaborative PAC learning problem recently proposed in Blum et al. [3], in which we have k players who want to learn a target function collaboratively, such that the learned function approximates the target function well on all players' distributions simultaneously. The quality of a collaborative learning algorithm is measured by the ratio between its sample complexity and that of the learning algorithm for a single distribution (called the overhead). We obtain a collaborative learning algorithm with overhead O(ln k), improving the one with overhead O(ln² k) in [3]. We also show that an Ω(ln k) overhead is inevitable when k is polynomially bounded by the VC dimension of the hypothesis class. Finally, our experimental study demonstrates the superiority of our algorithm over the one in Blum et al. [3] on real-world datasets.
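In plain terms, the overhead is the ratio m_collab(ε, δ) / m_single(ε, δ): the number of samples the collaborative algorithm draws, divided by the number a standard (ε, δ)-PAC learner needs for one distribution. The sketch below is a minimal illustration of how a multiplicative-weights loop of the kind named in the title can be organized; it is not the paper's exact algorithm. The callables learn (a black-box single-distribution learner), draw (samples from a weighted mixture of the players' distributions), and err (an empirical error estimate on one player's distribution), as well as the doubling rule and the round/sample parameters, are all illustrative assumptions.

    # Illustrative sketch only: a multiplicative-weights loop over k players.
    # `learn`, `draw`, and `err` are assumed black boxes, not from the paper.
    def collaborative_learn(dists, learn, draw, err, eps, rounds, m):
        k = len(dists)
        w = [1.0] * k                       # one weight per player
        hypotheses = []
        for _ in range(rounds):
            h = learn(draw(dists, w, m))    # train on the w-weighted mixture
            hypotheses.append(h)
            for i in range(k):
                if err(h, dists[i], m) > eps:
                    w[i] *= 2.0             # boost players the hypothesis fails
        return hypotheses                   # combine, e.g., by a majority vote

Intuitively, doubling the weight of a player whose distribution the current hypothesis still fails concentrates later rounds' samples where they are most needed; the paper's result is that a scheme in this spirit achieves overhead O(ln k), and that Ω(ln k) is unavoidable.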

Original language: English (US)
Pages (from-to): 3598-3607
Number of pages: 10
Journal: Advances in Neural Information Processing Systems
Volume: 2018-December
State: Published - Jan 1 2018
Event: 32nd Conference on Neural Information Processing Systems, NeurIPS 2018 - Montreal, Canada
Duration: Dec 2 2018 – Dec 8 2018

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Tight bounds for collaborative PAC learning via multiplicative weights. / Chen, Jiecao; Zhang, Qin; Zhou, Yuan.

In: Advances in Neural Information Processing Systems, Vol. 2018-December, 01.01.2018, p. 3598-3607.

Research output: Contribution to journal › Conference article

@article{8fd25f1877e242a3a0b75dbe66c7d189,
title = "Tight bounds for collaborative PAC learning via multiplicative weights",
abstract = "We study the collaborative PAC learning problem recently proposed in Blum et al. [3], in which we have k players and they want to learn a target function collaboratively, such that the learned function approximates the target function well on all players' distributions simultaneously. The quality of the collaborative learning algorithm is measured by the ratio between the sample complexity of the algorithm and that of the learning algorithm for a single distribution (called the overhead). We obtain a collaborative learning algorithm with overhead O(ln k), improving the one with overhead O(ln2 k) in [3]. We also show that an Ω(ln k) overhead is inevitable when k is polynomial bounded by the VC dimension of the hypothesis class. Finally, our experimental study has demonstrated the superiority of our algorithm compared with the one in Blum et al. [3] on real-world datasets.",
author = "Jiecao Chen and Qin Zhang and Yuan Zhou",
year = "2018",
month = "1",
day = "1",
language = "English (US)",
volume = "2018-December",
pages = "3598--3607",
journal = "Advances in Neural Information Processing Systems",
issn = "1049-5258",

}

TY - JOUR

T1 - Tight bounds for collaborative PAC learning via multiplicative weights

AU - Chen, Jiecao

AU - Zhang, Qin

AU - Zhou, Yuan

PY - 2018/1/1

Y1 - 2018/1/1

N2 - We study the collaborative PAC learning problem recently proposed in Blum et al. [3], in which we have k players who want to learn a target function collaboratively, such that the learned function approximates the target function well on all players' distributions simultaneously. The quality of a collaborative learning algorithm is measured by the ratio between its sample complexity and that of the learning algorithm for a single distribution (called the overhead). We obtain a collaborative learning algorithm with overhead O(ln k), improving the one with overhead O(ln² k) in [3]. We also show that an Ω(ln k) overhead is inevitable when k is polynomially bounded by the VC dimension of the hypothesis class. Finally, our experimental study demonstrates the superiority of our algorithm over the one in Blum et al. [3] on real-world datasets.

AB - We study the collaborative PAC learning problem recently proposed in Blum et al. [3], in which we have k players who want to learn a target function collaboratively, such that the learned function approximates the target function well on all players' distributions simultaneously. The quality of a collaborative learning algorithm is measured by the ratio between its sample complexity and that of the learning algorithm for a single distribution (called the overhead). We obtain a collaborative learning algorithm with overhead O(ln k), improving the one with overhead O(ln² k) in [3]. We also show that an Ω(ln k) overhead is inevitable when k is polynomially bounded by the VC dimension of the hypothesis class. Finally, our experimental study demonstrates the superiority of our algorithm over the one in Blum et al. [3] on real-world datasets.

UR - http://www.scopus.com/inward/record.url?scp=85064839438&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85064839438&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85064839438

VL - 2018-December

SP - 3598

EP - 3607

JO - Advances in Neural Information Processing Systems

JF - Advances in Neural Information Processing Systems

SN - 1049-5258

ER -