Abstract
An information-theoretic upper bound on the generalization error of supervised learning algorithms is derived. The bound is constructed in terms of the mutual information between each individual training sample and the output of the learning algorithm. It is derived under more general conditions on the loss function than in existing studies, yet it provides a tighter characterization of the generalization error. Examples of learning algorithms are provided to demonstrate the tightness of the bound and to show its broad range of applicability. Application to noisy, iterative algorithms such as stochastic gradient Langevin dynamics (SGLD) is also studied, where the constructed bound provides a tighter characterization of the generalization error than existing results. Finally, it is demonstrated that, unlike existing bounds, which are difficult to compute and evaluate empirically, the proposed bound can be estimated easily in practice.
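For concreteness, a bound of the kind described above can be sketched as follows. This is an illustrative form only, written under a sub-Gaussian loss assumption; the notation (algorithm output W, training samples Z_1, ..., Z_n, and sub-Gaussian parameter sigma) is introduced here for illustration and is not taken verbatim from this record.

```latex
% Sketch of an individual-sample mutual-information generalization bound,
% assuming the loss \ell(w, Z) is \sigma-sub-Gaussian under the data distribution:
\[
  \bigl|\mathbb{E}\,[\mathrm{gen}(W, S)]\bigr|
  \;\le\; \frac{1}{n} \sum_{i=1}^{n} \sqrt{2\sigma^{2}\, I(W; Z_i)},
\]
% where I(W; Z_i) is the mutual information between the algorithm output W
% and the i-th training sample Z_i of the dataset S = (Z_1, \dots, Z_n).
```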
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Article number | 2991139 |
| Pages (from-to) | 121-130 |
| Number of pages | 10 |
| Journal | IEEE Journal on Selected Areas in Information Theory |
| Volume | 1 |
| Issue number | 1 |
| DOIs | |
| State | Published - May 2020 |
Keywords
- Cumulant generating function
- Generalization error
- Information-theoretic bounds
- Stochastic gradient Langevin dynamics
ASJC Scopus subject areas
- Computer Networks and Communications
- Media Technology
- Artificial Intelligence
- Applied Mathematics