Abstract
We study the problem of sequential prediction with coded side information under logarithmic loss (log-loss). We show an operational equivalence between this setup and lossy compression with log-loss distortion. Using this insight, together with recent work on lossy compression with log-loss, we connect prediction strategies with distributions in a certain subset of the probability simplex. This allows us to derive a Shtarkov-like bound for regret and to evaluate the regret for several illustrative classes of experts. In the present work, we mainly focus on the “batch” side information setting with sequential prediction.
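The abstract's "Shtarkov-like bound for regret" builds on the classical Shtarkov (normalized maximum likelihood) sum: for a finite class of experts under log-loss, the minimax regret equals the log of the sum, over all sequences, of the best expert's probability of that sequence. The sketch below computes this quantity for a small Bernoulli expert class; the expert parameters and horizon are illustrative assumptions, not taken from the paper.

```python
import itertools
import math

def shtarkov_regret(experts, horizon):
    """Minimax log-loss regret (in bits) for a finite class of
    Bernoulli experts over binary sequences of the given length.

    Computed as log2 of the Shtarkov sum:
        S = sum over sequences x of max_theta p_theta(x).
    """
    total = 0.0
    for x in itertools.product([0, 1], repeat=horizon):
        k = sum(x)  # number of ones in the sequence
        # Probability the best expert in the class assigns to x
        best = max(th ** k * (1 - th) ** (horizon - k) for th in experts)
        total += best
    return math.log2(total)

# Illustrative class of two Bernoulli experts over sequences of length 4.
regret = shtarkov_regret([0.3, 0.7], 4)
```

For any class of M experts the Shtarkov sum is at most M (each expert's probabilities sum to 1), so the regret here is bounded by log2(2) = 1 bit; the coded side information studied in the paper restricts the achievable strategies to a subset of the simplex, which is where the Shtarkov-like (rather than exact Shtarkov) bound comes in.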
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 753-769 |
| Number of pages | 17 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 83 |
| State | Published - 2018 |
| Event | 29th International Conference on Algorithmic Learning Theory, ALT 2018 - Lanzarote, Spain; Apr 7 2018 → Apr 9 2018 |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability