Abstract
Kulesza et al. [2014] recently observed that low-rank spectral learning algorithms, which discard the smallest singular values of a moment matrix during training, can behave in unexpected ways, producing large errors even when the discarded singular values are arbitrarily small. In this paper we prove that when learning predictive state representations those problematic cases disappear if we introduce a particular weighted loss function and learn using sufficiently large sets of statistics; our main result is a bound on the loss of the learned low-rank model in terms of the singular values that are discarded. Practically speaking, this suggests that regardless of the model rank we should use the largest possible sets of statistics, and we show empirically that this is true on both synthetic and real-world domains.
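The truncation step the abstract refers to can be made concrete with a small sketch. The snippet below is purely illustrative and is not the paper's PSR learning procedure or weighted loss: it builds a synthetic stand-in for an empirical moment (Hankel-style) matrix, keeps only the top-k singular values, and reports the discarded spectral mass, the quantity the paper's bound is stated in terms of.

```python
# Illustrative sketch of the rank-k truncation used by low-rank spectral
# learning methods (synthetic data; not the authors' exact algorithm).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an empirical moment matrix estimated from
# history/test statistics (shapes chosen only for illustration).
P = rng.random((50, 50))
P = P @ P.T / 50.0           # make it symmetric and well-scaled

k = 5                        # chosen model rank
U, s, Vt = np.linalg.svd(P, full_matrices=False)

# Rank-k truncation: discard the smallest singular values.
P_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

discarded = s[k:]
print("kept singular values:      ", np.round(s[:k], 4))
print("largest discarded value:   ", float(discarded.max()))
print("Frobenius truncation error:", float(np.linalg.norm(P - P_k)))
```

The point made by Kulesza et al. [2014], and addressed in this paper, is that a small discarded spectrum does not by itself guarantee a small prediction error for the learned model unless the loss and the statistics are chosen appropriately.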
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 517-525 |
| Number of pages | 9 |
| Journal | Journal of Machine Learning Research |
| Volume | 38 |
| State | Published - 2015 |
| Externally published | Yes |
| Event | 18th International Conference on Artificial Intelligence and Statistics, AISTATS 2015, San Diego, United States, May 9-12, 2015 |
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence