Scalable kernel methods via doubly stochastic gradients

Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria Florina Balcan, Le Song

Research output: Contribution to journal › Conference article

Abstract

The general perception is that kernel methods are not scalable, so neural nets become the choice for large-scale nonlinear learning problems. Have we tried hard enough for kernel methods? In this paper, we propose an approach that scales up kernel methods using a novel concept called "doubly stochastic functional gradients". Based on the fact that many kernel methods can be expressed as convex optimization problems, our approach solves these optimization problems by making two unbiased stochastic approximations to the functional gradient - one using random training points and the other using random features associated with the kernel - and performing descent steps with this noisy functional gradient. Our algorithm is simple, requires no commitment to a preset number of random features, and allows the flexibility of the function class to grow as we see more incoming data in the streaming setting. We demonstrate that a function learned by this procedure after t iterations converges to the optimal function in the reproducing kernel Hilbert space at rate O(1/t), and achieves a generalization bound of O(1/√t). Our approach can readily scale kernel methods up to regimes that are dominated by neural nets. We show competitive performance of our approach compared to neural nets on datasets such as 2.3 million energy materials from MolecularSpace, 8 million handwritten digits from MNIST, and 1 million photos from ImageNet using convolution features.
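To make the idea concrete, below is a minimal NumPy sketch of the doubly stochastic scheme the abstract describes, instantiated for least-squares regression with an RBF kernel and random Fourier features. It is an illustration under stated assumptions, not the authors' reference implementation; the function names (random_feature, predict, dsgd), the hyperparameters (sigma, nu, gamma0), and the toy data are all hypothetical choices. The key points it demonstrates: each iteration samples one random training point and one random feature, the random feature directions are regenerated from their seeds at prediction time so only scalar coefficients are stored, and the number of features grows with the number of iterations rather than being fixed in advance.

```python
import numpy as np

def random_feature(x, seed, sigma=1.0):
    """Random Fourier feature phi_w(x) for the RBF kernel; w and b are
    regenerated on demand from the seed, so only coefficients are stored."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=1.0 / sigma, size=x.shape[-1])
    b = rng.uniform(0.0, 2.0 * np.pi)
    return np.sqrt(2.0) * np.cos(x @ w + b)

def predict(x, alphas, sigma=1.0):
    """f_t(x) = sum_i alpha_i * phi_{w_i}(x), re-sampling each w_i by its seed."""
    return sum(a * random_feature(x, seed=i, sigma=sigma)
               for i, a in enumerate(alphas))

def dsgd(X, y, T, nu=1e-4, gamma0=0.5, sigma=1.0, rng=None):
    """Doubly stochastic functional gradient descent for the regularized
    squared loss: one random data point and one random feature per step."""
    rng = rng or np.random.default_rng(0)
    alphas = []
    for t in range(1, T + 1):
        i = rng.integers(len(X))                   # random training point
        gamma = gamma0 / t                         # decaying step size
        err = predict(X[i], alphas, sigma) - y[i]  # squared-loss residual
        # Shrink old coefficients (gradient of the nu/2 ||f||^2 regularizer).
        alphas = [(1.0 - gamma * nu) * a for a in alphas]
        # New coefficient for the freshly sampled random feature; its seed
        # equals its index so predict() can regenerate the same direction.
        alphas.append(-gamma * err * random_feature(X[i], seed=len(alphas),
                                                    sigma=sigma))
    return alphas

# Toy usage: regress y = sin(3x) from noisy samples.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(500, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=500)
alphas = dsgd(X, y, T=1000, rng=rng)
print("train MSE:", np.mean([(predict(x, alphas) - t) ** 2
                             for x, t in zip(X, y)]))
```

Note the memory pattern this design buys: because each random direction w_i is reproducible from its seed, the learned function is represented by t scalars after t iterations, which is what lets the function class grow with the data stream instead of committing to a preset feature budget.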

Original language: English (US)
Pages (from-to): 3041-3049
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 4
Issue number: January
State: Published - Jan 1 2014
Externally published: Yes
Event: 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014 - Montreal, Canada
Duration: Dec 8 2014 – Dec 13 2014


ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Dai, B., Xie, B., He, N., Liang, Y., Raj, A., Balcan, M. F., & Song, L. (2014). Scalable kernel methods via doubly stochastic gradients. Advances in Neural Information Processing Systems, 4(January), 3041-3049.