Clustered support vector machines

Quanquan Gu, Jiawei Han

Research output: Contribution to journal › Conference article › peer-review

Abstract

In many machine learning problems, the data are distributed nonlinearly. One way to address such data is to train a nonlinear classifier such as the kernel support vector machine (kernel SVM). However, the computational burden of kernel SVM limits its application to large-scale datasets. In this paper, we propose a Clustered Support Vector Machine (CSVM), which tackles the data in a divide-and-conquer manner. More specifically, CSVM groups the data into several clusters, after which it trains a linear support vector machine in each cluster to separate the data locally. Meanwhile, CSVM adds a global regularization term that requires the weight vector of each local linear SVM to align with a global weight vector. This global regularization leverages information from one cluster to another and avoids over-fitting within each cluster. We derive a data-dependent generalization error bound for CSVM, which explains the advantage of CSVM over linear SVM. Experiments on several benchmark datasets show that the proposed method outperforms linear SVM and several related locally linear classifiers, and that it is comparable to a fine-tuned kernel SVM in prediction performance while being more efficient.
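The divide-and-conquer scheme described above can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the authors' code: the k-means clustering step, the `lam` trade-off parameter, and the `csvm_embed` helper are assumptions made for this sketch. It uses the standard block-embedding trick for tying local models to a shared weight vector, so that a single linear SVM on the embedded features recovers per-cluster weights of the form w_l = w_bar + v_l, mirroring the global regularization in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def csvm_embed(X, cluster_ids, k, lam):
    """Block embedding for CSVM-style training (sketch, not the paper's code).

    Each sample gets a shared "global" block (scaled by 1/sqrt(lam)) plus
    one "local" block for its cluster.  A single linear SVM trained on this
    embedding yields per-cluster weights w_l = w_bar + v_l under the
    regularizer lam * ||w_bar||^2 + sum_l ||v_l||^2.
    """
    n, d = X.shape
    Phi = np.zeros((n, (k + 1) * d))
    Phi[:, :d] = X / np.sqrt(lam)                 # global block, shared by all
    for i, l in enumerate(cluster_ids):
        Phi[i, (l + 1) * d:(l + 2) * d] = X[i]    # local block for cluster l
    return Phi

# Toy usage: labels that are nonlinear in the raw features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (np.sin(3 * X[:, 0]) + X[:, 1] > 0).astype(int)

k, lam = 4, 1.0
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
clf = LinearSVC(C=1.0).fit(csvm_embed(X, km.labels_, k, lam), y)

# At test time, route each point to its nearest centroid, then embed.
X_new = rng.normal(size=(50, 5))
pred = clf.predict(csvm_embed(X_new, km.predict(X_new), k, lam))
```

The embedding keeps training as cheap as a single linear SVM in (k+1)d dimensions, which is the source of the efficiency advantage over kernel SVM claimed in the abstract; the paper itself should be consulted for the exact objective and the generalization bound.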

Original language: English (US)
Pages (from-to): 307-315
Number of pages: 9
Journal: Journal of Machine Learning Research
Volume: 31
State: Published - 2013
Event: 16th International Conference on Artificial Intelligence and Statistics, AISTATS 2013 - Scottsdale, United States
Duration: Apr 29, 2013 - May 1, 2013

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability
