Geometric approach to train support vector machines

Ming Hsuan Yang, Narendra Ahuja

Research output: Contribution to journal › Conference article › peer-review


Support Vector Machines (SVMs) have shown great potential in numerous visual learning and pattern recognition problems. The optimal decision surface of an SVM is constructed from its support vectors, which are conventionally determined by solving a quadratic programming (QP) problem. However, solving a large optimization problem is challenging: it is computationally intensive, and the memory requirement grows with the square of the number of training vectors. In this paper, we propose a geometric method to extract a small superset of support vectors, which we call guard vectors, to construct the optimal decision surface. Specifically, the guard vectors are found by solving a set of linear programming problems. Experimental results on synthetic and real data sets show that the proposed method is more efficient than conventional methods using QPs and requires much less memory.
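The abstract does not detail the paper's LP-based guard-vector construction, but the conventional QP baseline it replaces can be sketched. Below is a minimal, hedged illustration of dual-QP SVM training on a toy 2-D dataset; the dataset and the choice of `scipy` SLSQP as the QP solver are assumptions for illustration, not the paper's method:

```python
# Hedged sketch: conventional dual-QP training of a hard-margin linear SVM,
# the baseline the paper's geometric LP method aims to speed up.
# The toy dataset and solver choice are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

# Tiny linearly separable 2-D toy set (not from the paper).
X = np.array([[2.0, 2.0], [2.5, 3.0], [3.0, 2.5],
              [0.0, 0.0], [-0.5, 1.0], [1.0, -0.5]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

# Gram matrix K_ij = y_i y_j <x_i, x_j> for the linear kernel;
# storing it is the O(n^2) memory cost the abstract refers to.
Yx = y[:, None] * X
K = Yx @ Yx.T

def neg_dual(alpha):
    # Negated dual objective: minimize instead of maximize.
    return 0.5 * alpha @ K @ alpha - alpha.sum()

res = minimize(neg_dual, np.zeros(len(y)),
               bounds=[(0.0, None)] * len(y),          # alpha_i >= 0
               constraints=[{"type": "eq",
                             "fun": lambda a: a @ y}],  # sum_i alpha_i y_i = 0
               method="SLSQP")
alpha = res.x

# Support vectors are the points with non-zero multipliers; only they
# enter the decision surface w = sum_i alpha_i y_i x_i.
sv = alpha > 1e-6
w = (alpha[sv] * y[sv]) @ X[sv]
b = np.mean(y[sv] - X[sv] @ w)
pred = np.sign(X @ w + b)
```

The paper's contribution is to avoid this dense QP: a set of linear programs first prunes the training set down to the guard vectors (a small superset of the support vectors), so the decision surface is built from far fewer points.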

Original language: English (US)
Pages (from-to): 430-437
Number of pages: 8
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
State: Published - 2000
Event: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2000 - Hilton Head Island, SC, USA
Duration: Jun 13 2000 - Jun 15 2000

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
