Abstract
We consider the problem of clustering data sets in the presence of arbitrary outliers. Traditional clustering algorithms such as k-means and spectral clustering are known to perform poorly on data sets contaminated with even a small number of outliers. In this paper, we develop a provably robust spectral clustering algorithm that applies a simple rounding scheme to denoise a Gaussian kernel matrix built from the data points and uses vanilla spectral clustering to recover the cluster labels of data points. We analyze the performance of our algorithm under the assumption that the “good” data points are generated from a mixture of sub-Gaussians (we term these “inliers”), whereas the outlier points can come from any arbitrary probability distribution. For this general class of models, we show that the misclassification error decays at an exponential rate in the signal-to-noise ratio, provided the number of outliers is a small fraction of the inlier points. Surprisingly, this derived error bound matches the best-known bound for semidefinite programs (SDPs) under the same setting without outliers. We conduct extensive experiments on a variety of simulated and real-world data sets to demonstrate that our algorithm is less sensitive to outliers compared with other state-of-the-art algorithms proposed in the literature.
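The abstract's pipeline — build a Gaussian kernel matrix, denoise it with a simple rounding scheme, then run vanilla spectral clustering — can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the specific rounding rule here (hard-thresholding kernel entries at a level `tau`), the bandwidth choice, and the farthest-point k-means initialization are all assumptions made for the sketch.

```python
import numpy as np

def gaussian_kernel(X, bandwidth):
    # Pairwise Gaussian kernel K_ij = exp(-||x_i - x_j||^2 / (2 * bandwidth^2)).
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def round_kernel(K, tau):
    # Hypothetical "rounding" (denoising) step: keep only strong similarities.
    # Outliers far from every cluster lose their spurious weak links.
    return (K >= tau).astype(float)

def spectral_clustering(A, k, n_iter=100):
    # Vanilla spectral clustering: embed points via the top-k eigenvectors
    # of the (denoised) affinity matrix, then cluster the rows with k-means.
    _, vecs = np.linalg.eigh(A)
    U = vecs[:, -k:]
    # Deterministic farthest-point initialization for Lloyd's algorithm.
    centers = [U[0]]
    for _ in range(1, k):
        d = np.min([((U - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(U[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((U[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = U[labels == j].mean(axis=0)
    return labels
```

In this sketch, thresholding zeroes out the weak cross-cluster and outlier similarities, so the denoised affinity matrix is close to block-diagonal and its leading eigenvectors recover the cluster indicators even when a small fraction of arbitrary outliers is present.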
Original language | English (US)
---|---
Pages (from-to) | 224-244
Number of pages | 21
Journal | Operations Research
Volume | 71
Issue number | 1
DOIs |
State | Published - Jan 1 2023
Externally published | Yes
Keywords
- asymptotic analysis
- kernel methods
- outlier detection
- semidefinite programming
- spectral clustering
- sub-Gaussian mixture models
ASJC Scopus subject areas
- Computer Science Applications
- Management Science and Operations Research