## Abstract

We consider the problem of approximate K-means clustering with outliers and side information provided by same-cluster queries and possibly noisy answers. Our solution shows that, under some mild assumptions on the smallest cluster size, one can obtain a (1 + ε)-approximation for the optimal potential with probability at least 1 − δ, where ε > 0 and δ ∈ (0, 1), using an expected number of O(K³/(εδ)) noiseless same-cluster queries and comparison-based clustering of complexity O(ndK + K³/(εδ)); here, n denotes the number of points and d the dimension of the space. Compared to a handful of other known approaches that perform importance sampling to account for small cluster sizes, the proposed query technique reduces the number of queries by a factor of roughly O(K⁶/ε³), at the cost of possibly missing very small clusters. We extend this setting to the case where some queries to the oracle produce erroneous information, and where certain points, termed outliers, do not belong to any cluster. Our proof techniques differ from previous methods used for K-means clustering analysis, as they rely on estimating the sizes of the clusters and the number of points needed for accurate centroid estimation, and on subsequent nontrivial generalizations of the double Dixie cup problem. We illustrate the performance of the proposed algorithm on both synthetic and real datasets, including MNIST and CIFAR-10.
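To make the same-cluster-query idea concrete, the following is a minimal toy sketch (not the paper's algorithm): uniformly sample a small set of points, group the samples using a same-cluster oracle, average each group to estimate a centroid, and assign all points to the nearest estimated centroid. The function name, the index-based `oracle(i, j)` interface, and the fixed random seed are illustrative assumptions.

```python
import numpy as np

def query_kmeans_sketch(points, oracle, k, num_samples):
    """Toy illustration of clustering with same-cluster queries.

    points:      (n, d) array of data points
    oracle:      oracle(i, j) -> True iff points i and j share a cluster
                 (hypothetical interface; in practice answers may be noisy)
    k:           number of clusters to keep
    num_samples: number of uniformly sampled points to query
    """
    rng = np.random.default_rng(0)  # fixed seed for reproducibility
    idx = rng.choice(len(points), size=num_samples, replace=False)

    # Group sampled indices via same-cluster queries against one
    # representative per group.
    groups = []
    for i in idx:
        for g in groups:
            if oracle(g[0], i):
                g.append(i)
                break
        else:
            groups.append([i])

    # Keep the k largest groups (a small cluster may be missed,
    # mirroring the trade-off noted in the abstract).
    groups = sorted(groups, key=len, reverse=True)[:k]
    centroids = np.array([points[g].mean(axis=0) for g in groups])

    # Assign every point to its nearest estimated centroid.
    dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return centroids, dists.argmin(axis=1)
```

With well-separated clusters and a noiseless oracle built from ground-truth labels, the sketch recovers the true partition; the actual paper additionally handles noisy answers and outliers, which this toy version omits.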

Original language | English (US)
---|---
Pages (from-to) | 6649-6658
Number of pages | 10
Journal | Advances in Neural Information Processing Systems
Volume | 2018-December
State | Published - 2018
Externally published | Yes
Event | 32nd Conference on Neural Information Processing Systems, NeurIPS 2018 - Montreal, Canada; Duration: Dec 2 2018 → Dec 8 2018

## ASJC Scopus subject areas

- Computer Networks and Communications
- Information Systems
- Signal Processing