TY - JOUR
T1 - Efficient task-specific data valuation for nearest neighbor algorithms
AU - Jia, Ruoxi
AU - Dao, David
AU - Wang, Boxin
AU - Hubis, Frances Ann
AU - Gurel, Nezihe Merve
AU - Li, Bo
AU - Zhang, Ce
AU - Spanos, Costas
AU - Song, Dawn
N1 - Publisher Copyright:
© 2019 Copyright is held by the owner/author(s).
PY - 2019
Y1 - 2019
N2 - Given a data set D containing millions of data points and a data consumer who is willing to pay $X to train a machine learning (ML) model over D, how should we distribute this $X to each data point to reflect its "value"? In this paper, we define the "relative value of data" via the Shapley value, as it uniquely possesses properties with appealing real-world interpretations, such as fairness, rationality and decentralizability. For general, bounded utility functions, the Shapley value is known to be challenging to compute: to get Shapley values for all N data points, it requires O(2^N) model evaluations for exact computation and O(N log N) for (ε, δ)-approximation. In this paper, we focus on one popular family of ML models relying on K-nearest neighbors (KNN). The most surprising result is that for unweighted KNN classifiers and regressors, the Shapley value of all N data points can be computed, exactly, in O(N log N) time, an exponential improvement in computational complexity! Moreover, for (ε, δ)-approximation, we are able to develop an algorithm based on Locality Sensitive Hashing (LSH) with only sublinear complexity O(N^{h(ε,K)} log N) when ε is not too small and K is not too large. We empirically evaluate our algorithms on up to 10 million data points, and even our exact algorithm is up to three orders of magnitude faster than the baseline approximation algorithm. The LSH-based approximation algorithm can accelerate the value calculation process even further. We then extend our algorithm to other scenarios such as (1) weighted KNN classifiers, (2) different data points are clustered by different data curators, and (3) there are data analysts providing computation who also require proper valuation. Some of these extensions, although also improved exponentially, are less practical for exact computation (e.g., O(N^K) complexity for weighted KNN). We thus propose a Monte Carlo approximation algorithm, which is O(N (log N)^2 / (log K)^2) times more efficient than the baseline approximation algorithm.
AB - Given a data set D containing millions of data points and a data consumer who is willing to pay $X to train a machine learning (ML) model over D, how should we distribute this $X to each data point to reflect its "value"? In this paper, we define the "relative value of data" via the Shapley value, as it uniquely possesses properties with appealing real-world interpretations, such as fairness, rationality and decentralizability. For general, bounded utility functions, the Shapley value is known to be challenging to compute: to get Shapley values for all N data points, it requires O(2^N) model evaluations for exact computation and O(N log N) for (ε, δ)-approximation. In this paper, we focus on one popular family of ML models relying on K-nearest neighbors (KNN). The most surprising result is that for unweighted KNN classifiers and regressors, the Shapley value of all N data points can be computed, exactly, in O(N log N) time, an exponential improvement in computational complexity! Moreover, for (ε, δ)-approximation, we are able to develop an algorithm based on Locality Sensitive Hashing (LSH) with only sublinear complexity O(N^{h(ε,K)} log N) when ε is not too small and K is not too large. We empirically evaluate our algorithms on up to 10 million data points, and even our exact algorithm is up to three orders of magnitude faster than the baseline approximation algorithm. The LSH-based approximation algorithm can accelerate the value calculation process even further. We then extend our algorithm to other scenarios such as (1) weighted KNN classifiers, (2) different data points are clustered by different data curators, and (3) there are data analysts providing computation who also require proper valuation. Some of these extensions, although also improved exponentially, are less practical for exact computation (e.g., O(N^K) complexity for weighted KNN). We thus propose a Monte Carlo approximation algorithm, which is O(N (log N)^2 / (log K)^2) times more efficient than the baseline approximation algorithm.
UR - http://www.scopus.com/inward/record.url?scp=85081334606&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081334606&partnerID=8YFLogxK
U2 - 10.14778/3342263.3342637
DO - 10.14778/3342263.3342637
M3 - Conference article
AN - SCOPUS:85081334606
SN - 2150-8097
VL - 12
SP - 1610
EP - 1623
JO - Proceedings of the VLDB Endowment
JF - Proceedings of the VLDB Endowment
IS - 11
T2 - 45th International Conference on Very Large Data Bases, VLDB 2019
Y2 - 26 August 2019 through 30 August 2019
ER -