TY - CONF
T1 - Learning to rank with Bregman divergences and monotone retargeting
AU - Acharyya, Sreangsu
AU - Koyejo, Oluwasanmi
AU - Ghosh, Joydeep
PY - 2012
Y1 - 2012
AB - This paper introduces a novel approach for learning to rank (LETOR) based on the notion of monotone retargeting. It involves minimizing a divergence between all monotonically increasing transformations of the training scores and a parameterized prediction function; the minimization is over both the transformations and the parameters. It is applied to Bregman divergences, a large class of distance-like functions that were recently shown to be the unique class that is statistically consistent with the normalized discounted cumulative gain (NDCG) criterion [19]. The algorithm uses alternating-projection-style updates, in which one set of simultaneous projections can be computed independently of the Bregman divergence and the other reduces to parameter estimation of a generalized linear model. This results in an easily implemented, efficiently parallelizable algorithm for the LETOR task that enjoys global optimum guarantees under mild conditions. We present empirical results on benchmark datasets showing that this approach can outperform state-of-the-art NDCG-consistent techniques.
UR - http://www.scopus.com/inward/record.url?scp=84886039643&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84886039643&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84886039643
SN - 9780974903989
T3 - Uncertainty in Artificial Intelligence - Proceedings of the 28th Conference, UAI 2012
SP - 16
EP - 25
BT - Uncertainty in Artificial Intelligence - Proceedings of the 28th Conference, UAI 2012
T2 - 28th Conference on Uncertainty in Artificial Intelligence, UAI 2012
Y2 - 15 August 2012 through 17 August 2012
ER -