TY - JOUR
T1 - Learning-to-Rank with Partitioned Preference
T2 - 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021
AU - Ma, Jiaqi
AU - Yi, Xinyang
AU - Tang, Weijing
AU - Zhao, Zhe
AU - Hong, Lichan
AU - Chi, Ed H.
AU - Mei, Qiaozhu
N1 - Funding Information:
The authors would like to thank Ao Liu, Tyler Lu, Lirong Xia, and Zhibing Zhao for helpful discussions. Jiaqi Ma and Qiaozhu Mei were in part supported by the National Science Foundation under grant numbers 1633370 and 1620319.
Publisher Copyright:
Copyright © 2021 by the author(s)
PY - 2021
Y1 - 2021
N2 - We investigate the Plackett-Luce (PL) model based listwise learning-to-rank (LTR) on data with partitioned preference, where a set of items are sliced into ordered and disjoint partitions, but the ranking of items within a partition is unknown. Given N items with M partitions, calculating the likelihood of data with partitioned preference under the PL model has a time complexity of O(N+S!), where S is the maximum size of the top M−1 partitions. This computational challenge restrains most existing PL-based listwise LTR methods to a special case of partitioned preference, top-K ranking, where the exact order of the top K items is known. In this paper, we exploit a random utility model formulation of the PL model, and propose an efficient numerical integration approach for calculating the likelihood and its gradients with a time complexity O(N+S^3). We demonstrate that the proposed method outperforms well-known LTR baselines and remains scalable through both simulation experiments and applications to real-world eXtreme Multi-Label classification tasks.
UR - http://www.scopus.com/inward/record.url?scp=85141080813&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85141080813&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85141080813
SN - 2640-3498
VL - 130
SP - 928
EP - 936
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 13 April 2021 through 15 April 2021
ER -