Learning-to-Rank with Partitioned Preference: Fast Estimation for the Plackett-Luce Model

Jiaqi Ma, Xinyang Yi, Weijing Tang, Zhe Zhao, Lichan Hong, Ed H. Chi, Qiaozhu Mei

Research output: Contribution to journal › Conference article › peer-review

Abstract

We investigate the Plackett-Luce (PL) model based listwise learning-to-rank (LTR) on data with partitioned preference, where a set of items is sliced into ordered and disjoint partitions, but the ranking of items within a partition is unknown. Given N items with M partitions, calculating the likelihood of data with partitioned preference under the PL model has a time complexity of O(N+S!), where S is the maximum size of the top M−1 partitions. This computational challenge restricts most existing PL-based listwise LTR methods to a special case of partitioned preference, top-K ranking, where the exact order of the top K items is known. In this paper, we exploit a random utility model formulation of the PL model, and propose an efficient numerical integration approach for calculating the likelihood and its gradients with a time complexity of O(N+S³). We demonstrate that the proposed method outperforms well-known LTR baselines and remains scalable through both simulation experiments and applications to real-world eXtreme Multi-Label classification tasks.
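To make the likelihood in question concrete, here is a minimal brute-force sketch of the quantity the abstract describes: the PL probability of a partitioned preference, computed by enumerating all within-partition orderings of the top M−1 blocks. The function name and representation (a list of index blocks, the last block being the unranked remainder) are illustrative assumptions, and this naive enumeration has exactly the O(N+S!) cost that the paper's O(N+S³) numerical-integration method is designed to avoid.

```python
import itertools
import math

def partitioned_preference_likelihood(scores, partitions):
    """Brute-force PL likelihood of a partitioned preference (illustrative).

    scores:     utility score s_i for each of the N items.
    partitions: ordered, disjoint blocks of item indices; every item in
                block m is preferred to every item in block m+1, and the
                order *within* a block is unknown. The last block is the
                unranked remainder, so its internal order never matters.
    """
    exp_s = [math.exp(s) for s in scores]
    total = sum(exp_s)
    likelihood = 0.0
    # A full ranking consistent with the partitioned preference concatenates
    # one permutation per top block; sum PL probabilities over all of them.
    for blocks in itertools.product(
        *(itertools.permutations(p) for p in partitions[:-1])
    ):
        seq = [i for block in blocks for i in block]
        denom, prob = total, 1.0
        for i in seq:  # sequential PL choice probabilities
            prob *= exp_s[i] / denom
            denom -= exp_s[i]
        likelihood += prob
    return likelihood
```

For example, with three equally scored items and partitions [[0, 1], [2]], the likelihood is the probability that item 2 ranks last, which is 1/3 under uniform scores.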

Original language: English (US)
Pages (from-to): 928-936
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 130
State: Published - 2021
Externally published: Yes
Event: 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021 - Virtual, Online, United States
Duration: Apr 13, 2021 - Apr 15, 2021

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
