Collaboratively learning preferences from ordinal data

Sewoong Oh, Kiran K. Thekumparampil, Jiaming Xu

Research output: Contribution to journal › Conference article

Abstract

In personalized recommendation systems, it is important to predict a user's preferences on items that the user has not yet seen. Similarly, in revenue management, it is important to predict the outcomes of comparisons among items that have never been compared. The MultiNomial Logit model, a popular discrete choice model, captures the structure of the hidden preferences with a low-rank matrix. In order to predict the preferences, we want to learn the underlying model from noisy observations of the low-rank matrix, collected as revealed preferences in various forms of ordinal data. A natural approach to learning such a model is to solve a convex relaxation based on nuclear norm minimization. We present the convex relaxation approach in two contexts of interest: collaborative ranking and bundled choice modeling. In both cases, we show that the convex relaxation is minimax optimal: we prove an upper bound on the resulting error with finite samples, and provide a matching information-theoretic lower bound.
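The pipeline the abstract describes can be illustrated with a toy sketch: simulate pairwise comparisons from an MNL model whose user-item score matrix is low-rank, then estimate that matrix by nuclear-norm-regularized logistic maximum likelihood, solved with proximal gradient descent (the prox of the nuclear norm is singular-value thresholding). This is an illustrative sketch under invented dimensions and hyperparameters, not the paper's exact estimator or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth MNL parameters: a rank-r matrix Theta of user-item scores.
n_users, n_items, rank = 30, 20, 2
Theta = rng.normal(size=(n_users, rank)) @ rng.normal(size=(rank, n_items))
Theta -= Theta.mean(axis=1, keepdims=True)  # MNL scores are shift-invariant per user

# Under MNL, user u prefers item i over item j with probability
#   P(i beats j | u) = exp(Theta[u,i]) / (exp(Theta[u,i]) + exp(Theta[u,j])).
n_samples = 10000
u_idx = rng.integers(n_users, size=n_samples)
i_idx = rng.integers(n_items, size=n_samples)
j_idx = (i_idx + 1 + rng.integers(n_items - 1, size=n_samples)) % n_items  # j != i
p_win = 1.0 / (1.0 + np.exp(Theta[u_idx, j_idx] - Theta[u_idx, i_idx]))
flip = rng.random(n_samples) >= p_win        # True where j actually won
w_idx = np.where(flip, j_idx, i_idx)         # winner of each comparison
l_idx = np.where(flip, i_idx, j_idx)         # loser of each comparison

def svt(M, tau):
    """Singular-value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def fit(shape, lam=1e-3, step=100.0, iters=200):
    """Proximal gradient on (1/N) sum_k log(1 + exp(-(X[u,w] - X[u,l]))) + lam * ||X||_*."""
    X = np.zeros(shape)
    for _ in range(iters):
        margin = X[u_idx, w_idx] - X[u_idx, l_idx]
        g = -1.0 / (1.0 + np.exp(margin))    # derivative of logistic loss w.r.t. margin
        G = np.zeros(shape)
        np.add.at(G, (u_idx, w_idx), g)      # accumulate gradient over samples
        np.add.at(G, (u_idx, l_idx), -g)
        X = svt(X - step * G / n_samples, step * lam)
    return X

X_hat = fit(Theta.shape)
print("correlation with ground truth:", np.corrcoef(X_hat.ravel(), Theta.ravel())[0, 1])
```

The soft-thresholding step is what encourages a low-rank solution: singular values below `step * lam` are zeroed at each iteration, which is exactly how the nuclear-norm penalty acts as a convex surrogate for a rank constraint.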

Original language: English (US)
Pages (from-to): 1909-1917
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 2015-January
State: Published - Jan 1 2015
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: Dec 7 2015 - Dec 12 2015

Fingerprint

Recommender systems

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Oh, S., Thekumparampil, K. K., & Xu, J. (2015). Collaboratively learning preferences from ordinal data. Advances in Neural Information Processing Systems, 2015-January, 1909-1917.
