How do people sort by ratings?

Jerry O. Talton, Krishna Dusad, Konstantinos Koiliaris, Ranjitha Kumar

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

Abstract

Sorting items by user rating is a fundamental interaction pattern of the modern Web, used to rank products (Amazon), posts (Reddit), businesses (Yelp), movies (YouTube), and more. To implement this pattern, designers must take in a distribution of ratings for each item and define a sensible total ordering over them. This is a challenging problem, since each distribution is drawn from a distinct sample population, rendering the most straightforward method of sorting — comparing averages — unreliable when the samples are small or of different sizes. Several statistical orderings for binary ratings have been proposed in the literature (e.g., based on the Wilson score, or Laplace smoothing), each attempting to account for the uncertainty introduced by sampling. In this paper, we study this uncertainty through the lens of human perception, and ask “How do people sort by ratings?” In an online study, we collected 48,000 item-ranking pairs from 4,000 crowd workers along with 4,800 rationales, and analyzed the results to understand how users make decisions when comparing rated items. Our results shed light on the cognitive models users employ to choose between rating distributions, which sorts of comparisons are most contentious, and how the presentation of rating information affects users’ preferences.
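For readers unfamiliar with the orderings the abstract mentions, the sketch below contrasts three ways of scoring an item with pos positive ratings out of n total: the raw average, Laplace ("add-one") smoothing, and the lower bound of the Wilson score interval. The item counts are hypothetical and the snippet only illustrates the general techniques; it is not code from the paper.

import math

def average(pos, n):
    # Naive mean: a 1-of-1 item outranks a 95-of-100 item.
    return pos / n

def laplace(pos, n):
    # Add-one smoothing: pulls small samples toward 0.5.
    return (pos + 1) / (n + 2)

def wilson_lower_bound(pos, n, z=1.96):
    # Lower bound of the Wilson score interval (~95% confidence):
    # a pessimistic estimate of the true positive fraction.
    if n == 0:
        return 0.0
    phat = pos / n
    denom = 1 + z * z / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - margin) / denom

# Hypothetical items: (positive ratings, total ratings).
items = {"A": (1, 1), "B": (95, 100), "C": (60, 80)}
for name, score in [("average", average), ("laplace", laplace),
                    ("wilson", wilson_lower_bound)]:
    ranking = sorted(items, key=lambda k: score(*items[k]), reverse=True)
    print(f"{name:8s} {ranking}")

On these toy counts, the raw average ranks the single-rating item A first, while both smoothed orderings demote it; this divergence between orderings is exactly the kind of ambiguity the study asks people to resolve.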

Original language: English (US)
Title of host publication: CHI 2019 - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450359702
DOI: 10.1145/3290605.3300535
State: Published - May 2, 2019
Event: 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019 - Glasgow, United Kingdom
Duration: May 4, 2019 - May 9, 2019

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference

Conference: 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019
Country: United Kingdom
City: Glasgow
Period: 5/4/19 - 5/9/19


ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Graphics and Computer-Aided Design
  • Software

Cite this

Talton, J. O., Dusad, K., Koiliaris, K., & Kumar, R. (2019). How do people sort by ratings? In CHI 2019 - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Conference on Human Factors in Computing Systems - Proceedings). Association for Computing Machinery. https://doi.org/10.1145/3290605.3300535
