Preferences for Gender Stereotypicality in Artificial Intelligence: Existence, Comparison to Human Biases, and Implications for Choice

Julia Spielmann, Chadly Stern

Research output: Contribution to journal › Article › peer-review

Abstract

Do people prefer that artificial intelligence (AI) aligns with gender stereotypes when requesting help to answer a question? We found that people preferred gender stereotypicality (over counterstereotypicality and androgyny) in voice-based AI when seeking help (e.g., preferring feminine voices to answer questions in feminine domains; Studies 1a–1b). Preferences for stereotypicality were stronger when using binary zero-sum (vs. continuous non-zero-sum) assessments (Study 2). Contrary to expectations, biases were larger when judging human (vs. AI) targets (Study 3). Finally, people were more likely to request (vs. decline) assistance from gender stereotypical (vs. counterstereotypical) human targets, but this choice bias did not extend to AI targets (Study 4). Across studies, we observed stronger preferences for gender stereotypicality in feminine (vs. masculine) domains, potentially due to examining biases in a stereotypically feminine context (helping). These studies offer nuanced insights into conditions under which people use gender stereotypes to evaluate human and non-human entities.

Original language: English (US)
Journal: Personality and Social Psychology Bulletin
State: Accepted/In press - 2024

Keywords

  • artificial intelligence
  • gender
  • help
  • stereotyping

ASJC Scopus subject areas

  • Social Psychology
