TY - JOUR
T1 - Preferences for Gender Stereotypicality in Artificial Intelligence
T2 - Existence, Comparison to Human Biases, and Implications for Choice
AU - Spielmann, Julia
AU - Stern, Chadly
N1 - Publisher Copyright:
© 2024 by the Society for Personality and Social Psychology, Inc.
PY - 2024
Y1 - 2024
AB - Do people prefer that artificial intelligence (AI) aligns with gender stereotypes when requesting help to answer a question? We found that people preferred gender stereotypicality (over counterstereotypicality and androgyny) in voice-based AI when seeking help (e.g., preferring feminine voices to answer questions in feminine domains; Studies 1a–1b). Preferences for stereotypicality were stronger when using binary zero-sum (vs. continuous non-zero-sum) assessments (Study 2). Contrary to expectations, biases were larger when judging human (vs. AI) targets (Study 3). Finally, people were more likely to request (vs. decline) assistance from gender stereotypical (vs. counterstereotypical) human targets, but this choice bias did not extend to AI targets (Study 4). Across studies, we observed stronger preferences for gender stereotypicality in feminine (vs. masculine) domains, potentially due to examining biases in a stereotypically feminine context (helping). These studies offer nuanced insights into conditions under which people use gender stereotypes to evaluate human and non-human entities.
KW - artificial intelligence
KW - gender
KW - help
KW - stereotyping
UR - http://www.scopus.com/inward/record.url?scp=85213700402&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85213700402&partnerID=8YFLogxK
U2 - 10.1177/01461672241307276
DO - 10.1177/01461672241307276
M3 - Article
AN - SCOPUS:85213700402
SN - 0146-1672
JO - Personality and Social Psychology Bulletin
JF - Personality and Social Psychology Bulletin
ER -