TY - GEN
T1 - It's Trying Too Hard to Look Real: Deepfake Moderation Mistakes and Identity-Based Bias
T2 - 2024 CHI Conference on Human Factors in Computing Systems, CHI 2024
AU - Mink, Jaron
AU - Wei, Miranda
AU - Munyendo, Collins W.
AU - Hugenberg, Kurt
AU - Kohno, Tadayoshi
AU - Redmiles, Elissa M.
AU - Wang, Gang
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s)
PY - 2024/5/11
Y1 - 2024/5/11
AB - Online platforms employ manual human moderation to distinguish human-created social media profiles from deepfake-generated ones. Biased misclassification of real profiles as artificial can harm general users as well as specific identity groups; however, no work has yet systematically investigated such mistakes and biases. We conducted a user study (n=695) that investigates how 1) the identity of the profile, 2) whether the moderator shares that identity, and 3) components of a profile shown affect the perceived artificiality of the profile. We find statistically significant biases in people's moderation of LinkedIn profiles based on all three factors. Further, upon examining how moderators make decisions, we find they rely on mental models of AI and attackers, as well as typicality expectations (how they think the world works). The latter includes reliance on race/gender stereotypes. Based on our findings, we synthesize recommendations for the design of moderation interfaces, moderation teams, and security training.
KW - Bias
KW - Content Moderation
KW - Deepfakes
KW - Mental Models of AI
UR - http://www.scopus.com/inward/record.url?scp=85194845347&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85194845347&partnerID=8YFLogxK
U2 - 10.1145/3613904.3641999
DO - 10.1145/3613904.3641999
M3 - Conference contribution
AN - SCOPUS:85194845347
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2024 - Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
Y2 - 11 May 2024 through 16 May 2024
ER -