Abstract
To combat misinformation, social media platforms are experimenting with crowdsourced fact-checking---systems that rely on social media users' annotations of potentially misleading content. This paper focuses on Twitter's Community Notes to empirically investigate the efficacy of such systems in curbing misinformation. Utilizing two identification strategies---regression discontinuity design and instrumental variable analysis---we confirm that publicly displaying community notes is effective in prompting authors' voluntary tweet retraction, demonstrating the viability of crowdsourced fact-checking as an alternative to professional fact-checking and forcible content removal. Our findings also reveal that the effect is primarily driven by the author's consideration of the misinformation's influence on users who have actively engaged with it (i.e., "observed influence") rather than on those who might be exposed to it (i.e., "presumed influence"). We also uncover varying effects of publicly displayed community notes depending on specific tweet- and user-level characteristics, underscoring the importance of contextual factors in harnessing the full potential of crowdsourced fact-checking. Furthermore, results from discrete-time survival analyses show that publicly displaying community notes not only increases the probability of tweet retraction but also accelerates the retraction process among retracted tweets, thereby improving platforms' responsiveness to emerging misinformation. This study offers important insights to both social media platforms and policymakers on the promise of crowdsourced fact-checking and calls for the broad participation of social media users to collectively tackle the problem of misinformation.
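Discrete-time survival analyses of the kind mentioned above are commonly estimated as a logistic hazard model on tweet-period data. The sketch below illustrates that general setup on simulated data only; the variable names (`tweet_id`, `period`, `note_displayed`, `retracted`), the simulated hazard rates, and the model specification are illustrative assumptions and do not reproduce the paper's data, covariates, or code.

```python
# Minimal sketch of a discrete-time survival (logistic hazard) model on
# simulated tweet-period data. All names and numbers are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate tweet-period rows: each tweet is followed for up to 14 periods;
# a publicly displayed community note raises the per-period retraction hazard.
rows = []
for tweet_id in range(500):
    note_displayed = int(rng.integers(0, 2))      # 1 = note publicly shown
    for period in range(1, 15):
        hazard = 0.02 + 0.04 * note_displayed     # assumed baseline vs. treated hazard
        retracted = int(rng.random() < hazard)
        rows.append((tweet_id, period, note_displayed, retracted))
        if retracted:
            break                                  # tweet exits the risk set once retracted

panel = pd.DataFrame(rows, columns=["tweet_id", "period", "note_displayed", "retracted"])

# Logistic hazard: probability of retraction in period t, conditional on
# surviving to t, as a function of note display and elapsed time.
model = smf.logit("retracted ~ note_displayed + period", data=panel).fit()
print(model.summary())
```

A positive coefficient on `note_displayed` in this setup would correspond to both a higher overall probability of retraction and a shorter time to retraction, which is the pattern the abstract describes.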
| Original language | English (US) |
| --- | --- |
| Number of pages | 40 |
| State | Published - Oct 19 2024 |
Keywords
- Misinformation
- Fact-checking
- Content Moderation
- Crowdsourcing
- Community Notes