Abstract
This article provides an overview of the state of the art, research challenges, and future directions for our vision: integrating social explanation into explainable artificial intelligence (XAI) to combat misinformation. In our context, 'social explanation' is an explanatory approach that reveals the social dimension of misinformation by analyzing sociocontextual cues, such as user attributes, user engagement metrics, diffusion patterns, and user comments. Our vision is motivated by a gap in existing XAI research, which tends to overlook the broader social context in which misinformation spreads. In this article, we first define social explanation and illustrate it through examples, enabling technologies, and real-world applications. We then outline the unique benefits that social explanation brings to the fight against misinformation and discuss the challenges that make realizing our vision complex. The significance of this article lies in introducing the concept of 'social explanation' in XAI, which has been underexplored in prior literature. We also demonstrate how social explanations can be effectively employed to tackle misinformation and how they promote collaboration across diverse fields by drawing on interdisciplinary techniques spanning computer science, social computing, human-computer interaction, and psychology. We hope that this article will advance progress in the field of XAI and contribute to ongoing efforts to counter misinformation.
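The article itself describes a vision rather than an implementation, but as a rough illustration of what the sociocontextual cues named above (user attributes, engagement metrics, diffusion patterns, user comments) might look like as inputs to a social explanation, consider the minimal Python sketch below. Every name, field, and threshold in it is hypothetical and chosen for illustration only; it is not the authors' method.

```python
from dataclasses import dataclass, field

@dataclass
class SocioContextualCues:
    """Hypothetical record bundling the cue types named in the abstract."""
    account_age_days: int  # user attribute
    follower_count: int    # user attribute
    shares: int            # engagement metric
    cascade_depth: int     # diffusion pattern (depth of the reshare tree)
    comments: list[str] = field(default_factory=list)  # user comments

def social_explanation(cues: SocioContextualCues) -> list[str]:
    """Return human-readable explanation snippets from simple, illustrative
    rules. A real system would learn such signals rather than hard-code them."""
    notes = []
    if cues.account_age_days < 30 and cues.shares > 1000:
        notes.append("High share volume originating from a very new account.")
    if cues.cascade_depth > 10:
        notes.append("Unusually deep resharing cascade.")
    skeptical = sum(
        "fake" in c.lower() or "source?" in c.lower() for c in cues.comments
    )
    if skeptical:
        notes.append(f"{skeptical} comment(s) question the post's veracity.")
    return notes

if __name__ == "__main__":
    post = SocioContextualCues(
        account_age_days=12, follower_count=85,
        shares=4200, cascade_depth=14,
        comments=["Is there a source?", "This looks fake."],
    )
    for line in social_explanation(post):
        print("-", line)
```

The point of the sketch is only that a social explanation is expressed in terms of the surrounding social context (who posted, how it spread, how others reacted) rather than in terms of the content's internal features.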
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 6705-6726 |
| Number of pages | 22 |
| Journal | IEEE Transactions on Computational Social Systems |
| Volume | 11 |
| Issue number | 5 |
| DOIs | |
| State | Published - 2024 |
Keywords
- Explainable artificial intelligence (XAI)
- misinformation
- social explanation
- sociocontextual cue
ASJC Scopus subject areas
- Modeling and Simulation
- Social Sciences (miscellaneous)
- Human-Computer Interaction