TY - JOUR
T1 - Integrating Social Explanations Into Explainable Artificial Intelligence (XAI) for Combating Misinformation
T2 - Vision and Challenges
AU - Gong, Yeaeun
AU - Shang, Lanyu
AU - Wang, Dong
N1 - This work was supported by the National Science Foundation under Grant IIS-2202481, Grant CHE-2105032, Grant IIS-2130263, Grant CNS-2131622, and Grant CNS-2140999.
PY - 2024
Y1 - 2024
AB - This article overviews the state of the art, research challenges, and future directions in our vision: integrating social explanation into explainable artificial intelligence (XAI) to combat misinformation. In our context, 'social explanation' is an explanatory approach that reveals the social aspect of misinformation by analyzing sociocontextual cues, such as user attributes, user engagement metrics, diffusion patterns, and user comments. Our vision is motivated by a research gap in existing XAI, which tends to overlook the broader social context in which misinformation spreads. In this article, we first define social explanation and demonstrate it through examples, enabling technologies, and real-world applications. We then outline the unique benefits social explanation brings to the fight against misinformation and discuss the challenges that make our vision complex. The significance of this article lies in introducing the concept of 'social explanation' in XAI, which has been underexplored in the previous literature. We also demonstrate how social explanations can be effectively employed to tackle misinformation and promote collaboration across diverse fields by drawing upon interdisciplinary techniques spanning computer science, social computing, human-computer interaction, and psychology. We hope that this article will advance progress in the field of XAI and contribute to ongoing efforts to counter misinformation.
KW - Explainable artificial intelligence (XAI)
KW - misinformation
KW - social explanation
KW - sociocontextual cue
UR - http://www.scopus.com/inward/record.url?scp=85196535006&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85196535006&partnerID=8YFLogxK
U2 - 10.1109/TCSS.2024.3404236
DO - 10.1109/TCSS.2024.3404236
M3 - Article
AN - SCOPUS:85196535006
SN - 2329-924X
VL - 11
SP - 6705
EP - 6726
JO - IEEE Transactions on Computational Social Systems
JF - IEEE Transactions on Computational Social Systems
IS - 5
ER -