TY - GEN
T1 - Large Language Model-Guided Disentangled Belief Representation Learning on Polarized Social Graphs
AU - Li, Jinning
AU - Han, Ruipeng
AU - Sun, Chenkai
AU - Sun, Dachun
AU - Wang, Ruijie
AU - Zeng, Jingying
AU - Yan, Yuchen
AU - Tong, Hanghang
AU - Abdelzaher, Tarek
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The paper advances belief representation learning in polarized networks - the mapping of social beliefs espoused by users and posts in a polarized network into a disentangled latent space that separates (the members and beliefs of) each side. Our prior work embeds social interaction data, using non-negative variational graph auto-encoders, into a disentangled latent space. However, the interaction graphs alone may not adequately reflect similarity and/or disparity in beliefs, especially for graphs with sparsity and outlier issues. In this paper, we investigate the impact of limited guidance from Large Language Models (LLMs) on the accuracy of belief separation. Specifically, we integrate social graphs with LLM-based soft labels in a novel weakly-supervised interpretable graph representation learning framework. This framework combines the strengths of graph- and text-based information and is shown to maintain the interpretability of learned representations, where different axes in the latent space denote association with different sides of the divide. An evaluation on six real-world Twitter datasets illustrates the effectiveness of the proposed model at solving stance detection problems, demonstrating 5.9%-6.5% improvements in accuracy, F1 score, and purity metrics, without introducing significant computational overhead. An ablation study is also included to examine the impact of different components of the proposed architecture.
AB - The paper advances belief representation learning in polarized networks - the mapping of social beliefs espoused by users and posts in a polarized network into a disentangled latent space that separates (the members and beliefs of) each side. Our prior work embeds social interaction data, using non-negative variational graph auto-encoders, into a disentangled latent space. However, the interaction graphs alone may not adequately reflect similarity and/or disparity in beliefs, especially for graphs with sparsity and outlier issues. In this paper, we investigate the impact of limited guidance from Large Language Models (LLMs) on the accuracy of belief separation. Specifically, we integrate social graphs with LLM-based soft labels in a novel weakly-supervised interpretable graph representation learning framework. This framework combines the strengths of graph- and text-based information and is shown to maintain the interpretability of learned representations, where different axes in the latent space denote association with different sides of the divide. An evaluation on six real-world Twitter datasets illustrates the effectiveness of the proposed model at solving stance detection problems, demonstrating 5.9%-6.5% improvements in accuracy, F1 score, and purity metrics, without introducing significant computational overhead. An ablation study is also included to examine the impact of different components of the proposed architecture.
KW - Graph Auto-Encoders
KW - Interpretability
KW - Large Language Models
KW - Social Networks
KW - Weak Supervision
UR - http://www.scopus.com/inward/record.url?scp=85203239930&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85203239930&partnerID=8YFLogxK
U2 - 10.1109/ICCCN61486.2024.10637650
DO - 10.1109/ICCCN61486.2024.10637650
M3 - Conference contribution
AN - SCOPUS:85203239930
T3 - Proceedings - International Conference on Computer Communications and Networks, ICCCN
BT - ICCCN 2024 - 2024 33rd International Conference on Computer Communications and Networks
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 33rd International Conference on Computer Communications and Networks, ICCCN 2024
Y2 - 29 July 2024 through 31 July 2024
ER -