TY - JOUR
T1 - ArieL: Adversarial Graph Contrastive Learning
T2 - ACM Transactions on Knowledge Discovery from Data
AU - Feng, Shengyu
AU - Jing, Baoyu
AU - Zhu, Yada
AU - Tong, Hanghang
N1 - This work is supported by the National Science Foundation (grant nos. 1947135, 2134079, 2316233, and 2324770), the National Science Foundation Program on Fairness in AI in collaboration with Amazon (grant no. 1939725), DARPA (grant no. HR001121C0165), NIFA (grant no. 2020-67021-32799), DHS (grant no. 17STQAC00001-07-00), ARO (grant no. W911NF2110088), the C3.ai Digital Transformation Institute, MIT-IBM Watson AI Lab, and IBM-Illinois Discovery Accelerator Institute. The content of the information in this document does not necessarily reflect the position or the policy of the Government or Amazon, and no official endorsement should be inferred. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
PY - 2024/2/12
Y1 - 2024/2/12
N2 - Contrastive learning is an effective unsupervised method in graph representation learning, and its key component lies in the construction of positive and negative samples. Previous methods usually utilize the proximity of nodes in the graph as the principle. Recently, data-augmentation-based contrastive learning has shown great power in the visual domain, and some works have extended this method from images to graphs. However, unlike data augmentation on images, data augmentation on graphs is far less intuitive, and it is much harder to provide high-quality contrastive samples, which leaves much room for improvement. In this work, by introducing an adversarial graph view for data augmentation, we propose a simple but effective method, Adversarial Graph Contrastive Learning (ArieL), to extract informative contrastive samples within reasonable constraints. We develop a new technique called information regularization for stable training and use subgraph sampling for scalability. We generalize our method from node-level contrastive learning to the graph level by treating each graph instance as a super-node. ArieL consistently outperforms the current graph contrastive learning methods on both node-level and graph-level classification tasks on real-world datasets. We further demonstrate that ArieL is more robust in the face of adversarial attacks.
AB - Contrastive learning is an effective unsupervised method in graph representation learning, and its key component lies in the construction of positive and negative samples. Previous methods usually utilize the proximity of nodes in the graph as the principle. Recently, data-augmentation-based contrastive learning has shown great power in the visual domain, and some works have extended this method from images to graphs. However, unlike data augmentation on images, data augmentation on graphs is far less intuitive, and it is much harder to provide high-quality contrastive samples, which leaves much room for improvement. In this work, by introducing an adversarial graph view for data augmentation, we propose a simple but effective method, Adversarial Graph Contrastive Learning (ArieL), to extract informative contrastive samples within reasonable constraints. We develop a new technique called information regularization for stable training and use subgraph sampling for scalability. We generalize our method from node-level contrastive learning to the graph level by treating each graph instance as a super-node. ArieL consistently outperforms the current graph contrastive learning methods on both node-level and graph-level classification tasks on real-world datasets. We further demonstrate that ArieL is more robust in the face of adversarial attacks.
KW - Graph representation learning
KW - adversarial training
KW - contrastive learning
KW - mutual information
UR - http://www.scopus.com/inward/record.url?scp=85185728149&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85185728149&partnerID=8YFLogxK
U2 - 10.1145/3638054
DO - 10.1145/3638054
M3 - Article
AN - SCOPUS:85185728149
SN - 1556-4681
VL - 18
JO - ACM Transactions on Knowledge Discovery from Data
JF - ACM Transactions on Knowledge Discovery from Data
IS - 4
M1 - 82
ER -