EDoG: Adversarial Edge Detection for Graph Neural Networks

Xiaojun Xu, Hanzhang Wang, Alok Lal, Carl A. Gunter, Bo Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Graph Neural Networks (GNNs) have been widely applied to tasks in bioinformatics, drug design, and social networks. However, recent studies have shown that GNNs are vulnerable to adversarial attacks that mislead node (or subgraph) classification predictions by adding subtle perturbations. In particular, several attacks against GNNs work by adding or deleting a small number of edges, which has raised serious security concerns. Detecting these attacks is challenging due to the small magnitude of the perturbation and the discrete nature of graph data. In this paper, we propose EDoG, a general adversarial edge detection pipeline based on graph generation that requires no knowledge of the attack strategy. Specifically, we combine a novel graph generation approach with link prediction to detect suspicious adversarial edges. To train the graph generative model effectively, we sample several sub-graphs from the given graph. We show that, since the number of adversarial edges is usually low in practice, the sampled sub-graphs contain adversarial edges only with low probability by the union bound. In addition, to handle strong attacks that perturb a large number of edges, we propose a set of novel features to perform outlier detection as a preprocessing step for our detector. Extensive experiments on three real-world graph datasets, including a private transaction rule dataset from a major company, and on two types of synthetic graphs with controlled properties (e.g., Erdos-Renyi and scale-free graphs) show that EDoG achieves above 0.8 AUC against four state-of-the-art unseen attack strategies without requiring any knowledge about the attack type (e.g., the degree of the target victim node), and around 0.85 with knowledge of the attack type. EDoG significantly outperforms traditional malicious edge detection baselines.
We also show that even an adaptive attack with full knowledge of our detection pipeline has difficulty bypassing it. Our results shed light on several principles for improving the robustness of GNNs.
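The core detection idea in the abstract — ask a link predictor whether each existing edge is plausible, and flag implausible edges as candidate adversarial insertions — can be sketched in a few lines. This is an illustrative toy, not the paper's method: neighborhood Jaccard similarity stands in for EDoG's learned graph-generation/link-prediction model, and the `flag_suspicious` helper and its threshold are assumptions made for the example.

```python
from itertools import combinations

def jaccard(adj, u, v):
    """Jaccard similarity of the neighborhoods of u and v."""
    nu, nv = adj.get(u, set()), adj.get(v, set())
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def flag_suspicious(edges, threshold=0.1):
    """Flag edges that a simple link predictor finds implausible.

    For each edge (u, v) we temporarily treat it as absent and ask
    whether the predictor (here: neighborhood Jaccard, a stand-in for
    EDoG's learned generative model) would re-predict it. Low-scoring
    edges are candidate adversarial insertions.
    """
    suspicious = []
    for u, v in edges:
        adj = {}
        for a, b in edges:
            if (a, b) == (u, v):
                continue  # score the edge as if it were missing
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
        if jaccard(adj, u, v) < threshold:
            suspicious.append((u, v))
    return suspicious

# Toy graph: a 5-clique, whose edges are well supported by shared
# neighbors, plus one planted edge to an otherwise isolated node.
clique = list(combinations(range(5), 2))
planted = (0, 9)
print(flag_suspicious(clique + [planted]))  # -> [(0, 9)]
```

In the toy graph, every clique edge keeps a Jaccard score of at least 0.75 after removal, while the planted edge scores 0 (no shared neighbors) and is flagged. The paper's pipeline replaces this heuristic with a generative model trained on sampled sub-graphs, plus outlier-detection features for stronger attacks.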

Original language: English (US)
Title of host publication: Proceedings - 2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 291-305
Number of pages: 15
ISBN (Electronic): 9781665462990
DOIs
State: Published - 2023
Event: 2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023 - Raleigh, United States
Duration: Feb 8 2023 - Feb 10 2023

Publication series

Name: Proceedings - 2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023

Conference

Conference: 2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023
Country/Territory: United States
City: Raleigh
Period: 2/8/23 - 2/10/23

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Safety, Risk, Reliability and Quality
  • Artificial Intelligence
