Abstract
Graph learning-based collaborative filtering (GLCF), which is built upon the message-passing mechanism of graph neural networks (GNNs), has recently received great attention and exhibited superior performance in recommender systems. However, although prior work has shown that GNNs can be easily compromised by adversarial attacks, little attention has been paid to the vulnerability of GLCF; questions such as whether GLCF models can be fooled as easily as GNNs remain largely unexplored. In this article, we study the vulnerability of GLCF. Specifically, we first propose an adversarial attack against GLCF. Considering the unique challenges of attacking GLCF, we adopt a greedy strategy to search for locally optimal perturbations and design an attack utility function to handle the non-differentiable ranking-oriented metrics. Next, we propose a defense to robustify GLCF. The defense is based on the observation that attacks usually introduce suspicious interactions into the graph to manipulate the message-passing process. We therefore measure a suspicion score for each interaction and reduce the message weight of suspicious interactions accordingly. We also give a theoretical guarantee of the defense's robustness. Experimental results on three benchmark datasets demonstrate the effectiveness of both our attack and defense.
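To make the two mechanisms concrete, here is a minimal PyTorch-style sketch. It is an illustration under assumptions, not the paper's implementation: the function names (`surrogate_attack_utility`, `reweight_messages`), the hinge-style surrogate for the top-k hit metric, and the exponential down-weighting of suspicious edges are hypothetical choices consistent with the abstract's description.

```python
import torch

def surrogate_attack_utility(scores, target_item, k=10, margin=0.1):
    # Differentiable stand-in for the non-differentiable top-k hit metric:
    # penalizes the gap between the target item's predicted score and the
    # current k-th largest score, so the resulting utility can rank
    # candidate edge perturbations during a greedy search.
    kth_score = torch.topk(scores, k).values[-1].detach()  # fixed threshold
    return torch.clamp(kth_score - scores[target_item] + margin, min=0.0)

def reweight_messages(edge_weights, suspicion_scores, temperature=1.0):
    # Shrinks the propagation weight of interactions with high suspicion
    # scores (assumed in [0, 1]); clean edges keep weights close to their
    # original values, while likely-injected interactions contribute
    # little to the message-passing aggregation.
    return edge_weights * torch.exp(-suspicion_scores / temperature)
```

In this sketch, the attacker would evaluate `surrogate_attack_utility` for each candidate interaction flip and greedily commit the one with the largest utility, while the defender applies `reweight_messages` to the normalized adjacency weights before each GNN propagation layer.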
| Original language | English (US) |
|---|---|
| Article number | 87 |
| Journal | ACM Transactions on Information Systems |
| Volume | 41 |
| Issue number | 4 |
| State | Published - Mar 23 2023 |
Keywords
- Recommender system
- adversarial attack
- collaborative filtering
- defense
- graph neural network
ASJC Scopus subject areas
- Information Systems
- General Business, Management and Accounting
- Computer Science Applications