Adversarial attacks on an oblivious recommender

Konstantina Christakopoulou, Arindam Banerjee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Can machine learning models be easily fooled? Despite the recent surge of interest in learned adversarial attacks in other domains, in the context of recommendation systems this question has mainly been answered using hand-engineered fake user profiles. This paper attempts to reduce this gap. We provide a formulation for learning to attack a recommender as a repeated general-sum game between two players, i.e., an adversary and a recommender oblivious to the adversary's existence. We consider the challenging case of poisoning attacks, which focus on the training phase of the recommender model. We generate adversarial user profiles targeting subsets of users or items, or generally the top-K recommendation quality. Moreover, we ensure that the adversarial user profiles remain unnoticeable by preserving proximity between the real user rating/interaction distribution and the adversarial fake user distribution. To cope with the challenge of the adversary not having access to the gradient of the recommender's objective with respect to the fake user profiles, we provide a non-trivial algorithm building upon zero-order optimization techniques. We offer a wide range of experiments, instantiating the proposed method for the classic and popular case of a low-rank recommender, and illustrating the extent of the recommender's vulnerability to a variety of adversarial intents. These results can serve as a motivating point for more research into recommender defense strategies against machine-learned attacks.
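
The abstract mentions estimating the gradient of the recommender's objective with respect to the fake user profiles via zero-order optimization, since the adversary cannot differentiate through the oblivious recommender. The sketch below is a minimal illustration of that general idea only, not the authors' algorithm; the names `recommender_loss`, `fake_profiles`, and the commented `project` step are illustrative assumptions.

```python
# Minimal sketch of a two-point, random-direction zeroth-order gradient estimate:
# the adversary queries the recommender's objective as a black box and averages
# finite-difference estimates over random probe directions.
import numpy as np

def zeroth_order_gradient(recommender_loss, fake_profiles, mu=1e-2, num_samples=20):
    """Estimate d loss / d fake_profiles without access to true gradients.

    recommender_loss: callable mapping a fake-profile matrix to a scalar
                      (e.g., the attack objective measured after the recommender
                      is retrained on real plus fake profiles).
    fake_profiles:    (num_fake_users, num_items) rating/interaction matrix.
    mu:               smoothing radius for the finite differences.
    num_samples:      number of random directions averaged over.
    """
    grad = np.zeros_like(fake_profiles)
    base = recommender_loss(fake_profiles)
    for _ in range(num_samples):
        u = np.random.randn(*fake_profiles.shape)      # random probe direction
        perturbed = recommender_loss(fake_profiles + mu * u)
        grad += (perturbed - base) / mu * u            # directional-derivative estimate
    return grad / num_samples

# The adversary could then take a projected step, keeping the fake profiles close
# to the real rating distribution (hypothetical `project` helper):
# fake_profiles = project(fake_profiles - step_size * zeroth_order_gradient(...))
```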

Original language: English (US)
Title of host publication: RecSys 2019 - 13th ACM Conference on Recommender Systems
Publisher: Association for Computing Machinery
Pages: 322-330
Number of pages: 9
ISBN (Electronic): 9781450362436
DOIs
State: Published - Sep 10 2019
Externally published: Yes
Event: 13th ACM Conference on Recommender Systems, RecSys 2019 - Copenhagen, Denmark
Duration: Sep 16 2019 - Sep 20 2019

Publication series

Name: RecSys 2019 - 13th ACM Conference on Recommender Systems

Conference

Conference: 13th ACM Conference on Recommender Systems, RecSys 2019
Country/Territory: Denmark
City: Copenhagen
Period: 9/16/19 - 9/20/19

Keywords

  • Learned Adversarial Attacks
  • Recommender Systems

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Information Systems
  • Computer Science Applications
