Abstract
Proof-of-Learning (PoL) proposes that a model owner log training checkpoints to establish a proof of having expended the computation necessary for training. The authors of PoL forgo cryptographic approaches, trading rigorous security guarantees for scalability to deep learning. They argued empirically for this trade-off by showing that spoofing - computing a proof for a stolen model - is as expensive as obtaining the proof honestly by training the model. However, recent work has provided a counter-example and thus invalidated this observation. In this work we demonstrate, first, that while current PoL verification is indeed not robust to adversaries, recent work has largely underestimated this lack of robustness. This is because existing spoofing strategies are either unreproducible or target weakened instantiations of PoL, meaning they are easily thwarted by changing the hyperparameters of the verification. Instead, we introduce the first spoofing strategies that can be reproduced across different configurations of the PoL verification and cost only a fraction of previous spoofing strategies. This is possible because we identify key vulnerabilities of PoL and systematically analyze the underlying assumptions needed for robust verification of a proof. On the theoretical side, we show how realizing these assumptions reduces to open problems in learning theory. We conclude that a provably robust PoL verification mechanism cannot be developed without a further understanding of optimization in deep learning.
Original language | English (US) |
---|---|
Title of host publication | Proceedings - 8th IEEE European Symposium on Security and Privacy, EuroS&P 2023 |
Place of Publication | Los Alamitos |
Publisher | IEEE Computer Society |
Pages | 797-816 |
Number of pages | 20 |
ISBN (Electronic) | 9781665465120 |
DOIs | |
State | Published - Jul 1 2023 |
Externally published | Yes |