TY - GEN
T1 - Robustifying network protocols with adversarial examples
AU - Gilad, Tomer
AU - Jay, Nathan H.
AU - Shnaiderman, Michael
AU - Godfrey, Brighten
AU - Schapira, Michael
N1 - We thank the Israel Science Foundation and Huawei for support of our work.
PY - 2019/11/13
AB - Ideally, network protocols (e.g., for routing, congestion control, video streaming, etc.) will perform well across the entire range of environments in which they might operate. Unfortunately, this is typically not the case; a protocol might fail to achieve good performance when network conditions deviate from assumptions implicitly or explicitly underlying its design, or due to specific implementation choices. Identifying exact conditions in which a specific protocol fares badly (though good performance is feasible to attain) is, however, not always easy as the reasons for protocol suboptimality or misbehavior might be elusive. We make two contributions: (1) We present a novel framework that leverages reinforcement learning (RL) to generate network conditions in which a given protocol fails to perform well. Our framework can be used to assess the robustness of a given protocol and to guide changes to the protocol for making it more robust. (2) We show how our framework for generating adversarial network conditions can be used to enhance the robustness of RL-driven network protocols, which have gained substantial popularity of late. We demonstrate the usefulness of our approach in two contexts: adaptive video streaming and Internet congestion control.
UR - http://www.scopus.com/inward/record.url?scp=85077264787&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85077264787&partnerID=8YFLogxK
DO - 10.1145/3365609.3365862
M3 - Conference contribution
AN - SCOPUS:85077264787
T3 - HotNets 2019 - Proceedings of the 18th ACM Workshop on Hot Topics in Networks
SP - 85
EP - 92
BT - HotNets 2019 - Proceedings of the 18th ACM Workshop on Hot Topics in Networks
PB - Association for Computing Machinery
T2 - 18th ACM Workshop on Hot Topics in Networks, HotNets 2019
Y2 - 14 November 2019 through 15 November 2019
ER -