TY - CPAPER
T1 - Inaudible voice commands: The long-range attack and defense
T2 - 15th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2018
AU - Roy, Nirupam
AU - Shen, Sheng
AU - Hassanieh, Haitham
AU - Choudhury, Romit Roy
N1 - Funding Information:
We sincerely thank our shepherd Prof. Shyamnath Gollakota and the anonymous reviewers for their valuable feedback. We are grateful to the Joan and Lalit Bahl Fellowship, Qualcomm, IBM, and NSF (award number: 1619313) for partially funding this research.
PY - 2018
Y1 - 2018
N2 - Recent work has shown that inaudible signals (at ultrasound frequencies) can be designed in a way that they become audible to microphones. Designed well, this can empower an adversary to stand on the road and silently control Amazon Echo and Google Home-like devices in people's homes. A voice command like “Alexa, open the garage door” can be a serious threat. While recent work has demonstrated feasibility, two issues remain open: (1) The attacks can only be launched from within 5 ft of Amazon Echo, and increasing this range makes the attack audible. (2) There is no clear solution against these ultrasound attacks, since they exploit a recently discovered loophole in hardware non-linearity. This paper is an attempt to close both these gaps. We begin by developing an attack that achieves 25 ft range, limited by the power of our amplifier. We then develop a defense against this class of voice attacks that exploit non-linearity. Our core ideas emerge from a careful forensics on voice, i.e., finding indelible traces of nonlinearity in recorded voice signals. Our system, LipRead, demonstrates the inaudible attack in various conditions, followed by defenses that only require software changes to the microphone.
AB - Recent work has shown that inaudible signals (at ultrasound frequencies) can be designed in a way that they become audible to microphones. Designed well, this can empower an adversary to stand on the road and silently control Amazon Echo and Google Home-like devices in people's homes. A voice command like “Alexa, open the garage door” can be a serious threat. While recent work has demonstrated feasibility, two issues remain open: (1) The attacks can only be launched from within 5 ft of Amazon Echo, and increasing this range makes the attack audible. (2) There is no clear solution against these ultrasound attacks, since they exploit a recently discovered loophole in hardware non-linearity. This paper is an attempt to close both these gaps. We begin by developing an attack that achieves 25 ft range, limited by the power of our amplifier. We then develop a defense against this class of voice attacks that exploit non-linearity. Our core ideas emerge from a careful forensics on voice, i.e., finding indelible traces of nonlinearity in recorded voice signals. Our system, LipRead, demonstrates the inaudible attack in various conditions, followed by defenses that only require software changes to the microphone.
UR - http://www.scopus.com/inward/record.url?scp=85056502740&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85056502740&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85056502740
T3 - Proceedings of the 15th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2018
SP - 547
EP - 560
BT - Proceedings of the 15th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2018
PB - USENIX Association
Y2 - 9 April 2018 through 11 April 2018
ER -