TY - GEN
T1 - Sensor training data reduction for autonomous vehicles
AU - Tomei, Matthew
AU - Schwing, Alexander
AU - Narayanasamy, Satish
AU - Kumar, Rakesh
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/10/7
Y1 - 2019/10/7
N2 - Autonomous vehicles require good learning models which, in turn, require a large amount of real-world sensor training data. Unfortunately, the staggering volume of data produced by in-vehicle sensors, especially the cameras, makes both local storage and transmission of this data to the cloud for training prohibitively expensive. In this work, we explore techniques for reducing video frames in a way that minimally affects the quality of training for autonomous vehicles. We particularly focus on utility-aware data reduction schemes, where the potential contribution of a video frame to enhancing the quality of learning (or utility) is explicitly considered during data reduction. Since the actual utility of a video frame cannot be computed online, we use surrogate utility metrics to decide which video frames to keep for training and which to discard. Our results show that utility-aware data reduction schemes can reduce the amount of camera data required for training by as much as 16× compared to random sampling for the same quality of learning (in terms of IoU).
AB - Autonomous vehicles require good learning models which, in turn, require a large amount of real-world sensor training data. Unfortunately, the staggering volume of data produced by in-vehicle sensors, especially the cameras, makes both local storage and transmission of this data to the cloud for training prohibitively expensive. In this work, we explore techniques for reducing video frames in a way that minimally affects the quality of training for autonomous vehicles. We particularly focus on utility-aware data reduction schemes, where the potential contribution of a video frame to enhancing the quality of learning (or utility) is explicitly considered during data reduction. Since the actual utility of a video frame cannot be computed online, we use surrogate utility metrics to decide which video frames to keep for training and which to discard. Our results show that utility-aware data reduction schemes can reduce the amount of camera data required for training by as much as 16× compared to random sampling for the same quality of learning (in terms of IoU).
UR - http://www.scopus.com/inward/record.url?scp=85076437778&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85076437778&partnerID=8YFLogxK
U2 - 10.1145/3349614.3356028
DO - 10.1145/3349614.3356028
M3 - Conference contribution
AN - SCOPUS:85076437778
T3 - Proceedings of the Annual International Conference on Mobile Computing and Networking, MOBICOM
SP - 45
EP - 50
BT - HotEdgeVideo 2019 - Proceedings of the 2019 Workshop on Hot Topics in Video Analytics and Intelligent Edges, co-located with MobiCom 2019
PB - Association for Computing Machinery
T2 - 2019 Workshop on Hot Topics in Video Analytics and Intelligent Edges, HotEdgeVideo 2019, co-located with MobiCom 2019
Y2 - 21 October 2019
ER -