Sensor training data reduction for autonomous vehicles

Matthew Tomei, Alexander Schwing, Satish Narayanasamy, Rakesh Kumar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Autonomous vehicles require good learning models which, in turn, require a large amount of real-world sensor training data. Unfortunately, the staggering volume of data produced by in-vehicle sensors, especially cameras, makes both local storage and transmission of this data to the cloud for training prohibitively expensive. In this work, we explore techniques for reducing video frames in such a way that the quality of training for autonomous vehicles is minimally affected. We focus in particular on utility-aware data reduction schemes, in which the potential contribution of a video frame to the quality of learning (its utility) is explicitly considered during data reduction. Since the actual utility of a video frame cannot be computed online, we use surrogate utility metrics to decide which video frames to keep for training and which to discard. Our results show that utility-aware data reduction schemes can reduce the amount of camera data required for training by as much as 16× compared to random sampling for the same quality of learning (in terms of IoU).
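The abstract does not specify which surrogate utility metrics the authors use, so purely as an illustrative sketch, a utility-aware frame selection loop might look like the following. The novelty-based surrogate here (mean absolute pixel difference from the last kept frame) is a hypothetical stand-in for the paper's metrics, shown alongside the random-sampling baseline it is compared against:

```python
import random


def surrogate_utility(frame, last_kept):
    """Hypothetical surrogate utility: novelty of a frame relative to the
    last kept frame, measured as mean absolute pixel difference. This is
    a stand-in for the paper's (unspecified) surrogate metrics."""
    if last_kept is None:
        return float("inf")  # always keep the first frame
    return sum(abs(a - b) for a, b in zip(frame, last_kept)) / len(frame)


def utility_aware_reduce(frames, threshold):
    """Keep a frame only when its surrogate utility exceeds a threshold;
    redundant (low-novelty) frames are discarded online."""
    kept, last = [], None
    for frame in frames:
        if surrogate_utility(frame, last) > threshold:
            kept.append(frame)
            last = frame
    return kept


def random_reduce(frames, keep_ratio, seed=0):
    """Baseline: keep a uniformly random subset of the frames."""
    rng = random.Random(seed)
    return [f for f in frames if rng.random() < keep_ratio]
```

On a stream dominated by near-duplicate frames, the utility-aware policy keeps only the frames that add novelty, while random sampling keeps redundant and novel frames at the same rate; this is the intuition behind the paper's reported reduction over random sampling.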

Original language: English (US)
Title of host publication: HotEdgeVideo 2019 - Proceedings of the 2019 Workshop on Hot Topics in Video Analytics and Intelligent Edges, co-located with MobiCom 2019
Publisher: Association for Computing Machinery
Pages: 45-50
Number of pages: 6
ISBN (Electronic): 9781450369282
DOI: 10.1145/3349614.3356028
State: Published - Oct 7 2019
Event: 2019 Workshop on Hot Topics in Video Analytics and Intelligent Edges, HotEdgeVideo 2019, co-located with MobiCom 2019 - Los Cabos, Mexico
Duration: Oct 21 2019 → …

Publication series

Name: Proceedings of the Annual International Conference on Mobile Computing and Networking, MOBICOM

Conference

Conference: 2019 Workshop on Hot Topics in Video Analytics and Intelligent Edges, HotEdgeVideo 2019, co-located with MobiCom 2019
Country: Mexico
City: Los Cabos
Period: 10/21/19 → …

Fingerprint

  • Data reduction
  • Sensors
  • Cameras
  • Sampling

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Hardware and Architecture
  • Software

Cite this

Tomei, M., Schwing, A., Narayanasamy, S., & Kumar, R. (2019). Sensor training data reduction for autonomous vehicles. In HotEdgeVideo 2019 - Proceedings of the 2019 Workshop on Hot Topics in Video Analytics and Intelligent Edges, co-located with MobiCom 2019 (pp. 45-50). (Proceedings of the Annual International Conference on Mobile Computing and Networking, MOBICOM). Association for Computing Machinery. https://doi.org/10.1145/3349614.3356028

