TY - GEN
T1 - Industry-track
T2 - 22nd ACM SIGBED International Conference on Embedded Software, EMSOFT 2022
AU - Abraham, Michael
AU - Mayne, Aaron
AU - Perez, Tristan
AU - De Oliveira, Italo Romani
AU - Yu, Huafeng
AU - Hsieh, Chiao
AU - Li, Yangge
AU - Sun, Dawei
AU - Mitra, Sayan
N1 - ACKNOWLEDGMENT: The authors would like to acknowledge the constructive feedback from the anonymous reviewers. The Illinois researchers were supported by research grants from the National Science Foundation (Award number NSF-SHF-2008883) and from the Boeing Company.
PY - 2022
Y1 - 2022
AB - Deep learning (DL) models are becoming effective at solving computer-vision tasks such as semantic segmentation, object tracking, and pose estimation on real-world captured images. Reliability analysis of autonomous systems that use these DL models as part of their perception systems has to account for the performance of these models. Autonomous systems with traditional sensors have tried-and-tested reliability assessment processes built on modular design, unit tests, system integration, compositional verification, certification, etc. In contrast, DL perception modules rely on data-driven or learned models. These models do not capture uncertainty and often lack robustness. Moreover, these models are often updated throughout the product's lifecycle as new data sets become available, and integrating an updated DL-based perception module requires restarting the reliability assessment and operation processes for the autonomous system from scratch. In this paper, we discuss three challenges related to specifying, verifying, and operating systems that incorporate DL-based perception. We illustrate these challenges through two concrete and open-source examples.
UR - http://www.scopus.com/inward/record.url?scp=85140202520&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85140202520&partnerID=8YFLogxK
U2 - 10.1109/EMSOFT55006.2022.00016
DO - 10.1109/EMSOFT55006.2022.00016
M3 - Conference contribution
AN - SCOPUS:85140202520
T3 - Proceedings - International Conference on Embedded Software, EMSOFT 2022
SP - 17
EP - 20
BT - Proceedings - International Conference on Embedded Software, EMSOFT 2022
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 7 October 2022 through 14 October 2022
ER -