TY - GEN
T1 - Self-Cueing Real-Time Attention Scheduling in Criticality-Aware Visual Machine Perception
AU - Liu, Shengzhong
AU - Fu, Xinzhe
AU - Wigness, Maggie
AU - David, Philip
AU - Yao, Shuochao
AU - Sha, Lui
AU - Abdelzaher, Tarek
N1 - ACKNOWLEDGEMENT Research reported in this paper was sponsored in part by DARPA award W911NF-17-C-0099, the Army Research Laboratory under Cooperative Agreement W911NF-17-20196, NSF CNS 20-38817. The views and conclusions contained in this document are those of the author(s) and should not be interpreted as representing the official policies of the CCDC Army Research Laboratory, DARPA, or the US government. The US government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation hereon.
PY - 2022
Y1 - 2022
N2 - This paper presents a self-cueing real-time framework for attention prioritization in AI-enabled visual perception systems that minimizes a notion of state uncertainty. By attention prioritization we refer to inspecting some parts of the scene before others in a criticality-aware fashion. By self-cueing, we refer to not needing external cueing sensors for prioritizing attention, thereby simplifying design. We show that attention prioritization saves resources, thus enabling more efficient and responsive real-time object tracking on resource-limited embedded platforms. The system consists of two components: First, an optical flow-based module decides on the regions to be viewed on a subframe level, as well as their criticality. Second, a novel batched proportional balancing (BPB) scheduling policy decides how to schedule these regions for inspection by a deep neural network (DNN), and how to parallelize execution on the GPU. We implement the system on an NVIDIA Jetson Xavier platform, and empirically demonstrate the superiority of the proposed architecture through an extensive evaluation using a real-world driving dataset.
AB - This paper presents a self-cueing real-time framework for attention prioritization in AI-enabled visual perception systems that minimizes a notion of state uncertainty. By attention prioritization we refer to inspecting some parts of the scene before others in a criticality-aware fashion. By self-cueing, we refer to not needing external cueing sensors for prioritizing attention, thereby simplifying design. We show that attention prioritization saves resources, thus enabling more efficient and responsive real-time object tracking on resource-limited embedded platforms. The system consists of two components: First, an optical flow-based module decides on the regions to be viewed on a subframe level, as well as their criticality. Second, a novel batched proportional balancing (BPB) scheduling policy decides how to schedule these regions for inspection by a deep neural network (DNN), and how to parallelize execution on the GPU. We implement the system on an NVIDIA Jetson Xavier platform, and empirically demonstrate the superiority of the proposed architecture through an extensive evaluation using a real-world driving dataset.
UR - http://www.scopus.com/inward/record.url?scp=85133713682&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85133713682&partnerID=8YFLogxK
U2 - 10.1109/RTAS54340.2022.00022
DO - 10.1109/RTAS54340.2022.00022
M3 - Conference contribution
AN - SCOPUS:85133713682
T3 - Proceedings of the IEEE Real-Time and Embedded Technology and Applications Symposium, RTAS
SP - 173
EP - 186
BT - Proceedings - 28th IEEE Real-Time and Embedded Technology and Applications Symposium, RTAS 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 28th IEEE Real-Time and Embedded Technology and Applications Symposium, RTAS 2022
Y2 - 4 May 2022 through 6 May 2022
ER -