TY - GEN
T1 - JIGSAW
T2 - 2024 IEEE International Conference on Multimedia and Expo, ICME 2024
AU - Gokarn, Ila
AU - Hu, Yigong
AU - Abdelzaher, Tarek
AU - Misra, Archan
N1 - This work was supported by the National Research Foundation, Prime Minister's Office, Singapore under both its NRF Investigatorship grant (NRF-NRFI05-2019-0007) and its Campus for Research Excellence and Technological Enterprise (CREATE) program. Mens, Manus, and Machina (M3S) is an interdisciplinary research group (IRG) of the Singapore-MIT Alliance for Research and Technology (SMART) centre. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore. This work was also funded in part by DEVCOM ARL under Cooperative Agreement W911NF-17-2-0196 (ARL IoBT CRA). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
PY - 2024
Y1 - 2024
N2 - We present JIGSAW, a novel system that performs edge-based streaming perception over multiple video streams, while additionally factoring in the redundancy offered by the spatial overlap often exhibited in urban, multi-camera deployments. To assure high streaming throughput, JIGSAW extracts and spatially multiplexes multiple regions-of-interest from different camera frames into a smaller canvas frame. Moreover, to ensure that perception stays abreast of evolving object kinematics, JIGSAW includes a utility-based weighted scheduler to preferentially prioritize and even skip object-specific tiles extracted from an incoming stream of camera frames. Using the CityFlowV2 traffic surveillance dataset, we show that JIGSAW can simultaneously process 25 cameras on a single Jetson TX2 with a 66.6% increase in accuracy and a simultaneous 18x (1800%) gain in cumulative throughput (475 FPS), far outperforming competitive baselines.
AB - We present JIGSAW, a novel system that performs edge-based streaming perception over multiple video streams, while additionally factoring in the redundancy offered by the spatial overlap often exhibited in urban, multi-camera deployments. To assure high streaming throughput, JIGSAW extracts and spatially multiplexes multiple regions-of-interest from different camera frames into a smaller canvas frame. Moreover, to ensure that perception stays abreast of evolving object kinematics, JIGSAW includes a utility-based weighted scheduler to preferentially prioritize and even skip object-specific tiles extracted from an incoming stream of camera frames. Using the CityFlowV2 traffic surveillance dataset, we show that JIGSAW can simultaneously process 25 cameras on a single Jetson TX2 with a 66.6% increase in accuracy and a simultaneous 18x (1800%) gain in cumulative throughput (475 FPS), far outperforming competitive baselines.
KW - Canvas-based Processing
KW - Edge AI
KW - Machine Perception
UR - http://www.scopus.com/inward/record.url?scp=85196162484&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85196162484&partnerID=8YFLogxK
U2 - 10.1109/ICME57554.2024.10688074
DO - 10.1109/ICME57554.2024.10688074
M3 - Conference contribution
AN - SCOPUS:85196162484
T3 - Proceedings - IEEE International Conference on Multimedia and Expo
BT - 2024 IEEE International Conference on Multimedia and Expo, ICME 2024
PB - IEEE Computer Society
Y2 - 15 July 2024 through 19 July 2024
ER -