TY - GEN
T1 - MapPrior: Bird's-Eye View Map Layout Estimation with Generative Models
T2 - 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023
AU - Zhu, Xiyue
AU - Zyrianov, Vlas
AU - Liu, Zhijian
AU - Wang, Shenlong
N1 - Acknowledgement. The project is partially funded by the Illinois Smart Transportation Initiative STII-21-07, an Amazon Research Award, an Intel Research Gift, and an IBM IIDAI grant. We also thank NVIDIA for the Academic Hardware Grant. VZ is supported by the Frontier Fellowship.
PY - 2023
Y1 - 2023
N2 - Despite tremendous advancements in bird's-eye view (BEV) perception, existing models fall short in generating realistic and coherent semantic map layouts, and they fail to account for uncertainties arising from partial sensor information (such as occlusion or limited coverage). In this work, we introduce MapPrior, a novel BEV perception framework that combines a traditional discriminative BEV perception model with a learned generative model for semantic map layouts. Our MapPrior delivers predictions with better accuracy, realism and uncertainty awareness. We evaluate our model on the large-scale nuScenes benchmark. At the time of submission, MapPrior outperforms the strongest competing method, with significantly improved MMD and ECE scores in camera- and LiDAR-based BEV perception. Furthermore, our method can be used to perpetually generate layouts with unconditional sampling.
UR - http://www.scopus.com/inward/record.url?scp=85185867492&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85185867492&partnerID=8YFLogxK
U2 - 10.1109/ICCV51070.2023.00756
DO - 10.1109/ICCV51070.2023.00756
M3 - Conference contribution
AN - SCOPUS:85185867492
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 8194
EP - 8205
BT - Proceedings - 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 2 October 2023 through 6 October 2023
ER -