TY - GEN
T1 - Revisiting Deformable Convolution for Depth Completion
AU - Sun, Xinglong
AU - Ponce, Jean
AU - Wang, Yu-Xiong
N1 - ACKNOWLEDGMENT: This work was supported in part by NSF Grant 2106825, NIFA Award 2020-67021-32799, the Jump ARCHES endowment, the NCSA Fellows program, the Inria/NYU collaboration, the Louis Vuitton/ENS chair on artificial intelligence, and the French government under management of Agence Nationale de la Recherche as part of the Investissements d’avenir program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute). We thank Beomjun Kim and Bumsub Ham for providing their DKN code.
PY - 2023
Y1 - 2023
AB - Depth completion, which aims to generate high-quality dense depth maps from sparse depth maps, has attracted increasing attention in recent years. Previous work usually employs RGB images as guidance, and introduces iterative spatial propagation to refine estimated coarse depth maps. However, most of the propagation refinement methods require several iterations and suffer from a fixed receptive field, which may contain irrelevant and useless information with very sparse input. In this paper, we address these two challenges simultaneously by revisiting the idea of deformable convolution. We propose an effective architecture that leverages deformable kernel convolution as a single-pass refinement module, and empirically demonstrate its superiority. To better understand the function of deformable convolution and exploit it for depth completion, we further systematically investigate a variety of representative strategies. Our study reveals that, different from prior work, deformable convolution needs to be applied on an estimated depth map with a relatively high density for better performance. We evaluate our model on the large-scale KITTI dataset and achieve state-of-the-art level performance in both accuracy and inference speed. Our code is available at https://github.com/AlexSunNiklReDC.
UR - https://www.scopus.com/pages/publications/85182526041
UR - https://www.scopus.com/inward/citedby.url?scp=85182526041&partnerID=8YFLogxK
U2 - 10.1109/IROS55552.2023.10342026
DO - 10.1109/IROS55552.2023.10342026
M3 - Conference contribution
AN - SCOPUS:85182526041
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 1300
EP - 1306
BT - 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023
Y2 - 1 October 2023 through 5 October 2023
ER -
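
Note (not part of the citation record): the abstract describes deformable kernel convolution used as a single-pass refinement module over a coarse depth map with RGB guidance. The following is a minimal, hedged sketch of that general idea, not the authors' architecture; the layer sizes, the offset head, and the class name DeformableDepthRefiner are assumptions for illustration, and it relies on torchvision.ops.DeformConv2d.

# Minimal sketch: deformable convolution as a single-pass depth-refinement step.
# Assumes PyTorch + torchvision; NOT the implementation from the cited paper.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableDepthRefiner(nn.Module):
    """Refine a coarse depth map in one pass with a deformable 3x3 convolution."""

    def __init__(self, guide_channels: int = 3, feat_channels: int = 32, kernel_size: int = 3):
        super().__init__()
        k = kernel_size
        # Fuse the coarse depth with RGB guidance into a small feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1 + guide_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Predict per-pixel sampling offsets: 2 values (dy, dx) per kernel tap.
        self.offset_head = nn.Conv2d(feat_channels, 2 * k * k, 3, padding=1)
        # Deformable convolution applied directly to the coarse depth map.
        self.deform = DeformConv2d(1, 1, kernel_size=k, padding=k // 2)

    def forward(self, coarse_depth: torch.Tensor, rgb: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(torch.cat([coarse_depth, rgb], dim=1))
        offsets = self.offset_head(feats)
        # Single refinement pass: residual correction from deformed sampling.
        return coarse_depth + self.deform(coarse_depth, offsets)


if __name__ == "__main__":
    refiner = DeformableDepthRefiner()
    depth = torch.rand(1, 1, 64, 64)   # coarse depth estimate (1 channel)
    rgb = torch.rand(1, 3, 64, 64)     # RGB guidance image
    print(refiner(depth, rgb).shape)   # torch.Size([1, 1, 64, 64])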