TY - GEN
T1 - PatchMatch-RL: Deep MVS with Pixelwise Depth, Normal, and Visibility
T2 - 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021
AU - Lee, Jae Yong
AU - DeGol, Joseph
AU - Zou, Chuhang
AU - Hoiem, Derek
N1 - Funding Information:
We thank ONR MURI Award N00014-16-1-2007 for supporting this research.
Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Recent learning-based multi-view stereo (MVS) methods show excellent performance with dense cameras and small depth ranges. However, non-learning-based approaches still outperform them on scenes with large depth ranges and sparser wide-baseline views, in part due to their PatchMatch optimization over pixelwise estimates of depth, normals, and visibility. In this paper, we propose an end-to-end trainable PatchMatch-based MVS approach that combines the advantages of trainable costs and regularizations with pixelwise estimates. To overcome the challenge of the non-differentiable PatchMatch optimization, which involves iterative sampling and hard decisions, we use reinforcement learning to minimize expected photometric cost and maximize the likelihood of ground-truth depths and normals. We incorporate normal estimation by using dilated patch kernels and propose a recurrent cost regularization that extends beyond frontal plane-sweep algorithms to our pixelwise depth/normal estimates. We evaluate our method on the widely used MVS benchmarks ETH3D and Tanks and Temples (TnT). On ETH3D, our method outperforms other recent learning-based approaches, and it performs comparably on the advanced TnT set.
AB - Recent learning-based multi-view stereo (MVS) methods show excellent performance with dense cameras and small depth ranges. However, non-learning-based approaches still outperform them on scenes with large depth ranges and sparser wide-baseline views, in part due to their PatchMatch optimization over pixelwise estimates of depth, normals, and visibility. In this paper, we propose an end-to-end trainable PatchMatch-based MVS approach that combines the advantages of trainable costs and regularizations with pixelwise estimates. To overcome the challenge of the non-differentiable PatchMatch optimization, which involves iterative sampling and hard decisions, we use reinforcement learning to minimize expected photometric cost and maximize the likelihood of ground-truth depths and normals. We incorporate normal estimation by using dilated patch kernels and propose a recurrent cost regularization that extends beyond frontal plane-sweep algorithms to our pixelwise depth/normal estimates. We evaluate our method on the widely used MVS benchmarks ETH3D and Tanks and Temples (TnT). On ETH3D, our method outperforms other recent learning-based approaches, and it performs comparably on the advanced TnT set.
UR - http://www.scopus.com/inward/record.url?scp=85127799405&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85127799405&partnerID=8YFLogxK
U2 - 10.1109/ICCV48922.2021.00610
DO - 10.1109/ICCV48922.2021.00610
M3 - Conference contribution
AN - SCOPUS:85127799405
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 6138
EP - 6147
BT - Proceedings - 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 11 October 2021 through 17 October 2021
ER -