TY - GEN
T1 - Symmetric multi-view stereo reconstruction from planar camera arrays
AU - Maitre, Matthieu
AU - Shinagawa, Yoshihisa
AU - Do, Minh N.
PY - 2008/9/23
Y1 - 2008/9/23
AB - We present a novel stereo algorithm which performs surface reconstruction from planar camera arrays. It incorporates the merits of both generic camera arrays and rectified binocular setups, recovering large surfaces like the former and performing efficient computations like the latter. First, we introduce a rectification algorithm which gives freedom in the design of camera arrays and simplifies photometric and geometric computations. We then define a novel set of data-fusion functions over 4-neighborhoods of cameras, which treat all cameras symmetrically and enable standard binocular stereo algorithms to handle arrays with an arbitrary number of cameras. In particular, we introduce a photometric fusion function which handles partial visibility and extracts depth information along both horizontal and vertical baselines. Finally, we show that layered depth images and sprites with depth can be efficiently extracted from the rectified 3D space. Experimental results on real images confirm the effectiveness of the proposed method, which reconstructs dense surfaces 20% larger on the Tsukuba dataset.
UR - http://www.scopus.com/inward/record.url?scp=51949117630&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=51949117630&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2008.4587425
DO - 10.1109/CVPR.2008.4587425
M3 - Conference contribution
AN - SCOPUS:51949117630
SN - 9781424422432
T3 - 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
BT - 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
T2 - 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
Y2 - 23 June 2008 through 28 June 2008
ER -