TY - GEN
T1 - Deep joint image filtering
AU - Li, Yijun
AU - Huang, Jia-Bin
AU - Ahuja, Narendra
AU - Yang, Ming-Hsuan
N1 - This work is supported in part by the NSF CAREER Grant #1149783, gifts from Adobe and Nvidia, and the Office of Naval Research under Grant N00014-16-1-2314.
PY - 2016
Y1 - 2016
N2 - Joint image filters can leverage the guidance image as a prior and transfer the structural details from the guidance image to the target image for suppressing noise or enhancing spatial resolution. Existing methods rely on various kinds of explicit filter construction or hand-designed objective functions. It is thus difficult to understand, improve, and accelerate them in a coherent framework. In this paper, we propose a learning-based approach to construct a joint filter based on Convolutional Neural Networks. In contrast to existing methods that consider only the guidance image, our method can selectively transfer salient structures that are consistent in both guidance and target images. We show that the model trained on a certain type of data, e.g., RGB and depth images, generalizes well for other modalities, e.g., Flash/Non-Flash and RGB/NIR images. We validate the effectiveness of the proposed joint filter through extensive comparisons with state-of-the-art methods.
AB - Joint image filters can leverage the guidance image as a prior and transfer the structural details from the guidance image to the target image for suppressing noise or enhancing spatial resolution. Existing methods rely on various kinds of explicit filter construction or hand-designed objective functions. It is thus difficult to understand, improve, and accelerate them in a coherent framework. In this paper, we propose a learning-based approach to construct a joint filter based on Convolutional Neural Networks. In contrast to existing methods that consider only the guidance image, our method can selectively transfer salient structures that are consistent in both guidance and target images. We show that the model trained on a certain type of data, e.g., RGB and depth images, generalizes well for other modalities, e.g., Flash/Non-Flash and RGB/NIR images. We validate the effectiveness of the proposed joint filter through extensive comparisons with state-of-the-art methods.
KW - Deep convolutional neural networks
KW - Joint filtering
UR - http://www.scopus.com/inward/record.url?scp=84990053231&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84990053231&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-46493-0_10
DO - 10.1007/978-3-319-46493-0_10
M3 - Conference contribution
AN - SCOPUS:84990053231
SN - 9783319464923
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 154
EP - 169
BT - Computer Vision - 14th European Conference, ECCV 2016, Proceedings
A2 - Leibe, Bastian
A2 - Matas, Jiri
A2 - Sebe, Nicu
A2 - Welling, Max
PB - Springer
T2 - 14th European Conference on Computer Vision, ECCV 2016
Y2 - 11 October 2016 through 14 October 2016
ER -