TY - JOUR
T1 - New Risk Bounds for 2D Total Variation Denoising
AU - Chatterjee, Sabyasachi
AU - Goswami, Subhajit
N1 - Funding Information:
Manuscript received October 12, 2019; revised January 22, 2021; accepted January 30, 2021. Date of publication February 16, 2021; date of current version May 20, 2021. This work was supported in part by an NSF grant and in part by an IDEX grant from Paris-Saclay. (Corresponding author: Sabyasachi Chatterjee.) Sabyasachi Chatterjee is with the Department of Statistics, University of Illinois at Urbana-Champaign, Champaign, IL 61820 USA (e-mail: [email protected]).
Publisher Copyright:
© 1963-2012 IEEE.
PY - 2021/6
Y1 - 2021/6
N2 - 2D Total Variation Denoising (TVD) is a widely used technique for image denoising. It is also an important nonparametric regression method for estimating functions with heterogeneous smoothness. Recent results have shown the TVD estimator to be nearly minimax rate optimal for the class of functions with bounded variation. In this paper, we complement these worst-case guarantees by investigating the adaptivity of the TVD estimator to functions which are piecewise constant on axis-aligned rectangles. We rigorously show that, when the truth is piecewise constant with few pieces, the ideally tuned TVD estimator performs better than in the worst case. We also study the issue of choosing the tuning parameter. In particular, we propose a fully data-driven version of the TVD estimator which enjoys worst-case risk guarantees similar to those of the ideally tuned TVD estimator.
AB - 2D Total Variation Denoising (TVD) is a widely used technique for image denoising. It is also an important nonparametric regression method for estimating functions with heterogeneous smoothness. Recent results have shown the TVD estimator to be nearly minimax rate optimal for the class of functions with bounded variation. In this paper, we complement these worst-case guarantees by investigating the adaptivity of the TVD estimator to functions which are piecewise constant on axis-aligned rectangles. We rigorously show that, when the truth is piecewise constant with few pieces, the ideally tuned TVD estimator performs better than in the worst case. We also study the issue of choosing the tuning parameter. In particular, we propose a fully data-driven version of the TVD estimator which enjoys worst-case risk guarantees similar to those of the ideally tuned TVD estimator.
KW - Nonparametric regression
KW - estimation of piecewise constant functions
KW - Gaussian width
KW - recursive partitioning
KW - tangent cone
KW - total variation denoising
KW - tuning-free estimation
UR - http://www.scopus.com/inward/record.url?scp=85101193832&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85101193832&partnerID=8YFLogxK
U2 - 10.1109/TIT.2021.3059657
DO - 10.1109/TIT.2021.3059657
M3 - Article
AN - SCOPUS:85101193832
SN - 0018-9448
VL - 67
SP - 4060
EP - 4091
JO - IEEE Transactions on Information Theory
JF - IEEE Transactions on Information Theory
IS - 6
M1 - 9354825
ER -