TY - JOUR
T1 - Blockwise coordinate descent schemes for efficient and effective dictionary learning
AU - Liu, Bao Di
AU - Wang, Yu Xiong
AU - Shen, Bin
AU - Li, Xue
AU - Zhang, Yu Jin
AU - Wang, Yan Jiang
N1 - Funding Information:
This paper is supported partly by the National Natural Science Foundation of China (Grant nos. 61402535, 61271407), the Natural Science Foundation for Youths of Shandong Province, China (Grant no. ZR2014FQ001), Qingdao Science and Technology Project (No. 14-2-4-111-jch), and the Fundamental Research Funds for the Central Universities, China University of Petroleum (East China) (Grant no. 14CX02169A).
Publisher Copyright:
© 2015 Elsevier B.V.
PY - 2016/2/20
Y1 - 2016/2/20
N2 - Sparse representation based dictionary learning, which is usually viewed as a method for rearranging the structure of the original data so that the energy is compacted over a non-orthogonal and over-complete dictionary, is widely used in signal processing, pattern recognition, machine learning, statistics, and neuroscience. The current sparse representation framework decouples the optimization problem into two subproblems, i.e., alternating sparse coding and dictionary learning using different optimizers, treating the elements of the dictionary and the codes separately. In this paper, we treat the elements of both the dictionary and the codes homogeneously. The original optimization is directly decoupled into several blockwise alternating subproblems rather than the above two. Hence, the sparse coding and dictionary learning optimizations are unified. More precisely, the variables involved in the optimization problem are partitioned into several suitable blocks with convexity preserved, making it possible to perform an exact blockwise coordinate descent. For each separable subproblem, a closed-form solution is obtained based on the convexity and monotonicity of the parabolic function. The algorithm is thus simple, efficient, and effective. Experimental results show that our algorithm significantly accelerates the learning process. An application to image classification further demonstrates the efficiency of our proposed optimization strategy.
AB - Sparse representation based dictionary learning, which is usually viewed as a method for rearranging the structure of the original data so that the energy is compacted over a non-orthogonal and over-complete dictionary, is widely used in signal processing, pattern recognition, machine learning, statistics, and neuroscience. The current sparse representation framework decouples the optimization problem into two subproblems, i.e., alternating sparse coding and dictionary learning using different optimizers, treating the elements of the dictionary and the codes separately. In this paper, we treat the elements of both the dictionary and the codes homogeneously. The original optimization is directly decoupled into several blockwise alternating subproblems rather than the above two. Hence, the sparse coding and dictionary learning optimizations are unified. More precisely, the variables involved in the optimization problem are partitioned into several suitable blocks with convexity preserved, making it possible to perform an exact blockwise coordinate descent. For each separable subproblem, a closed-form solution is obtained based on the convexity and monotonicity of the parabolic function. The algorithm is thus simple, efficient, and effective. Experimental results show that our algorithm significantly accelerates the learning process. An application to image classification further demonstrates the efficiency of our proposed optimization strategy.
KW - Coordinate descent
KW - Dictionary learning
KW - Image classification
KW - Sparse representation
UR - http://www.scopus.com/inward/record.url?scp=84957727773&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84957727773&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2015.06.096
DO - 10.1016/j.neucom.2015.06.096
M3 - Article
AN - SCOPUS:84957727773
SN - 0925-2312
VL - 178
SP - 25
EP - 35
JO - Neurocomputing
JF - Neurocomputing
ER -