TY - GEN
T1 - Element rearrangement for tensor-based subspace learning
AU - Yan, Shuicheng
AU - Xu, Dong
AU - Lin, Stephen
AU - Huang, Thomas S.
AU - Chang, Shih-Fu
PY - 2007
Y1 - 2007
N2 - The success of tensor-based subspace learning depends heavily on reducing correlations along the column vectors of the mode-k flattened matrix. In this work, we study the problem of rearranging elements within a tensor in order to maximize these correlations, so that information redundancy in tensor data can be more extensively removed by existing tensor-based dimensionality reduction algorithms. An efficient iterative algorithm is proposed to tackle this essentially integer optimization problem. In each step, the tensor structure is refined with a spatially-constrained Earth Mover's Distance procedure that incrementally rearranges tensors to become more similar to their low-rank approximations, which have high correlation among features along certain tensor dimensions. Monotonic convergence of the algorithm is proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. In addition, we present an extension of the algorithm for conducting supervised subspace learning with tensor data. Experiments in both unsupervised and supervised subspace learning demonstrate the effectiveness of our proposed algorithms in improving data compression performance and classification accuracy.
UR - http://www.scopus.com/inward/record.url?scp=34948858491&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=34948858491&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2007.382984
DO - 10.1109/CVPR.2007.382984
M3 - Conference contribution
AN - SCOPUS:34948858491
SN - 1424411807
SN - 9781424411801
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
BT - 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
T2 - 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
Y2 - 17 June 2007 through 22 June 2007
ER -