TY - GEN
T1 - Sparse projections over graph
AU - Cai, Deng
AU - He, Xiaofei
AU - Han, Jiawei
PY - 2008
Y1 - 2008
AB - Recent studies have shown that canonical algorithms such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) can be obtained from a graph-based dimensionality reduction framework. However, these algorithms yield projective maps that are linear combinations of all the original features, and the results are difficult to interpret psychologically and physiologically. This paper presents a novel technique for learning a sparse projection over graphs, in which the data in the reduced subspace are represented as a linear combination of a subset of the most relevant features. Compared to PCA and LDA, the results obtained by sparse projection are often easier to interpret. Our algorithm is based on a graph embedding model, which encodes the discriminating and geometrical structure in terms of the data affinity. Once the embedding results are obtained, we apply regularized regression to learn a set of sparse basis functions. Specifically, by using an L1-norm regularizer (e.g., the lasso), the sparse projections can be computed efficiently. Experimental results on two document databases demonstrate the effectiveness of our method.
UR - http://www.scopus.com/inward/record.url?scp=57749180616&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=57749180616&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:57749180616
SN - 9781577353683
T3 - Proceedings of the National Conference on Artificial Intelligence
SP - 610
EP - 615
BT - AAAI-08/IAAI-08 Proceedings - 23rd AAAI Conference on Artificial Intelligence and the 20th Innovative Applications of Artificial Intelligence Conference
T2 - 23rd AAAI Conference on Artificial Intelligence and the 20th Innovative Applications of Artificial Intelligence Conference, AAAI-08/IAAI-08
Y2 - 13 July 2008 through 17 July 2008
ER -