TY - JOUR
T1 - Clustering Through Hybrid Network Architecture With Support Vectors
AU - Ergul, Emrah
AU - Arica, Nafiz
AU - Ahuja, Narendra
AU - Erturk, Sarp
N1 - Funding Information:
Manuscript received December 29, 2014; revised December 30, 2015 and February 21, 2016; accepted March 10, 2016. Date of publication March 29, 2016; date of current version May 15, 2017. This work was supported by the Scientific and Technological Research Council of Turkey under Grant 2214-B.14.2.TBT.0.06.01-214-83.
Publisher Copyright:
© 2016 IEEE.
PY - 2017/6
Y1 - 2017/6
N2 - In this paper, we propose a clustering algorithm based on a two-phase neural network architecture. We combine the strength of an autoencoder-like network for unsupervised representation learning with the discriminative power of a support vector machine (SVM) network for fine-tuning the initial clusters. The first network, referred to as the prototype encoding network, minimizes the data reconstruction error in an unsupervised manner. The second phase, i.e., the SVM network, maximizes the margin between cluster boundaries in a supervised way, making use of the first network's output. Both networks update the cluster centroids successively by establishing a topology-preserving scheme, similar to a self-organizing map, on the latent space of each network. Cluster fine-tuning is accomplished within a network structure by alternately using the encoding parts of both networks. In the experiments, challenging data sets from two popular repositories, with different patterns, dimensionalities, and numbers of clusters, are used. The proposed hybrid architecture achieves better results, both visually and analytically, than previous neural network-based approaches in the literature.
AB - In this paper, we propose a clustering algorithm based on a two-phase neural network architecture. We combine the strength of an autoencoder-like network for unsupervised representation learning with the discriminative power of a support vector machine (SVM) network for fine-tuning the initial clusters. The first network, referred to as the prototype encoding network, minimizes the data reconstruction error in an unsupervised manner. The second phase, i.e., the SVM network, maximizes the margin between cluster boundaries in a supervised way, making use of the first network's output. Both networks update the cluster centroids successively by establishing a topology-preserving scheme, similar to a self-organizing map, on the latent space of each network. Cluster fine-tuning is accomplished within a network structure by alternately using the encoding parts of both networks. In the experiments, challenging data sets from two popular repositories, with different patterns, dimensionalities, and numbers of clusters, are used. The proposed hybrid architecture achieves better results, both visually and analytically, than previous neural network-based approaches in the literature.
KW - Autoencoder (AE) network
KW - clustering neural networks
KW - greedy layerwise learning
KW - prototype encoding (PE) network
KW - support vector machine (SVM)
UR - http://www.scopus.com/inward/record.url?scp=85028316423&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85028316423&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2016.2542059
DO - 10.1109/TNNLS.2016.2542059
M3 - Article
C2 - 28113825
AN - SCOPUS:85028316423
SN - 2162-237X
VL - 28
SP - 1373
EP - 1385
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 6
M1 - 7442845
ER -