TY - GEN
T1 - Curriculum learning for heterogeneous star network embedding via deep reinforcement learning
AU - Qu, Meng
AU - Tang, Jian
AU - Han, Jiawei
N1 - Publisher Copyright:
© 2018 Association for Computing Machinery.
PY - 2018/2/2
Y1 - 2018/2/2
AB - Learning node representations for networks has attracted much attention recently due to its effectiveness in a variety of applications. This paper focuses on learning node representations for heterogeneous star networks, which have a center node type linked with multiple attribute node types through different types of edges. In heterogeneous star networks, we observe that the training order of different types of edges affects the learning performance significantly. Therefore we study learning curricula for node representation learning in heterogeneous star networks, i.e., learning an optimal sequence of edges of different types for the node representation learning process. We formulate the problem as a Markov decision process, with the action as selecting a specific type of edges for learning or terminating the training process, and the state as the sequence of edge types selected so far. The reward is calculated as the performance on external tasks with node representations as features, and the goal is to take a series of actions to maximize the cumulative rewards. We propose an approach based on deep reinforcement learning for this problem. Our approach leverages LSTM models to encode states and further estimate the expected cumulative reward of each state-action pair, which essentially measures the long-term performance of different actions at each state. Experimental results on real-world heterogeneous star networks demonstrate the effectiveness and efficiency of our approach over competitive baseline approaches.
UR - http://www.scopus.com/inward/record.url?scp=85046907621&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85046907621&partnerID=8YFLogxK
DO - 10.1145/3159652.3159711
M3 - Conference contribution
AN - SCOPUS:85046907621
T3 - WSDM 2018 - Proceedings of the 11th ACM International Conference on Web Search and Data Mining
SP - 468
EP - 476
BT - WSDM 2018 - Proceedings of the 11th ACM International Conference on Web Search and Data Mining
PB - Association for Computing Machinery
T2 - 11th ACM International Conference on Web Search and Data Mining, WSDM 2018
Y2 - 5 February 2018 through 9 February 2018
ER -