TY - GEN
T1 - Representation Learning on Dynamic Network of Networks
AU - Zhang, Si
AU - Xia, Yinglong
AU - Zhu, Yan
AU - Tong, Hanghang
N1 - Publisher Copyright:
Copyright © 2023 by SIAM.
PY - 2023
Y1 - 2023
N2 - Network of networks (NoN), where each node of the main network represents a domain-specific network, is a powerful multi-network model that captures the relationships among entities at both coarse and fine granularities. Existing graph convolutional networks (GCNs) learn node representations on either a single network or multiple networks while overlooking the relationships among different networks (e.g., the main network structure). In addition, many real-world networks evolve over time, which makes it imperative yet even more challenging to leverage temporal information for node representation learning. In this paper, we study the node representation learning problem on dynamic networks of networks. The key idea behind the static model is a predict-then-propagate strategy, in which node representations are obtained by propagating the initial representations of the common nodes shared across domain-specific networks. To leverage the temporal information underlying dynamic NoN, we extend the static model with a gated recurrent unit (GRU) to capture the dynamics behind cross-network consistency, and a self-attention mechanism to learn the dependence of nodes on their historical representations. With these components, we propose an end-to-end model, DraNoN, to learn node representations on dynamic NoN. We conduct experiments on the dynamic network alignment task, which demonstrate the superior performance of DraNoN compared with state-of-the-art methods.
AB - Network of networks (NoN), where each node of the main network represents a domain-specific network, is a powerful multi-network model that captures the relationships among entities at both coarse and fine granularities. Existing graph convolutional networks (GCNs) learn node representations on either a single network or multiple networks while overlooking the relationships among different networks (e.g., the main network structure). In addition, many real-world networks evolve over time, which makes it imperative yet even more challenging to leverage temporal information for node representation learning. In this paper, we study the node representation learning problem on dynamic networks of networks. The key idea behind the static model is a predict-then-propagate strategy, in which node representations are obtained by propagating the initial representations of the common nodes shared across domain-specific networks. To leverage the temporal information underlying dynamic NoN, we extend the static model with a gated recurrent unit (GRU) to capture the dynamics behind cross-network consistency, and a self-attention mechanism to learn the dependence of nodes on their historical representations. With these components, we propose an end-to-end model, DraNoN, to learn node representations on dynamic NoN. We conduct experiments on the dynamic network alignment task, which demonstrate the superior performance of DraNoN compared with state-of-the-art methods.
UR - http://www.scopus.com/inward/record.url?scp=85180625617&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85180625617&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85180625617
T3 - 2023 SIAM International Conference on Data Mining, SDM 2023
SP - 298
EP - 306
BT - 2023 SIAM International Conference on Data Mining, SDM 2023
PB - Society for Industrial and Applied Mathematics Publications
T2 - 2023 SIAM International Conference on Data Mining, SDM 2023
Y2 - 27 April 2023 through 29 April 2023
ER -