TY - GEN
T1 - Multi-task Knowledge Graph Representations via Residual Functions
AU - Krishnan, Adit
AU - Das, Mahashweta
AU - Bendre, Mangesh
AU - Wang, Fei
AU - Yang, Hao
AU - Sundaram, Hari
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - In this paper, we propose MuTATE, a Multi-Task Augmented approach to learn Transferable Embeddings of knowledge graphs. Previous knowledge graph representation techniques either employ task-agnostic geometric hypotheses to learn informative node embeddings or integrate task-specific learning objectives like attribute prediction. In contrast, our framework unifies multiple co-dependent learning objectives with knowledge graph enrichment. We define co-dependence as multiple tasks that extract covariant distributions of entities and their relationships for prediction or regression objectives. We facilitate knowledge transfer in this setting: tasks → graph, graph → tasks, and task-1 → task-2, via task-specific residual functions that specialize the node embeddings for each task, motivated by domain-shift theory. We show 5% relative gains over state-of-the-art knowledge graph embedding baselines on two public multi-task datasets and demonstrate significant potential for cross-task learning.
AB - In this paper, we propose MuTATE, a Multi-Task Augmented approach to learn Transferable Embeddings of knowledge graphs. Previous knowledge graph representation techniques either employ task-agnostic geometric hypotheses to learn informative node embeddings or integrate task-specific learning objectives like attribute prediction. In contrast, our framework unifies multiple co-dependent learning objectives with knowledge graph enrichment. We define co-dependence as multiple tasks that extract covariant distributions of entities and their relationships for prediction or regression objectives. We facilitate knowledge transfer in this setting: tasks → graph, graph → tasks, and task-1 → task-2, via task-specific residual functions that specialize the node embeddings for each task, motivated by domain-shift theory. We show 5% relative gains over state-of-the-art knowledge graph embedding baselines on two public multi-task datasets and demonstrate significant potential for cross-task learning.
KW - Graph neural networks
KW - Knowledge graph embedding
KW - Knowledge graphs
KW - Multi-task learning
KW - Residual learning
UR - http://www.scopus.com/inward/record.url?scp=85130347193&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85130347193&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-05933-9_21
DO - 10.1007/978-3-031-05933-9_21
M3 - Conference contribution
AN - SCOPUS:85130347193
SN - 9783031059322
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 262
EP - 275
BT - Advances in Knowledge Discovery and Data Mining - 26th Pacific-Asia Conference, PAKDD 2022, Proceedings
A2 - Gama, João
A2 - Li, Tianrui
A2 - Yu, Yang
A2 - Chen, Enhong
A2 - Zheng, Yu
A2 - Teng, Fei
PB - Springer
T2 - 26th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2022
Y2 - 16 May 2022 through 19 May 2022
ER -