TY - JOUR
T1 - Seed-Guided Fine-Grained Entity Typing in Science and Engineering Domains
AU - Zhang, Yu
AU - Zhang, Yunyi
AU - Shen, Yanzhen
AU - Deng, Yu
AU - Popa, Lucian
AU - Shwartz, Larisa
AU - Zhai, ChengXiang
AU - Han, Jiawei
N1 - Publisher Copyright:
© 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2024/3/25
Y1 - 2024/3/25
AB - Accurately typing entity mentions from text segments is a fundamental task for various natural language processing applications. Many previous approaches rely on massive human-annotated data to perform entity typing. Nevertheless, collecting such data in highly specialized science and engineering domains (e.g., software engineering and security) can be time-consuming and costly, not to mention the domain gaps between training and inference data if the model needs to be applied to confidential datasets. In this paper, we study the task of seed-guided fine-grained entity typing in science and engineering domains, which takes the name and a few seed entities for each entity type as the only supervision and aims to classify new entity mentions into both seen and unseen types (i.e., those without seed entities). To solve this problem, we propose SEType, which first enriches the weak supervision by finding more entities for each seen type from an unlabeled corpus using the contextualized representations of pre-trained language models. It then matches the enriched entities to unlabeled text to get pseudo-labeled samples and trains a textual entailment model that can make inferences for both seen and unseen types. Extensive experiments on two datasets covering four domains demonstrate the effectiveness of SEType in comparison with various baselines. Code and data are available at: https://github.com/yuzhimanhua/SEType.
UR - http://www.scopus.com/inward/record.url?scp=85189650495&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85189650495&partnerID=8YFLogxK
U2 - 10.1609/aaai.v38i17.29933
DO - 10.1609/aaai.v38i17.29933
M3 - Conference article
AN - SCOPUS:85189650495
SN - 2159-5399
VL - 38
SP - 19606
EP - 19614
JO - Proceedings of the AAAI Conference on Artificial Intelligence
JF - Proceedings of the AAAI Conference on Artificial Intelligence
IS - 17
T2 - 38th AAAI Conference on Artificial Intelligence, AAAI 2024
Y2 - 20 February 2024 through 27 February 2024
ER -