TY - GEN
T1 - Personalized Jargon Identification for Enhanced Interdisciplinary Communication
AU - Guo, Yue
AU - Chang, Joseph Chee
AU - Antoniak, Maria
AU - Bransom, Erin
AU - Cohen, Trevor
AU - Wang, Lucy Lu
AU - August, Tal
N1 - We thank our participants and annotators, as well as Bailey Kuehl, for her valuable suggestions on annotation. We also thank our anonymous reviewers, the Semantic Scholar research team, and Raymond Fok, who read drafts or provided feedback and assistance. This work was performed during an internship at AI2 and was supported in part by US National Library of Medicine [grant number R21LM013934].
PY - 2024
AB - Scientific jargon can confuse researchers when they read materials from other domains. Identifying and translating jargon for individual researchers could speed up research, but current methods of jargon identification mainly use corpus-level familiarity indicators rather than modeling researcher-specific needs, which can vary greatly based on each researcher’s background. We collect a dataset of over 10K term familiarity annotations from 11 computer science researchers for terms drawn from 100 paper abstracts. Analysis of this data reveals that jargon familiarity and information needs vary widely across annotators, even within the same subdomain (e.g., NLP). We investigate features representing domain, subdomain, and individual knowledge to predict individual jargon familiarity. We compare supervised and prompt-based approaches, finding that prompt-based methods using information about the individual researcher (e.g., personal publications, self-defined subfield of research) yield the highest accuracy, though the task remains difficult and supervised approaches have lower false positive rates. This research offers insights into features and methods for the novel task of integrating personal data into scientific jargon identification.
UR - http://www.scopus.com/inward/record.url?scp=85198846878&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85198846878&partnerID=8YFLogxK
DO - 10.18653/v1/2024.naacl-long.255
M3 - Conference contribution
AN - SCOPUS:85198846878
T3 - Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024
SP - 4535
EP - 4550
BT - Long Papers
A2 - Duh, Kevin
A2 - Gomez, Helena
A2 - Bethard, Steven
PB - Association for Computational Linguistics (ACL)
T2 - 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024
Y2 - 16 June 2024 through 21 June 2024
ER -