TY - GEN
T1 - Evaluating Evaluation Metrics
T2 - 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023
AU - Xiao, Ziang
AU - Zhang, Susu
AU - Lai, Vivian
AU - Liao, Q. Vera
N1 - Publisher Copyright:
© 2023 Association for Computational Linguistics.
PY - 2023
Y1 - 2023
N2 - We address a fundamental challenge in Natural Language Generation (NLG) model evaluation: the design and evaluation of evaluation metrics. Recognizing the limitations of existing automatic metrics and the noise in how current human evaluation is conducted, we propose METRICEVAL, a framework informed by measurement theory, the foundation of educational test design, for conceptualizing and evaluating the reliability and validity of NLG evaluation metrics. The framework formalizes the sources of measurement error and offers statistical tools for evaluating evaluation metrics based on empirical data. With our framework, one can quantify the uncertainty of the metrics to better interpret the results. To exemplify the use of our framework in practice, we analyzed a set of evaluation metrics for summarization and identified issues related to a conflated validity structure in human-eval and reliability in LLM-based metrics. Through METRICEVAL, we aim to promote the design, evaluation, and interpretation of valid and reliable metrics to advance robust and effective NLG models.
UR - http://www.scopus.com/inward/record.url?scp=85184824102&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85184824102&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85184824102
T3 - EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings
SP - 10967
EP - 10982
BT - EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings
A2 - Bouamor, Houda
A2 - Pino, Juan
A2 - Bali, Kalika
PB - Association for Computational Linguistics (ACL)
Y2 - 6 December 2023 through 10 December 2023
ER -