TY - JOUR
T1 - Rationalization for explainable NLP
T2 - a survey
AU - Gurrapu, Sai
AU - Kulkarni, Ajay
AU - Huang, Lifu
AU - Lourentzou, Ismini
AU - Batarseh, Feras A.
N1 - Publisher Copyright:
Copyright © 2023 Gurrapu, Kulkarni, Huang, Lourentzou and Batarseh.
PY - 2023
Y1 - 2023
AB - Recent advances in deep learning have improved the performance of many Natural Language Processing (NLP) tasks, such as translation, question answering, and text classification. However, this improvement comes at the expense of model explainability. Black-box models make it difficult to understand a system's internals and the process by which it arrives at an output. Numerical (LIME, Shapley) and visualization (saliency heatmap) explainability techniques are helpful; however, they are insufficient because they require specialized knowledge. These factors have led rationalization to emerge as a more accessible explainability technique in NLP. Rationalization justifies a model's output by providing a natural language explanation (rationale). Recent improvements in natural language generation have made rationalization an attractive technique because it is intuitive, human-comprehensible, and accessible to non-technical users. Since rationalization is a relatively new field, its literature remains disorganized. As the first survey of its kind, this work analyzes rationalization literature in NLP from 2007 to 2022, presenting the available methods, explainability evaluations, code, and datasets used across the NLP tasks that employ rationalization. Further, a new subfield of Explainable AI (XAI), namely Rational AI (RAI), is introduced to advance the current state of rationalization. A discussion of observed insights, challenges, and future directions points to promising research opportunities.
KW - abstractive rationale
KW - explainable NLP
KW - extractive rationale
KW - large language models
KW - natural language generation
KW - Natural Language Processing
KW - rationales
KW - rationalization
UR - http://www.scopus.com/inward/record.url?scp=85173755599&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85173755599&partnerID=8YFLogxK
U2 - 10.3389/frai.2023.1225093
DO - 10.3389/frai.2023.1225093
M3 - Review article
C2 - 37818431
AN - SCOPUS:85173755599
SN - 2624-8212
VL - 6
JO - Frontiers in Artificial Intelligence
JF - Frontiers in Artificial Intelligence
M1 - 1225093
ER -