TY - GEN
T1 - Faking Fake News for Real Fake News Detection: Propaganda-Loaded Training Data Generation
T2 - 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
AU - Huang, Kung-Hsiang
AU - McKeown, Kathleen
AU - Nakov, Preslav
AU - Choi, Yejin
AU - Ji, Heng
N1 - This research is based upon work supported by U.S. DARPA SemaFor Program No. HR001120C0123 and DARPA MIPs Program No. HR00112290105. The views and the conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and to distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
PY - 2023
Y1 - 2023
N2 - Despite recent advances in detecting fake news generated by neural models, their results are not readily applicable to effective detection of human-written disinformation. What limits the successful transfer between them is the sizable gap between machine-generated fake news and human-authored ones, including the notable differences in terms of style and underlying intent. With this in mind, we propose a novel framework for generating training examples that are informed by the known styles and strategies of human-authored propaganda. Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles, while also incorporating propaganda techniques, such as appeal to authority and loaded language. In particular, we create a new training dataset, PROPANEWS, with 2,256 examples, which we release for future use. Our experimental results show that fake news detectors trained on PROPANEWS are better at detecting human-written disinformation by 3.62-7.69% F1 score on two public datasets.
UR - https://www.scopus.com/pages/publications/85174067254
U2 - 10.18653/v1/2023.acl-long.815
DO - 10.18653/v1/2023.acl-long.815
M3 - Conference contribution
AN - SCOPUS:85174067254
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 14571
EP - 14589
BT - Long Papers
PB - Association for Computational Linguistics (ACL)
Y2 - 9 July 2023 through 14 July 2023
ER -