TY - GEN
T1 - Constrained sequence-to-sequence semitic root extraction for enriching word embeddings
AU - El-Kishky, Ahmed
AU - Fu, Xingyu
AU - Addawood, Aseel
AU - Sobh, Nahil
AU - Voss, Clare
AU - Han, Jiawei
N1 - Publisher Copyright:
© ACL 2019. All rights reserved.
PY - 2019
Y1 - 2019
N2 - In this paper, we tackle the problem of "root extraction" from words in the Semitic language family. A challenge in applying natural language processing techniques to these languages is the data sparsity problem that arises from their rich internal morphology, where the substructure is inherently non-concatenative and morphemes are interdigitated in word formation. While previous automated methods have relied on human-curated rules or multiclass classification, they have not fully leveraged the various combinations of regular, sequential concatenative morphology within the words and the internal interleaving within templatic stems of roots and patterns. To address this, we propose a constrained sequence-to-sequence root extraction method. Experimental results show our constrained model outperforms a variety of methods at root extraction. Furthermore, by enriching word embeddings with resulting decompositions, we show improved results on word analogy, word similarity, and language modeling tasks.
AB - In this paper, we tackle the problem of "root extraction" from words in the Semitic language family. A challenge in applying natural language processing techniques to these languages is the data sparsity problem that arises from their rich internal morphology, where the substructure is inherently non-concatenative and morphemes are interdigitated in word formation. While previous automated methods have relied on human-curated rules or multiclass classification, they have not fully leveraged the various combinations of regular, sequential concatenative morphology within the words and the internal interleaving within templatic stems of roots and patterns. To address this, we propose a constrained sequence-to-sequence root extraction method. Experimental results show our constrained model outperforms a variety of methods at root extraction. Furthermore, by enriching word embeddings with resulting decompositions, we show improved results on word analogy, word similarity, and language modeling tasks.
UR - http://www.scopus.com/inward/record.url?scp=85120992174&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85120992174&partnerID=8YFLogxK
U2 - 10.18653/v1/W19-4610
DO - 10.18653/v1/W19-4610
M3 - Conference contribution
AN - SCOPUS:85120992174
T3 - ACL 2019 - 4th Arabic Natural Language Processing Workshop, WANLP 2019 - Proceedings of the Workshop
SP - 88
EP - 96
BT - ACL 2019 - 4th Arabic Natural Language Processing Workshop, WANLP 2019 - Proceedings of the Workshop
PB - Association for Computational Linguistics (ACL)
T2 - 4th Arabic Natural Language Processing Workshop, WANLP 2019, held at ACL 2019
Y2 - 1 August 2019
ER -