Adapting sequence to sequence models for text normalization in social media

Ismini Lourentzou, Kabir Manghnani, Chengxiang Zhai

Research output: Contribution to conference › Paper

Abstract

Social media offer an abundant source of valuable raw data; however, informal writing can quickly become a bottleneck for many natural language processing (NLP) tasks. Off-the-shelf tools are usually trained on formal text and cannot explicitly handle noise found in short online posts. Moreover, the variety of frequently occurring linguistic variations presents several challenges, even for humans, who might not be able to comprehend the meaning of such posts, especially when they contain slang and abbreviations. Text normalization aims to transform online user-generated text to a canonical form. Current text normalization systems rely on string or phonetic similarity and on classification models that work in a local fashion. We argue that processing contextual information is crucial for this task and introduce a social media text normalization hybrid word-character attention-based encoder-decoder model that can serve as a pre-processing step for NLP applications to adapt to noisy text in social media. Our character-based component is trained on synthetic adversarial examples that are designed to capture errors commonly found in online user-generated text. Experiments show that our model surpasses neural architectures designed for text normalization and achieves comparable performance with state-of-the-art related work.
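To make the abstract's idea of synthetic adversarial training examples more concrete, the sketch below generates (noisy, clean) word pairs with a few hypothetical noising operations (vowel dropping, expressive character repetition, adjacent-character swaps). These specific operations and function names are illustrative assumptions; the paper's actual noise model is not reproduced here.

```python
import random

# Hypothetical noising operations for building (noisy, clean) training pairs,
# mimicking errors common in social-media text. These are illustrative
# assumptions, not the authors' actual noise model.

def drop_vowels(word: str) -> str:
    """Abbreviation-style noise: 'tomorrow' -> 'tmrrw'."""
    if len(word) < 2:
        return word
    return word[0] + "".join(c for c in word[1:] if c.lower() not in "aeiou")

def repeat_char(word: str, rng: random.Random) -> str:
    """Expressive lengthening: 'so' -> 'sooo'."""
    i = rng.randrange(len(word))
    return word[:i] + word[i] * rng.randint(2, 4) + word[i + 1:]

def swap_adjacent(word: str, rng: random.Random) -> str:
    """Keyboard-typo-style transposition: 'the' -> 'teh'."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def noisify(word: str, rng: random.Random) -> str:
    """Apply one randomly chosen noise operation to a clean word."""
    op = rng.choice([
        drop_vowels,
        lambda w: repeat_char(w, rng),
        lambda w: swap_adjacent(w, rng),
    ])
    return op(word)

if __name__ == "__main__":
    rng = random.Random(0)
    clean_words = ["tomorrow", "seriously", "the", "please"]
    # Each pair is (synthetic noisy form, canonical form) -- the kind of
    # supervision a character-level normalization component could train on.
    pairs = [(noisify(w, rng), w) for w in clean_words]
    for noisy, clean in pairs:
        print(f"{noisy} -> {clean}")
```

In this sketch, the clean word serves as the decoding target and its noised form as the encoder input, which is the general shape of supervision a character-level encoder-decoder normalizer needs.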

Original language: English (US)
Pages: 335-345
Number of pages: 11
State: Published - Jan 1 2019
Event: 13th International Conference on Web and Social Media, ICWSM 2019 - Munich, Germany
Duration: Jun 11 2019 - Jun 14 2019

Conference

Conference: 13th International Conference on Web and Social Media, ICWSM 2019
Country: Germany
City: Munich
Period: 6/11/19 - 6/14/19


ASJC Scopus subject areas

  • Computer Networks and Communications

Cite this

Lourentzou, I., Manghnani, K., & Zhai, C. (2019). Adapting sequence to sequence models for text normalization in social media. 335-345. Paper presented at 13th International Conference on Web and Social Media, ICWSM 2019, Munich, Germany.


Scopus: SCOPUS:85070367348
Record: http://www.scopus.com/inward/record.url?scp=85070367348&partnerID=8YFLogxK
Cited by: http://www.scopus.com/inward/citedby.url?scp=85070367348&partnerID=8YFLogxK