Using recurrent neural networks for slot filling in spoken language understanding

Grégoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, Geoffrey Zweig

Research output: Contribution to journal › Article › peer-review

Abstract

Semantic slot filling is one of the most challenging problems in spoken language understanding (SLU). In this paper, we propose to use recurrent neural networks (RNNs) for this task, and present several novel architectures designed to efficiently model past and future temporal dependencies. Specifically, we implemented and compared several important RNN architectures, including Elman, Jordan, and hybrid variants. To facilitate reproducibility, we implemented these networks with the publicly available Theano neural-network toolkit and conducted experiments on the well-known airline travel information system (ATIS) benchmark. In addition, we compared the approaches on two custom SLU data sets from the entertainment and movies domains. Our results show that the RNN-based models outperform the conditional random field (CRF) baseline by 2% in absolute error reduction on the ATIS benchmark, and improve on the state of the art by 0.5% in the entertainment domain and 6.7% in the movies domain.
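To make the Elman recurrence mentioned in the abstract concrete, here is a minimal NumPy sketch of an Elman-style RNN tagger that emits one slot label per input word. This is not the authors' Theano implementation; all dimensions, weight initializations, and the toy token sequence below are hypothetical placeholders.

```python
import numpy as np

# Minimal Elman RNN forward pass for slot filling: one slot label
# per input word. All sizes here are illustrative, not the paper's.
rng = np.random.default_rng(0)

vocab_size, emb_dim, hid_dim, n_slots = 100, 50, 100, 10

E   = rng.normal(scale=0.1, size=(vocab_size, emb_dim))  # word embeddings
W_x = rng.normal(scale=0.1, size=(emb_dim, hid_dim))     # input -> hidden
W_h = rng.normal(scale=0.1, size=(hid_dim, hid_dim))     # hidden -> hidden (recurrence)
W_o = rng.normal(scale=0.1, size=(hid_dim, n_slots))     # hidden -> slot scores

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def elman_forward(word_ids):
    """Return one slot-label distribution per word in the sentence."""
    h = np.zeros(hid_dim)
    outputs = []
    for w in word_ids:
        x = E[w]                          # embed the current word
        h = np.tanh(x @ W_x + h @ W_h)    # Elman recurrence: hidden state carries past context
        outputs.append(softmax(h @ W_o))  # per-word posterior over slot labels
    return np.array(outputs)

# Hypothetical token ids for a query like "flights from boston to denver"
probs = elman_forward([12, 7, 42, 3, 77])
print(probs.argmax(axis=1))  # predicted slot label per word
```

A Jordan-style variant would feed the previous output distribution, rather than the previous hidden state, back into the recurrence; the paper's hybrid architectures combine both kinds of feedback.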

Original language: English (US)
Article number: 6998838
Pages (from-to): 530-539
Number of pages: 10
Journal: IEEE Transactions on Audio, Speech and Language Processing
Volume: 23
Issue number: 3
DOIs
State: Published - Mar 1 2015
Externally published: Yes

Keywords

  • Recurrent neural network (RNN)
  • slot filling
  • spoken language understanding (SLU)
  • word embedding

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
  • Electrical and Electronic Engineering
