Abstract
The performance of existing approaches to dialogue state tracking (DST) is often limited by the scarcity of labeled data, and inefficient use of the data that is available remains a practical yet difficult problem for the DST task. In this paper, we tackle these challenges in a self-supervised manner by introducing an auxiliary pre-training task that learns to select the correct dialogue response from a group of candidates. Moreover, we propose an attention flow mechanism augmented with a dynamically predicted soft-threshold function to better capture user intent and filter out redundant information. Extensive experiments on the multi-domain dialogue state tracking dataset MultiWOZ 2.1 demonstrate the effectiveness of the proposed method, and we further show that it adapts to zero- and few-shot settings under the proposed self-supervised framework.
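The abstract names two components: a response-selection pre-training objective and a soft-threshold attention mechanism. As a rough illustration only, here is a minimal PyTorch sketch of both ideas. The cross-entropy-over-candidates objective follows the abstract's description directly; the placement of the soft threshold on the normalized attention weights, the `tau_proj` predictor, and all tensor names are assumptions for the sketch, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftThresholdAttention(nn.Module):
    """Dot-product attention whose normalized weights are pruned by a
    dynamically predicted soft threshold, so that weak (presumably
    redundant) context positions contribute exactly zero."""

    def __init__(self, dim: int):
        super().__init__()
        # Hypothetical choice: predict the threshold from the query vector.
        self.tau_proj = nn.Linear(dim, 1)

    def forward(self, query, keys, values):
        # query: (B, D); keys, values: (B, T, D)
        scores = torch.einsum("bd,btd->bt", query, keys) / keys.size(-1) ** 0.5
        weights = F.softmax(scores, dim=-1)                        # (B, T)
        # Dynamic soft threshold, bounded above by the uniform weight 1/T,
        # so only below-average positions can be pruned away.
        tau = torch.sigmoid(self.tau_proj(query)) / keys.size(1)  # (B, 1)
        pruned = F.relu(weights - tau)                             # soft-thresholding
        weights = pruned / pruned.sum(dim=-1, keepdim=True).clamp_min(1e-8)
        return torch.einsum("bt,btd->bd", weights, values)         # (B, D)


def response_selection_loss(context_vec, candidate_vecs, gold_idx):
    """Auxiliary self-supervised objective: score every candidate response
    against the dialogue context and apply cross-entropy against the index
    of the true response."""
    # context_vec: (B, D); candidate_vecs: (B, K, D); gold_idx: (B,)
    logits = torch.einsum("bd,bkd->bk", context_vec, candidate_vecs)
    return F.cross_entropy(logits, gold_idx)


# Example usage with random tensors (batch of 2, 10 context positions,
# 4 candidate responses, hidden size 64).
attn = SoftThresholdAttention(dim=64)
ctx = attn(torch.randn(2, 64), torch.randn(2, 10, 64), torch.randn(2, 10, 64))
loss = response_selection_loss(ctx, torch.randn(2, 4, 64), torch.tensor([1, 3]))
```

Thresholding the normalized weights (rather than the raw scores) makes the filtering literal: pruned positions contribute exactly zero to the context vector instead of a small residual weight.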
Original language | English (US) |
---|---|
Pages (from-to) | 279-286 |
Number of pages | 8 |
Journal | Neurocomputing |
Volume | 440 |
State | Published - Jun 14 2021 |
Keywords
- Attention mechanism
- Dialogue state tracking
- Self-supervised learning
ASJC Scopus subject areas
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence