Learning Topic Models: Identifiability and Finite-Sample Analysis

Yinyin Chen, Shishuang He, Yun Yang, Feng Liang

Research output: Contribution to journal › Article › peer-review

Abstract

Topic models provide a useful text-mining tool for learning, extracting, and discovering latent structures in large text corpora. Although a plethora of methods have been proposed for topic modeling, lacking in the literature is a formal theoretical investigation of the statistical identifiability and accuracy of latent topic estimation. In this article, we propose a maximum likelihood estimator (MLE) of latent topics based on a specific integrated likelihood that is naturally connected to the concept, in computational geometry, of volume minimization. Our theory introduces a new set of geometric conditions for topic model identifiability, conditions that are weaker than conventional separability conditions, which typically rely on the existence of pure topic documents or of anchor words. Weaker conditions allow a wider and thus potentially more fruitful investigation. We conduct finite-sample error analysis for the proposed estimator and discuss connections between our results and those of previous investigations. We conclude with empirical studies employing both simulated and real datasets. Supplementary materials for this article are available online.
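The connection to volume minimization described above can be illustrated geometrically: documents' word-frequency vectors lie in the simplex whose vertices are the topic-word distributions, and the true topic simplex is the smallest such simplex containing the data. The following toy Python sketch (not the authors' estimator; the topic matrix, vocabulary size, and inflation factor are all illustrative assumptions) shows the volume objective on synthetic data.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: K = 3 topics over a V = 4 word vocabulary.
K, V, n_docs = 3, 4, 200
topics = np.array([[0.7, 0.1, 0.1, 0.1],
                   [0.1, 0.7, 0.1, 0.1],
                   [0.1, 0.1, 0.7, 0.1]])  # rows are topic-word distributions

# Documents are convex combinations of the topics (Dirichlet topic weights),
# so every document lies inside the simplex spanned by the topic vectors.
weights = rng.dirichlet(np.ones(K), size=n_docs)
docs = weights @ topics

def simplex_volume(vertices):
    """(K-1)-dimensional volume of the simplex with the given vertex rows,
    computed from the Gram determinant of its edge vectors."""
    edges = vertices[1:] - vertices[0]
    gram = edges @ edges.T
    k = edges.shape[0]
    return math.sqrt(max(np.linalg.det(gram), 0.0)) / math.factorial(k)

# An inflated simplex (vertices pushed away from the centroid) still
# contains all documents but has strictly larger volume, so a
# volume-minimizing criterion prefers the true topic simplex.
centroid = topics.mean(axis=0)
inflated = centroid + 1.5 * (topics - centroid)

print(simplex_volume(topics))    # volume of the true topic simplex
print(simplex_volume(inflated))  # larger, by a factor of 1.5**(K-1)
```

Scaling every edge by 1.5 multiplies the (K-1)-dimensional volume by 1.5² = 2.25, so the inflated candidate is penalized even though it also explains the data, which is the geometric intuition behind the estimator.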

Original language: English (US)
Pages (from-to): 2860-2875
Number of pages: 16
Journal: Journal of the American Statistical Association
Volume: 118
Issue number: 544
Early online date: Jul 19, 2022
DOIs
State: Published - 2023

Keywords

  • Finite-sample analysis
  • Identifiability
  • Maximum likelihood
  • Sufficiently scattered
  • Topic models
  • Volume minimization

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty
