Investigating task performance of probabilistic topic models: An empirical study of PLSA and LDA

Yue Lu, Qiaozhu Mei, ChengXiang Zhai

Research output: Contribution to journal › Article › peer-review

Abstract

Probabilistic topic models have recently attracted much attention because of their successful applications in many text mining tasks, such as retrieval, summarization, categorization, and clustering. Although many existing studies have reported promising performance of these topic models, none has systematically investigated their task performance; as a result, some critical questions that affect all applications of topic models remain largely unanswered: how to choose between competing models, how multiple local maxima affect task performance, and how to set the parameters of topic models. In this paper, we address these questions through a systematic investigation of two representative probabilistic topic models, probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA), on three representative text mining tasks: document clustering, text categorization, and ad hoc retrieval. The analysis of our experimental results provides a deeper understanding of topic models and many useful insights into how to optimize their performance for these typical tasks. The task-based evaluation framework generalizes to other topic models in the PLSA or LDA families.
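Two of the questions the abstract raises, sensitivity to multiple local maxima and to parameter settings, are easy to probe in practice. The following minimal sketch (assuming gensim's LdaModel API; the toy corpus and parameter values are illustrative and not taken from the paper) fits LDA under several random restarts and topic counts and compares the per-word likelihood bound each run reaches.

    # A minimal sketch, not the paper's code: probe local maxima (vary the
    # random seed) and parameter setting (vary the number of topics) using
    # gensim. The corpus below is a toy placeholder.
    from gensim import corpora, models

    texts = [["topic", "models", "for", "retrieval"],
             ["document", "clustering", "with", "lda"],
             ["text", "categorization", "task", "performance"]]
    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]

    for num_topics in (2, 5):        # parameter setting: vary K
        for seed in (0, 1, 2):       # local maxima: vary the random restart
            lda = models.LdaModel(corpus, id2word=dictionary,
                                  num_topics=num_topics,
                                  random_state=seed, passes=10)
            # Different seeds can converge to different local maxima,
            # visible as different per-word likelihood bounds.
            print(num_topics, seed, lda.log_perplexity(corpus))

In a real study, the likelihood comparison would be made on held-out data and alongside task metrics (clustering, categorization, retrieval), which is the kind of task-based evaluation the paper carries out.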

Original language: English (US)
Pages (from-to): 178-203
Number of pages: 26
Journal: Information Retrieval
Volume: 14
Issue number: 2
DOIs
State: Published - Apr 2011

Keywords

  • Evaluation
  • Experimentation
  • LDA
  • PLSA
  • Performance
  • Topic models

ASJC Scopus subject areas

  • Information Systems
  • Library and Information Sciences
