Still no lie detector for language models: probing empirical and conceptual roadblocks

Benjamin A. Levinstein, Daniel A. Herrmann

Research output: Contribution to journal › Article › peer-review

Abstract

We consider the questions of whether large language models (LLMs) have beliefs and, if they do, how we might measure them. First, we ask whether we should expect LLMs to have something like beliefs in the first place. We examine some recent arguments aiming to show that LLMs cannot have beliefs, and show that these arguments are misguided. We then provide a more productive framing of questions surrounding the status of beliefs in LLMs and highlight the empirical nature of the problem. With this lesson in hand, we evaluate two existing approaches for measuring the beliefs of LLMs, one due to Azaria and Mitchell (The Internal State of an LLM Knows When It's Lying, 2023) and the other to Burns et al. (Discovering Latent Knowledge in Language Models Without Supervision, 2022). Moving from the armchair to the desk chair, we provide empirical results showing that these methods fail to generalize in very basic ways. We then argue that, even if LLMs have beliefs, these methods are unlikely to succeed for conceptual reasons. Thus, there is still no lie detector for LLMs. We conclude by suggesting some concrete paths for future work.
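
To make the probing setup the abstract refers to more concrete, below is a minimal, hypothetical sketch (not the authors' code, and not the CCS method itself) of the general idea behind activation probes: train a simple linear classifier on an LLM's hidden activations to predict whether an input statement is true or false. Real experiments would extract activations from an actual model; here synthetic vectors stand in for them, and all dimensions and class separations are arbitrary assumptions.

```python
# Hypothetical sketch of a truth probe on LLM activations.
# Synthetic vectors stand in for real hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for hidden-state vectors of true and false statements
# (hidden_dim and the class separation are arbitrary assumptions).
hidden_dim, n_per_class = 64, 500
true_acts = rng.normal(loc=0.3, scale=1.0, size=(n_per_class, hidden_dim))
false_acts = rng.normal(loc=-0.3, scale=1.0, size=(n_per_class, hidden_dim))

X = np.vstack([true_acts, false_acts])
y = np.concatenate([np.ones(n_per_class), np.zeros(n_per_class)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The "probe": a linear classifier over activations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"in-distribution probe accuracy: {probe.score(X_test, y_test):.2f}")
```

The paper's empirical point is that probes of roughly this kind can score well in-distribution yet fail to generalize to basic variations of the statements they were trained on, which a toy setup like this cannot by itself reveal.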

Original language: English (US)
Journal: Philosophical Studies
State: Accepted/In press - 2024

Keywords

  • CCS
  • Interpretability
  • Large language models
  • Probes

ASJC Scopus subject areas

  • Philosophy
