Defaults and relevance in model-based reasoning

Roni Khardon, Dan Roth

Research output: Contribution to journal › Article › peer-review

Abstract

Reasoning with model-based representations is an intuitive paradigm that has been shown to be theoretically sound and to possess some computational advantages over reasoning with formula-based representations of knowledge. This paper studies these representations and further substantiates the claim regarding their advantages. In particular, model-based representations are shown to support reasoning efficiently in the presence of varying context information, to handle fragments of Reiter's default logic efficiently, and to provide a useful way to integrate learning with reasoning. Furthermore, these results are closely related to the notion of relevance. The use of relevance information is best exemplified by the filtering process involved in the algorithm developed for reasoning within context. The relation of defaults to relevance is viewed through the notion of context, where the agent has to find plausible context information by using default rules. This view yields efficient algorithms for default reasoning. Finally, it is argued that these results support an incremental view of reasoning in a natural way, and the notion of relevance to the environment, captured by the Learning to Reason framework, is discussed.
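The core mechanism the abstract alludes to, answering queries by evaluating them against a stored set of models and filtering that set by the current context, can be sketched briefly. The Python fragment below is a minimal illustration of that general idea, not the paper's actual algorithm: the toy knowledge base, variable names, and helpers (`entails`, `filter_by_context`) are hypothetical, and the paper's efficiency results rely on carefully chosen model sets (e.g., characteristic models) rather than arbitrary ones.

```python
from typing import Callable, Dict, Iterable, List

Model = Dict[str, bool]          # a truth assignment to propositional variables
Query = Callable[[Model], bool]  # a formula, evaluated against a single model

def entails(models: Iterable[Model], query: Query) -> bool:
    """Model-based deduction: the KB (represented by its set of models)
    entails the query iff the query holds in every stored model."""
    return all(query(m) for m in models)

def filter_by_context(models: Iterable[Model], context: Model) -> List[Model]:
    """Relevance via filtering: keep only the models consistent with the
    observed context, i.e. those agreeing with it on every fixed variable."""
    return [m for m in models if all(m.get(v) == val for v, val in context.items())]

# Hypothetical toy KB over {rain, wet, umbrella}, given by its models.
kb_models = [
    {"rain": True,  "wet": True,  "umbrella": True},
    {"rain": False, "wet": False, "umbrella": False},
    {"rain": False, "wet": True,  "umbrella": False},
]

wet = lambda m: m["wet"]
print(entails(kb_models, wet))                                      # False: not entailed unconditionally
print(entails(filter_by_context(kb_models, {"rain": True}), wet))   # True: entailed within this context
```

The filtering step is where relevance enters: context information discards models that are irrelevant to the current situation before any query is evaluated, which is the intuition behind the context-reasoning algorithm described above.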

Original language: English (US)
Pages (from-to): 169-193
Number of pages: 25
Journal: Artificial Intelligence
Volume: 97
Issue number: 1-2
DOI:
State: Published - Dec 1997
Externally published: Yes

Keywords

  • Common-sense reasoning
  • Context
  • Default reasoning
  • Knowledge representation
  • Learning to reason
  • Reasoning with models

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
  • Artificial Intelligence
