Building explainable artificial intelligence systems

Mark G. Core, H. C. Lane, Michael Van Lent, Dave Gomboc, Steve Solomon, Milton Rosenberg

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

As artificial intelligence (AI) systems and behavior models in military simulations grow increasingly complex, it becomes correspondingly difficult for users to understand the activities of computer-controlled entities. Prototype explanation systems have been added to simulators, but their designers have not heeded the lessons learned from earlier work on explaining expert-system behavior. These new explanation systems are neither modular nor portable; each is tied to a particular AI system. In this paper, we present a modular, generic architecture for explaining the behavior of simulated entities. We describe its application to Virtual Humans, a simulation designed to teach soft skills such as negotiation and cultural awareness.
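To illustrate the kind of decoupling the abstract describes, the Python sketch below shows one way an explanation component could consume a generic log of entity decisions rather than hooking into a particular AI engine's internals. This is a minimal sketch of the general idea, not the paper's implementation; all names here (ExplainableEvent, ExplanationComponent, why, and the example scenario) are hypothetical.

from dataclasses import dataclass, field
from typing import Any

@dataclass
class ExplainableEvent:
    """Generic record of one decision by a simulated entity.
    Hypothetical schema: the AI system only emits these records,
    so the explainer never depends on the AI system itself."""
    entity: str
    action: str
    goal: str                                   # goal the action served
    preconditions: dict[str, Any] = field(default_factory=dict)

class ExplanationComponent:
    """AI-system-agnostic explainer that answers questions from an
    event log, making it reusable across different simulators."""

    def __init__(self) -> None:
        self._log: list[ExplainableEvent] = []

    def record(self, event: ExplainableEvent) -> None:
        self._log.append(event)

    def why(self, entity: str, action: str) -> str:
        """Answer 'why did <entity> do <action>?' from the log."""
        for ev in reversed(self._log):
            if ev.entity == entity and ev.action == action:
                facts = ", ".join(f"{k}={v}" for k, v in ev.preconditions.items())
                return (f"{entity} performed '{action}' in service of the goal "
                        f"'{ev.goal}', given that {facts}.")
        return f"No record of {entity} performing '{action}'."

# Usage: any AI system (rule-based, planner, behavior tree) can feed
# the same explainer, which is what makes the component portable.
explainer = ExplanationComponent()
explainer.record(ExplainableEvent(
    entity="platoon_leader",
    action="halt_convoy",
    goal="avoid ambush",
    preconditions={"intel_report": "possible IED ahead", "visibility": "low"},
))
print(explainer.why("platoon_leader", "halt_convoy"))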

Original language: English (US)
Title of host publication: Proceedings of the 21st National Conference on Artificial Intelligence and the 18th Innovative Applications of Artificial Intelligence Conference, AAAI-06/IAAI-06
Pages: 1766-1773
Number of pages: 8
State: Published - Nov 13 2006
Externally published: Yes
Event: 21st National Conference on Artificial Intelligence and the 18th Innovative Applications of Artificial Intelligence Conference, AAAI-06/IAAI-06 - Boston, MA, United States
Duration: Jul 16 2006 - Jul 20 2006

Publication series

Name: Proceedings of the National Conference on Artificial Intelligence
Volume: 2

Other

Other: 21st National Conference on Artificial Intelligence and the 18th Innovative Applications of Artificial Intelligence Conference, AAAI-06/IAAI-06
Country/Territory: United States
City: Boston, MA
Period: 7/16/06 - 7/20/06

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
