Agent-based model construction using inverse reinforcement learning

Kamwoo Lee, Mark Rucker, William T. Scherer, Peter A. Beling, Matthew S. Gerber, Hyojung Kang

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution


Agent-based modeling (ABM) assumes that the behavioral rules governing an agent's states and actions are known. However, discovering these rules is often challenging and requires deep insight into an agent's behavior. Inverse reinforcement learning (IRL) can complement ABM by providing a systematic way to find behavioral rules from data. IRL frames learning behavioral rules as a problem of recovering motivations from observed behavior and generating rules consistent with these motivations. In this paper, we propose a method to construct an agent-based model directly from data using IRL. We explain each step of the proposed method and describe challenges that may occur during implementation. Our experimental results show that the proposed method can extract rules and construct an agent-based model with rich but concise behavioral rules for agents while maintaining aggregate-level properties.
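To make the IRL framing concrete, here is a minimal illustrative sketch, not the paper's algorithm: on a tiny deterministic chain MDP, we recover a reward function that explains an expert's observed moves by naive search over one-hot candidate rewards, then take the induced optimal policy as an agent's behavioral rule. All names (`N_STATES`, `GAMMA`, `recover_reward`, etc.) are hypothetical.

```python
# Illustrative IRL sketch (assumed setup, not the authors' method):
# find a reward whose optimal policy reproduces observed expert behavior,
# then use that policy as a behavioral rule for an ABM agent.

N_STATES, GAMMA = 5, 0.9
ACTIONS = (-1, +1)  # move left / move right along the chain

def step(s, a):
    """Deterministic transition, clipped to the chain's endpoints."""
    return min(max(s + a, 0), N_STATES - 1)

def optimal_policy(reward):
    """Value iteration, then the greedy action for each state."""
    V = [0.0] * N_STATES
    for _ in range(100):
        V = [max(reward[step(s, a)] + GAMMA * V[step(s, a)] for a in ACTIONS)
             for s in range(N_STATES)]
    return [max(ACTIONS, key=lambda a: reward[step(s, a)] + GAMMA * V[step(s, a)])
            for s in range(N_STATES)]

# Observed expert behavior: the agent always moves right (toward state 4).
expert = [+1] * N_STATES

def recover_reward():
    """Naive IRL: search one-hot rewards for one consistent with the expert."""
    for g in range(N_STATES):
        reward = [1.0 if s == g else 0.0 for s in range(N_STATES)]
        if optimal_policy(reward) == expert:
            return g, reward
    return None

goal, reward = recover_reward()       # recovers the motivation (goal state)
rule = optimal_policy(reward)         # behavioral rule for the ABM agent
```

The recovered reward (a goal at the rightmost state) is the "motivation," and the greedy policy it induces is the "behavioral rule" an agent-based model would execute; real IRL methods replace the brute-force search with principled optimization over richer reward classes.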

Original language: English (US)
Title of host publication: 2017 Winter Simulation Conference, WSC 2017
Editors: Victor Chan
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 12
ISBN (Electronic): 9781538634288
State: Published - Jun 28 2017
Externally published: Yes
Event: 2017 Winter Simulation Conference, WSC 2017 - Las Vegas, United States
Duration: Dec 3 2017 - Dec 6 2017

Publication series

Name: Proceedings - Winter Simulation Conference
ISSN (Print): 0891-7736


Other: 2017 Winter Simulation Conference, WSC 2017
Country/Territory: United States
City: Las Vegas

ASJC Scopus subject areas

  • Software
  • Modeling and Simulation
  • Computer Science Applications


