Qualitative Controller Synthesis for Consumption Markov Decision Processes

František Blahoudek, Tomáš Brázdil, Petr Novotný, Melkior Ornik, Pranay Thangeda, Ufuk Topcu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Consumption Markov Decision Processes (CMDPs) are probabilistic decision-making models of resource-constrained systems. In a CMDP, the controller possesses a certain amount of a critical resource, such as electric power. Each action of the controller can consume some amount of the resource. Resource replenishment is only possible in special reload states, in which the resource level can be reloaded up to the full capacity of the system. The task of the controller is to prevent resource exhaustion, i.e. ensure that the available amount of the resource stays non-negative, while ensuring an additional linear-time property. We study the complexity of strategy synthesis in consumption MDPs with almost-sure Büchi objectives. We show that the problem can be solved in polynomial time. We implement our algorithm and show that it can efficiently solve CMDPs modelling real-world scenarios.
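To make the model concrete, the following is a minimal illustrative sketch of a CMDP as described in the abstract: states, actions with resource costs, probabilistic successors, reload states, and a fixed capacity. All names and the toy transition system are hypothetical; this shows only the model's semantics, not the paper's polynomial-time synthesis algorithm.

```python
import random

# Hypothetical toy CMDP. transitions[state][action] = (cost, {successor: probability}).
CAPACITY = 10
transitions = {
    "s0": {"go": (3, {"s1": 1.0})},
    "s1": {"go": (2, {"s0": 0.5, "s2": 0.5})},
    "s2": {"go": (4, {"s0": 1.0})},
}
reload_states = {"s0"}  # only here can the resource be refilled


def step(state, action, resource):
    """Apply an action: pay its cost, sample a successor, and reload to full
    capacity if the successor is a reload state. Raises on resource exhaustion."""
    cost, succ_dist = transitions[state][action]
    resource -= cost
    if resource < 0:
        raise RuntimeError("resource exhausted")
    succ = random.choices(list(succ_dist), weights=list(succ_dist.values()))[0]
    if succ in reload_states:
        resource = CAPACITY
    return succ, resource
```

A controller for such a model must choose actions so that, with probability 1, the resource never goes negative while the chosen linear-time (here, Büchi) objective is also satisfied.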

Original language: English (US)
Title of host publication: Computer Aided Verification - 32nd International Conference, CAV 2020, Proceedings
Editors: Shuvendu K. Lahiri, Chao Wang
Number of pages: 27
ISBN (Print): 9783030532901
State: Published - 2020
Event: 32nd International Conference on Computer Aided Verification, CAV 2020 - Los Angeles, United States
Duration: Jul 21 2020 – Jul 24 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12225 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 32nd International Conference on Computer Aided Verification, CAV 2020
Country: United States
City: Los Angeles

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)
