TY - JOUR

T1 - Data-driven inverse optimization with imperfect information

AU - Mohajerin Esfahani, Peyman

AU - Shafieezadeh-Abadeh, Soroosh

AU - Hanasusanto, Grani A.

AU - Kuhn, Daniel

N1 - Funding Information:
Acknowledgements This work was supported by the Swiss National Science Foundation grant BSCGI0_157733.
Publisher Copyright:
© 2017, Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society.

PY - 2018/1/1

Y1 - 2018/1/1

N2 - In data-driven inverse optimization an observer aims to learn the preferences of an agent who solves a parametric optimization problem depending on an exogenous signal. Thus, the observer seeks the agent’s objective function that best explains a historical sequence of signals and corresponding optimal actions. We focus here on situations where the observer has imperfect information, that is, where the agent’s true objective function is not contained in the search space of candidate objectives, where the agent suffers from bounded rationality or implementation errors, or where the observed signal-response pairs are corrupted by measurement noise. We formalize this inverse optimization problem as a distributionally robust program minimizing the worst-case risk that the predicted decision (i.e., the decision implied by a particular candidate objective) differs from the agent’s actual response to a random signal. We show that our framework offers rigorous out-of-sample guarantees for different loss functions used to measure prediction errors and that the emerging inverse optimization problems can be exactly reformulated as (or safely approximated by) tractable convex programs when a new suboptimality loss function is used. We show through extensive numerical tests that the proposed distributionally robust approach to inverse optimization often attains better out-of-sample performance than the state-of-the-art approaches.

AB - In data-driven inverse optimization an observer aims to learn the preferences of an agent who solves a parametric optimization problem depending on an exogenous signal. Thus, the observer seeks the agent’s objective function that best explains a historical sequence of signals and corresponding optimal actions. We focus here on situations where the observer has imperfect information, that is, where the agent’s true objective function is not contained in the search space of candidate objectives, where the agent suffers from bounded rationality or implementation errors, or where the observed signal-response pairs are corrupted by measurement noise. We formalize this inverse optimization problem as a distributionally robust program minimizing the worst-case risk that the predicted decision (i.e., the decision implied by a particular candidate objective) differs from the agent’s actual response to a random signal. We show that our framework offers rigorous out-of-sample guarantees for different loss functions used to measure prediction errors and that the emerging inverse optimization problems can be exactly reformulated as (or safely approximated by) tractable convex programs when a new suboptimality loss function is used. We show through extensive numerical tests that the proposed distributionally robust approach to inverse optimization often attains better out-of-sample performance than the state-of-the-art approaches.

KW - 90C25 Convex programming

KW - 90C47 Minimax problems

KW - 90C15 Stochastic programming

UR - http://www.scopus.com/inward/record.url?scp=85037173506&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85037173506&partnerID=8YFLogxK

U2 - 10.1007/s10107-017-1216-6

DO - 10.1007/s10107-017-1216-6

M3 - Article

AN - SCOPUS:85037173506

SN - 0025-5610

VL - 167

SP - 191

EP - 234

JO - Mathematical Programming

JF - Mathematical Programming

IS - 1

ER -