Estimating Optimal Infinite Horizon Dynamic Treatment Regimes via pT-Learning

Wenzhuo Zhou, Ruoqing Zhu, Annie Qu

Research output: Contribution to journal › Article › peer-review


Recent advances in mobile health (mHealth) technology provide an effective way to monitor individuals' health statuses and deliver just-in-time personalized interventions. However, the practical use of mHealth technology raises unique challenges for existing methodologies on learning an optimal dynamic treatment regime. Many mHealth applications involve decision-making with large numbers of intervention options and under an infinite time horizon setting, where the number of decision stages diverges to infinity. In addition, temporary medication shortages may cause optimal treatments to be unavailable, and it is often unclear which alternatives should be used. To address these challenges, we propose a Proximal Temporal consistency Learning (pT-Learning) framework to estimate an optimal regime that is adaptively adjusted between deterministic and stochastic sparse policy models. The resulting minimax estimator avoids the double sampling issue in existing algorithms. It can be further simplified and can easily incorporate off-policy data without mismatched distribution corrections. We study theoretical properties of the sparse policy and establish finite-sample bounds on the excess risk and performance error. The proposed method is provided in our proximalDTR package and is evaluated through extensive simulation studies and the OhioT1DM mHealth dataset. Supplementary materials for this article are available online.
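To illustrate the idea of a policy that adapts between deterministic and stochastic sparse behavior, the sketch below uses a sparsemax projection (Martins & Astudillo, 2016) as a stand-in. This is not the paper's pT-Learning estimator or the proximalDTR implementation; it only shows how one policy class can collapse to a point mass when one action clearly dominates, yet spread mass over a few actions when their values are comparable:

```python
def sparsemax(z):
    """Project a score vector z onto the probability simplex, producing a
    sparse distribution (sparsemax; Martins & Astudillo, 2016).

    Illustrative only: a clearly dominant action score collapses the
    policy to a deterministic point mass, while comparable scores keep
    several actions with nonzero probability (stochastic sparse policy).
    """
    z_sorted = sorted(z, reverse=True)
    cumsum, tau = 0.0, 0.0
    for k, zk in enumerate(z_sorted, start=1):
        cumsum += zk
        if 1.0 + k * zk > cumsum:            # z_k is still in the support
            tau = (cumsum - 1.0) / k         # simplex threshold
    return [max(zi - tau, 0.0) for zi in z]

# One dominant action score -> deterministic policy
print(sparsemax([2.0, 0.0, 0.0]))  # [1.0, 0.0, 0.0]
# Comparable action scores -> stochastic policy keeping all three actions
print(sparsemax([0.5, 0.4, 0.1]))
```

The returned probabilities always sum to one, and actions whose scores fall below the data-dependent threshold receive exactly zero mass, unlike softmax, which assigns every action a strictly positive probability.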

Original language: English (US)
Pages (from-to): 625-638
Number of pages: 14
Journal: Journal of the American Statistical Association
Issue number: 545
State: Published - 2024


Keywords

  • Policy optimization
  • Precision medicine
  • Reinforcement learning
  • Sparse policy

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty


