Reinforcement Learning Trees

Ruoqing Zhu, Donglin Zeng, Michael R. Kosorok

Research output: Contribution to journal › Article › peer-review

Abstract

In this article, we introduce a new type of tree-based method, reinforcement learning trees (RLT), which exhibits significantly improved performance over traditional methods such as random forests (Breiman 2001) in high-dimensional settings. The innovations are three-fold. First, the new method implements reinforcement learning at each selection of a splitting variable during the tree construction process. By splitting on the variable that brings the greatest future improvement in later splits, rather than choosing the one with the largest marginal effect from the immediate split, the constructed tree uses the available samples in a more efficient way. Moreover, such an approach enables linear combination cuts at little extra computational cost. Second, we propose a variable muting procedure that progressively eliminates noise variables during the construction of each individual tree. The muting procedure also takes advantage of reinforcement learning and prevents noise variables from being considered in the search for splitting rules, so that toward terminal nodes, where the sample size is small, the splitting rules are still constructed from only strong variables. Last, we investigate the asymptotic properties of the proposed method under basic assumptions and discuss the rationale in general settings. Supplementary materials for this article are available online.
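The two ideas highlighted in the abstract, choosing a splitting variable by its estimated future usefulness rather than its immediate marginal gain, and muting weak variables so that deep nodes consider only strong ones, can be illustrated with a minimal sketch. The sketch below is not the authors' algorithm or their R implementation; the function name rlt_style_split, the use of an embedded scikit-learn RandomForestRegressor as a stand-in for the "future improvement" estimate, the mute_fraction parameter, and the median split point are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rlt_style_split(X, y, active_vars, mute_fraction=0.2, random_state=0):
    """Sketch of one RLT-style node split (illustrative, not the paper's algorithm).

    An embedded forest fit on the node's data provides variable-importance
    scores that stand in for the 'future improvement' each candidate variable
    would bring; the weakest active variables are muted for descendant nodes.
    """
    # Fit an embedded model on the samples reaching this node.
    embedded = RandomForestRegressor(n_estimators=50, random_state=random_state)
    embedded.fit(X[:, active_vars], y)
    importance = embedded.feature_importances_

    # Split on the variable with the largest estimated future improvement,
    # not the one with the largest marginal reduction from the immediate split.
    best = active_vars[int(np.argmax(importance))]

    # Mute the weakest fraction of still-active variables so that toward
    # terminal nodes, where samples are few, only strong variables remain.
    n_mute = int(mute_fraction * len(active_vars))
    order = np.argsort(importance)  # ascending: weakest variables first
    muted = {active_vars[i] for i in order[:n_mute]}
    remaining = [v for v in active_vars if v not in muted]

    # Simple placeholder split point: the median of the chosen variable.
    cut = np.median(X[:, best])
    return best, cut, remaining
```

A recursive tree builder would call such a routine at each internal node, passing the surviving (non-muted) variable set down to the child nodes; linear combination cuts and the paper's precise muting schedule are not reproduced here.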

Original language: English (US)
Pages (from-to): 1770-1784
Number of pages: 15
Journal: Journal of the American Statistical Association
Volume: 110
Issue number: 512
DOIs:
State: Published - Oct 2, 2015
Externally published: Yes

Keywords

  • Consistency
  • Error bound
  • Random forests
  • Reinforcement learning
  • Trees

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty
