TY - GEN
T1 - Reinforcement Learning based Disease Progression Model for Alzheimer's Disease
AU - Saboo, Krishnakant V.
AU - Choudhary, Anirudh
AU - Cao, Yurui
AU - Worrell, Gregory A.
AU - Jones, David T.
AU - Iyer, Ravishankar K.
N1 - Funding Information:
This work was supported by the Mayo Clinic and Illinois Alliance Fellowship for Technology-based Healthcare Research and in part by NSF grants CNS-1337732, CNS-1624790, and CCF-2029049 and the Jump ARCHES endowment fund. We thank Saurabh Jha, Subho Banerjee, Frances Rigberg, and Prakruthi Burra for their valuable feedback.
Publisher Copyright:
© 2021 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2021
Y1 - 2021
N2 - We model Alzheimer's disease (AD) progression by combining differential equations (DEs) and reinforcement learning (RL) with domain knowledge. DEs provide relationships between some, but not all, factors relevant to AD. We assume that the missing relationships must satisfy general criteria about the working of the brain, e.g., maximizing cognition while minimizing the cost of supporting cognition. This allows us to extract the missing relationships by using RL to optimize an objective (reward) function that captures these criteria. We use our model, consisting of the DEs (as a simulator) and the trained RL agent, to predict individualized 10-year AD progression from baseline (year 0) features on synthetic and real data. The model was comparable to or better than state-of-the-art learning-based models at predicting 10-year cognition trajectories. Our interpretable model demonstrated, and provided insights into, "recovery/compensatory" processes that mitigate the effect of AD, even though those processes were not explicitly encoded in the model. Our framework combines DEs with RL for modelling AD progression and has broad applicability for understanding other neurological disorders.
AB - We model Alzheimer's disease (AD) progression by combining differential equations (DEs) and reinforcement learning (RL) with domain knowledge. DEs provide relationships between some, but not all, factors relevant to AD. We assume that the missing relationships must satisfy general criteria about the working of the brain, e.g., maximizing cognition while minimizing the cost of supporting cognition. This allows us to extract the missing relationships by using RL to optimize an objective (reward) function that captures these criteria. We use our model, consisting of the DEs (as a simulator) and the trained RL agent, to predict individualized 10-year AD progression from baseline (year 0) features on synthetic and real data. The model was comparable to or better than state-of-the-art learning-based models at predicting 10-year cognition trajectories. Our interpretable model demonstrated, and provided insights into, "recovery/compensatory" processes that mitigate the effect of AD, even though those processes were not explicitly encoded in the model. Our framework combines DEs with RL for modelling AD progression and has broad applicability for understanding other neurological disorders.
UR - http://www.scopus.com/inward/record.url?scp=85132548582&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85132548582&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85132548582
T3 - Advances in Neural Information Processing Systems
SP - 20903
EP - 20915
BT - Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
A2 - Ranzato, Marc'Aurelio
A2 - Beygelzimer, Alina
A2 - Dauphin, Yann
A2 - Liang, Percy S.
A2 - Wortman Vaughan, Jenn
PB - Neural Information Processing Systems Foundation
T2 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Y2 - 6 December 2021 through 14 December 2021
ER -
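
The sketch below is a minimal, hypothetical illustration (not the authors' code or model) of the idea described in the abstract above: known DEs act as a simulator of disease-related brain variables, and an RL-style agent supplies the missing relationships by choosing actions that maximize a reward trading off cognition against the cost of supporting it. All equations, variable names, and the trade-off weight LAMBDA are assumptions made for illustration only.

# Hypothetical DE-simulator + RL-loop sketch; the DE update, reward, and
# policy are toy stand-ins, not the model from the paper.
import numpy as np

LAMBDA = 0.5      # assumed trade-off weight between cognition and its cost
DT = 1.0          # one simulation step = one year
HORIZON = 10      # 10-year progression, as in the abstract

def de_step(state, action):
    """Toy DE update: pathology grows; brain 'capacity' declines with load."""
    pathology, capacity = state
    load = action                                        # agent-chosen cognitive load
    pathology = pathology + DT * 0.05 * pathology        # assumed exponential growth
    capacity = capacity - DT * 0.02 * pathology * load   # assumed load-dependent decline
    return np.array([pathology, max(capacity, 0.0)])

def reward(state, action):
    """Reward = cognition achieved minus cost of supporting that cognition."""
    _, capacity = state
    cognition = capacity * action       # toy cognition model
    cost = action ** 2                  # assumed quadratic effort cost
    return cognition - LAMBDA * cost

def rollout(policy, state0):
    """Simulate a 10-year trajectory from baseline (year 0) features."""
    state, total = np.array(state0, dtype=float), 0.0
    for _ in range(HORIZON):
        action = policy(state)
        total += reward(state, action)
        state = de_step(state, action)
    return total, state

# Trivial stand-in policy (one-step reward maximizer); a trained RL agent
# would replace this in the framework the abstract describes.
greedy_policy = lambda s: min(1.0, s[1] / (2 * LAMBDA))

ret, final_state = rollout(greedy_policy, state0=[0.1, 1.0])
print(f"10-year return: {ret:.3f}, final state: {final_state}")

In this toy setup the policy that maximizes the per-step reward is action = capacity / (2 * LAMBDA), clipped to a maximum load of 1.0; the full approach instead trains the agent over the whole horizon so that the learned behavior can exhibit longer-term effects such as the compensatory processes noted in the abstract.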