TY - GEN
T1 - Distributionally Robust Imitation Learning
T2 - AIAA Science and Technology Forum and Exposition, AIAA SciTech Forum 2026
AU - Gahlawat, Aditya
AU - Aboudonia, Ahmed
AU - Banik, Sandeep
AU - Hovakimyan, Naira
AU - Matni, Nikolai
AU - Ames, Aaron D.
AU - Zardini, Gioele
AU - Speranzon, Alberto
N1 - This work is supported by the Air Force Office of Scientific Research (AFOSR) under Grant FA9550-21-1-0411, the National Aeronautics and Space Administration (NASA) under Grants 80NSSC22M0070 and 80NSSC20M0229, and by the National Science Foundation (NSF) under Grants CMMI 2135925 and IIS 2331878.
PY - 2026
Y1 - 2026
N2 - Imitation learning (IL) enables autonomous behavior by learning from expert demonstrations. While more sample-efficient than alternatives such as reinforcement learning, IL is sensitive to compounding errors induced by distribution shifts. There are two significant sources of distribution shift when deploying IL-based feedback laws on systems: shifts caused by policy error, and shifts caused by exogenous disturbances and endogenous model errors arising from incomplete learning. Our previously developed approaches, Taylor Series Imitation Learning (TaSIL) and L1-Distributionally Robust Adaptive Control (L1-DRAC), address the challenge of distribution shifts in complementary ways. While TaSIL offers robustness against policy error-induced distribution shifts, L1-DRAC offers robustness against distribution shifts due to aleatoric and epistemic uncertainties. To enable certifiable IL for learned and/or uncertain dynamical systems, we formulate the Distributionally Robust Imitation Policy (DRIP) architecture, a Layered Control Architecture (LCA) that integrates TaSIL and L1-DRAC. By judiciously designing individual layer-centric input and output requirements, we show how we can guarantee certificates for the entire control pipeline. Our solution paves the way for designing fully certifiable autonomy pipelines by integrating learning-based components, such as perception, with certifiable model-based decision-making through the proposed LCA approach.
AB - Imitation learning (IL) enables autonomous behavior by learning from expert demonstrations. While more sample-efficient than alternatives such as reinforcement learning, IL is sensitive to compounding errors induced by distribution shifts. There are two significant sources of distribution shift when deploying IL-based feedback laws on systems: shifts caused by policy error, and shifts caused by exogenous disturbances and endogenous model errors arising from incomplete learning. Our previously developed approaches, Taylor Series Imitation Learning (TaSIL) and L1-Distributionally Robust Adaptive Control (L1-DRAC), address the challenge of distribution shifts in complementary ways. While TaSIL offers robustness against policy error-induced distribution shifts, L1-DRAC offers robustness against distribution shifts due to aleatoric and epistemic uncertainties. To enable certifiable IL for learned and/or uncertain dynamical systems, we formulate the Distributionally Robust Imitation Policy (DRIP) architecture, a Layered Control Architecture (LCA) that integrates TaSIL and L1-DRAC. By judiciously designing individual layer-centric input and output requirements, we show how we can guarantee certificates for the entire control pipeline. Our solution paves the way for designing fully certifiable autonomy pipelines by integrating learning-based components, such as perception, with certifiable model-based decision-making through the proposed LCA approach.
UR - https://www.scopus.com/pages/publications/105031187324
UR - https://www.scopus.com/pages/publications/105031187324#tab=citedBy
U2 - 10.2514/6.2026-2169
DO - 10.2514/6.2026-2169
M3 - Conference contribution
AN - SCOPUS:105031187324
SN - 9781624107658
T3 - AIAA Science and Technology Forum and Exposition, AIAA SciTech Forum 2026
BT - AIAA Science and Technology Forum and Exposition, AIAA SciTech Forum 2026
PB - American Institute of Aeronautics and Astronautics Inc, AIAA
Y2 - 12 January 2026 through 16 January 2026
ER -