TY - JOUR
T1 - Elastica
T2 - A Compliant Mechanics Environment for Soft Robotic Control
AU - Naughton, Noel
AU - Sun, Jiarui
AU - Tekinalp, Arman
AU - Parthasarathy, Tejaswin
AU - Chowdhary, Girish
AU - Gazzola, Mattia
N1  - Manuscript received October 15, 2020; accepted February 20, 2021. Date of publication March 3, 2021; date of current version March 23, 2021. This letter was recommended for publication by Associate Editor C. Duriez and Editor C. Laschi upon evaluation of the reviewers' comments. This work was supported by ONR MURI N00014-19-1-2373, NSF/USDA NRI 2.0 #2019-67021-28989; NSF EFRI #1830881, NSF CAREER #1846752, Blue Waters project (OCI-0725070, ACI-1238993) and XSEDE allocation TG-MCB190004 on TACC's Stampede2. (Corresponding author: Mattia Gazzola.) The authors are with the Grainger College of Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]).
PY - 2021/4
Y1 - 2021/4
N2  - Soft robots are notoriously hard to control. This is partly due to the scarcity of models and simulators able to capture their complex continuum mechanics, resulting in a lack of control methodologies that take full advantage of body compliance. Currently available methods are either too computationally demanding or overly simplistic in their physical assumptions, leading to a paucity of available simulation resources for developing such control schemes. To address this, we introduce Elastica, an open-source simulation environment modeling the dynamics of soft, slender rods that can bend, twist, shear, and stretch. We couple Elastica with five state-of-the-art reinforcement learning (RL) algorithms (TRPO, PPO, DDPG, TD3, and SAC). We successfully demonstrate distributed, dynamic control of a soft robotic arm in four scenarios with both large action spaces, where RL learning is difficult, and small action spaces, where the RL actor must learn to interact with its environment. Training converges in 10 million policy evaluations with near real-time evaluation of learned policies.
AB  - Soft robots are notoriously hard to control. This is partly due to the scarcity of models and simulators able to capture their complex continuum mechanics, resulting in a lack of control methodologies that take full advantage of body compliance. Currently available methods are either too computationally demanding or overly simplistic in their physical assumptions, leading to a paucity of available simulation resources for developing such control schemes. To address this, we introduce Elastica, an open-source simulation environment modeling the dynamics of soft, slender rods that can bend, twist, shear, and stretch. We couple Elastica with five state-of-the-art reinforcement learning (RL) algorithms (TRPO, PPO, DDPG, TD3, and SAC). We successfully demonstrate distributed, dynamic control of a soft robotic arm in four scenarios with both large action spaces, where RL learning is difficult, and small action spaces, where the RL actor must learn to interact with its environment. Training converges in 10 million policy evaluations with near real-time evaluation of learned policies.
KW  - Modeling, control, and learning for soft robots
KW - reinforcement learning
KW - simulation and animation
UR - http://www.scopus.com/inward/record.url?scp=85102239556&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102239556&partnerID=8YFLogxK
U2 - 10.1109/LRA.2021.3063698
DO - 10.1109/LRA.2021.3063698
M3 - Article
AN - SCOPUS:85102239556
SN - 2377-3766
VL - 6
SP - 3389
EP - 3396
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
M1 - 9369003
ER -