Abstract

Soft robots are notoriously hard to control. This is partly due to the scarcity of models and simulators able to capture their complex continuum mechanics, resulting in a lack of control methodologies that take full advantage of body compliance. Currently available methods are either too computationally demanding or overly simplistic in their physical assumptions, leading to a paucity of available simulation resources for developing such control schemes. To address this, we introduce Elastica, an open-source simulation environment modeling the dynamics of soft, slender rods that can bend, twist, shear, and stretch. We couple Elastica with five state-of-the-art reinforcement learning (RL) algorithms (TRPO, PPO, DDPG, TD3, and SAC). We successfully demonstrate distributed, dynamic control of a soft robotic arm in four scenarios with both large action spaces, where RL learning is difficult, and small action spaces, where the RL actor must learn to interact with its environment. Training converges within 10 million policy evaluations, with near real-time evaluation of learned policies.

Original language: English (US)
Article number: 9369003
Pages (from-to): 3389-3396
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 6
Issue number: 2
State: Published - Apr 2021

Keywords

  • modeling, control, and learning for soft robots
  • reinforcement learning
  • simulation and animation

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence

Title: Elastica: A Compliant Mechanics Environment for Soft Robotic Control
