TY - GEN
T1 - Open loop position control of soft continuum arm using deep reinforcement learning
AU - Satheeshbabu, Sreeshankar
AU - Uppalapati, Naveen Kumar
AU - Chowdhary, Girish
AU - Krishnan, Girish
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/5
Y1 - 2019/5
N2 - Soft robots undergo large nonlinear spatial deformations due to both inherent actuation and external loading. The physics underlying these deformations is complex, and often requires intricate analytical and numerical models. The complexity of these models may render traditional model-based control difficult and unsuitable. Model-free methods offer an alternative for analyzing the behavior of such complex systems without the need for elaborate modeling techniques. In this paper, we present a model-free approach for open-loop position control of a soft spatial continuum arm, based on deep reinforcement learning. The continuum arm is pneumatically actuated and attains a spatial workspace by a combination of unidirectional bending and bidirectional torsional deformation. We use Deep Q-Learning with experience replay to train the system in simulation. The efficacy and robustness of the control policy obtained from the system are validated both in simulation and on the continuum arm prototype under varying external loading conditions.
AB - Soft robots undergo large nonlinear spatial deformations due to both inherent actuation and external loading. The physics underlying these deformations is complex, and often requires intricate analytical and numerical models. The complexity of these models may render traditional model-based control difficult and unsuitable. Model-free methods offer an alternative for analyzing the behavior of such complex systems without the need for elaborate modeling techniques. In this paper, we present a model-free approach for open-loop position control of a soft spatial continuum arm, based on deep reinforcement learning. The continuum arm is pneumatically actuated and attains a spatial workspace by a combination of unidirectional bending and bidirectional torsional deformation. We use Deep Q-Learning with experience replay to train the system in simulation. The efficacy and robustness of the control policy obtained from the system are validated both in simulation and on the continuum arm prototype under varying external loading conditions.
UR - http://www.scopus.com/inward/record.url?scp=85071495649&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85071495649&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2019.8793653
DO - 10.1109/ICRA.2019.8793653
M3 - Conference contribution
AN - SCOPUS:85071495649
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 5133
EP - 5139
BT - 2019 International Conference on Robotics and Automation, ICRA 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 International Conference on Robotics and Automation, ICRA 2019
Y2 - 20 May 2019 through 24 May 2019
ER -