TY - JOUR
T1 - Visual Servoing for Pose Control of Soft Continuum Arm in a Structured Environment
AU - Kamtikar, Shivani
AU - Marri, Samhita
AU - Walt, Benjamin
AU - Uppalapati, Naveen Kumar
AU - Krishnan, Girish
AU - Chowdhary, Girish
N1 - This work was supported in part by AIFARMS National AI Institute in Agriculture through Agriculture and Food Research Initiative (AFRI) under Grant 2020-67021-32799/Project, in part by the USDA National Institute of Food and Agriculture under Accession Number 1024178, in part by USDA-NSF NRI under Grant USDA 2019-67021-28989, in part by NSF under Grant 1830343, and in part by joint NSF-USDA COALESCE under Grant USDA 2021-67021-34418.
PY - 2022/4/1
Y1 - 2022/4/1
N2 - For soft continuum arms, visual servoing is a popular control strategy that relies on visual feedback to close the control loop. However, robust visual servoing is challenging as it requires reliable feature extraction from the image, accurate control models, and sensors to perceive the shape of the arm, all of which can be hard to implement in a soft robot. This letter circumvents these challenges by presenting a deep neural network-based method to perform smooth and robust 3D positioning tasks on a soft arm by visual servoing, using a camera mounted at the distal end of the arm. A convolutional neural network is trained to predict the actuations required to achieve the desired pose in a structured environment. Integrated and modular approaches for estimating the actuations from the image are proposed and experimentally compared. A proportional control law is implemented to reduce the error between the desired and current image as seen by the camera. The model, together with the proportional feedback control, makes the described approach robust to several variations such as new targets, lighting, loads, and diminution of the soft arm. Furthermore, the model lends itself to being transferred to a new environment with minimal effort.
AB - For soft continuum arms, visual servoing is a popular control strategy that relies on visual feedback to close the control loop. However, robust visual servoing is challenging as it requires reliable feature extraction from the image, accurate control models, and sensors to perceive the shape of the arm, all of which can be hard to implement in a soft robot. This letter circumvents these challenges by presenting a deep neural network-based method to perform smooth and robust 3D positioning tasks on a soft arm by visual servoing, using a camera mounted at the distal end of the arm. A convolutional neural network is trained to predict the actuations required to achieve the desired pose in a structured environment. Integrated and modular approaches for estimating the actuations from the image are proposed and experimentally compared. A proportional control law is implemented to reduce the error between the desired and current image as seen by the camera. The model, together with the proportional feedback control, makes the described approach robust to several variations such as new targets, lighting, loads, and diminution of the soft arm. Furthermore, the model lends itself to being transferred to a new environment with minimal effort.
KW - Modeling, control, and learning for soft robots
KW - Soft robot applications
KW - Visual servoing
UR - http://www.scopus.com/inward/record.url?scp=85125699834&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85125699834&partnerID=8YFLogxK
U2 - 10.1109/LRA.2022.3155821
DO - 10.1109/LRA.2022.3155821
M3 - Article
AN - SCOPUS:85125699834
SN - 2377-3766
VL - 7
SP - 5504
EP - 5511
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
ER -