TY - GEN
T1 - ShapeMap 3-D: Efficient shape mapping through dense touch and vision
T2 - 39th IEEE International Conference on Robotics and Automation, ICRA 2022
AU - Suresh, Sudharshan
AU - Si, Zilin
AU - Mangelson, Joshua G.
AU - Yuan, Wenzhen
AU - Kaess, Michael
N1 - Funding Information:
* Authors with equal contribution. 1 Sudharshan Suresh, Zilin Si, Wenzhen Yuan, and Michael Kaess are with the Robotics Institute, Carnegie Mellon University, <sudhars1, zsi, wenzheny, kaess>[email protected]. 2 Joshua G. Mangelson is with the Electrical and Computer Engineering Department, Brigham Young University, joshua_mangelson@byu.edu. This work was partially supported by the National Science Foundation under award IIS-2008279. We thank Timothy Man for sensor hardware support, and Shubham Kanitkar for help with the robot arm. Code: www.github.com/rpl-cmu/shape-map3-D Dataset: www.github.com/CMURoboTouch/YCB-Sight
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
AB - Knowledge of 3-D object shape is of great importance to robot manipulation tasks, but may not be readily available in unstructured environments. While vision is often occluded during robot-object interaction, high-resolution tactile sensors can give a dense local perspective of the object. However, tactile sensors have a limited sensing area, and the shape representation must faithfully approximate non-contact areas. In addition, a key challenge is efficiently incorporating these dense tactile measurements into a 3-D mapping framework. In this work, we propose an incremental shape mapping method using a GelSight tactile sensor and a depth camera. Local shape is recovered from tactile images via a learned model trained in simulation. Through efficient inference on a spatial factor graph informed by a Gaussian process, we build an implicit surface representation of the object. We demonstrate visuo-tactile mapping in both simulated and real-world experiments, incrementally building 3-D reconstructions of household objects.
UR - http://www.scopus.com/inward/record.url?scp=85136323790&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85136323790&partnerID=8YFLogxK
U2 - 10.1109/ICRA46639.2022.9812040
DO - 10.1109/ICRA46639.2022.9812040
M3 - Conference contribution
AN - SCOPUS:85136323790
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 7073
EP - 7080
BT - 2022 IEEE International Conference on Robotics and Automation, ICRA 2022
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 23 May 2022 through 27 May 2022
ER -