TY - GEN
T1 - Learning an Action-Conditional Model for Haptic Texture Generation
AU - Heravi, Negin
AU - Yuan, Wenzhen
AU - Okamura, Allison M.
AU - Bohg, Jeannette
N1 - Funding Information:
N. Heravi and A. M. Okamura are with the Department of Mechanical Engineering, Stanford University. W. Yuan is with the Robotics Institute at Carnegie Mellon University. J. Bohg is with the Department of Computer Science, Stanford University. [nheravi,aokamura,bohg]@stanford.edu, [email protected]. N. Heravi was supported by the NSF Graduate Research Fellowship. This work has been partially supported by Amazon.com, Inc. through an Amazon Research Award. This article solely reflects the opinions and conclusions of its authors and not of Amazon or any entity associated with Amazon.com. Research reported in this publication was also partially supported by the 2019 Seed Grant from the Stanford Institute for Human-Centered Artificial Intelligence (HAI). We thank Katherine Kuchenbecker and Yasemin Vardar for giving us access to the textures used in the Penn Haptic Texture Toolkit (HaTT), Shaoxiong Wang for the GelSight sensor, and Heather Culbertson for answering our questions regarding HaTT.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - Rich haptic sensory feedback in response to user interactions is desirable for an effective, immersive virtual reality or teleoperation system. However, this feedback depends on material properties and user interactions in a complex, non-linear manner. Therefore, it is challenging to model the mapping from material and user interactions to haptic feedback in a way that generalizes over many variations of the user's input. Current methodologies are typically conditioned on user interactions, but require a separate model for each material. In this paper, we present a learned action-conditional model that uses data from a vision-based tactile sensor (GelSight) and the user's action as input. This model predicts an induced acceleration that could be used to provide haptic vibration feedback to a user. We trained our proposed model on a publicly available dataset (Penn Haptic Texture Toolkit) that we augmented with GelSight measurements of the different materials. We show that a unified model over all materials outperforms previous methods and generalizes to new actions and new instances of the material categories in the dataset.
AB - Rich haptic sensory feedback in response to user interactions is desirable for an effective, immersive virtual reality or teleoperation system. However, this feedback depends on material properties and user interactions in a complex, non-linear manner. Therefore, it is challenging to model the mapping from material and user interactions to haptic feedback in a way that generalizes over many variations of the user's input. Current methodologies are typically conditioned on user interactions, but require a separate model for each material. In this paper, we present a learned action-conditional model that uses data from a vision-based tactile sensor (GelSight) and the user's action as input. This model predicts an induced acceleration that could be used to provide haptic vibration feedback to a user. We trained our proposed model on a publicly available dataset (Penn Haptic Texture Toolkit) that we augmented with GelSight measurements of the different materials. We show that a unified model over all materials outperforms previous methods and generalizes to new actions and new instances of the material categories in the dataset.
UR - http://www.scopus.com/inward/record.url?scp=85092709202&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85092709202&partnerID=8YFLogxK
U2 - 10.1109/ICRA40945.2020.9197447
DO - 10.1109/ICRA40945.2020.9197447
M3 - Conference contribution
AN - SCOPUS:85092709202
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 11088
EP - 11095
BT - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
Y2 - 31 May 2020 through 31 August 2020
ER -