TY - GEN
T1 - Gesture based training of robots for manufacturing tasks
AU - Chembrammel, Pramod
AU - Kesavadas, T
N1 - Publisher Copyright:
Copyright © 2016 by ASME.
PY - 2016
Y1 - 2016
N2 - We report our work-in-progress on a new method for training an industrial robot that learns manufacturing tasks from demonstrations by a skilled worker (trainer). A parametrized learning engine is trained on identifiable features of the trainer's body and the objects in the scene. To achieve this, we collected a large amount of depth data. Different objects in the scene are clustered using a Gaussian mixture model and manually labelled. Features are engineered to train a random decision forest. Feature engineering is required because the number of dimensions (number of depth points) per sample varies with the depth capture. Depth samples are transformed to a lower-dimensional space of 96 dimensions defined by the means and covariances of the data distribution. This method achieves a classification accuracy of 80.72%. Using these features, the robot can identify parts in real time, tagging and tracking them as the trainer moves them during the demonstration. Our ongoing work focuses on the semantic classification of the tracked data into high-level actions, which will be combined using a set of rules called an action-grammar.
AB - We report our work-in-progress on a new method for training an industrial robot that learns manufacturing tasks from demonstrations by a skilled worker (trainer). A parametrized learning engine is trained on identifiable features of the trainer's body and the objects in the scene. To achieve this, we collected a large amount of depth data. Different objects in the scene are clustered using a Gaussian mixture model and manually labelled. Features are engineered to train a random decision forest. Feature engineering is required because the number of dimensions (number of depth points) per sample varies with the depth capture. Depth samples are transformed to a lower-dimensional space of 96 dimensions defined by the means and covariances of the data distribution. This method achieves a classification accuracy of 80.72%. Using these features, the robot can identify parts in real time, tagging and tracking them as the trainer moves them during the demonstration. Our ongoing work focuses on the semantic classification of the tracked data into high-level actions, which will be combined using a set of rules called an action-grammar.
UR - http://www.scopus.com/inward/record.url?scp=85021656633&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85021656633&partnerID=8YFLogxK
U2 - 10.1115/IMECE2016-68206
DO - 10.1115/IMECE2016-68206
M3 - Conference contribution
AN - SCOPUS:85021656633
T3 - ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)
BT - Advanced Manufacturing
PB - American Society of Mechanical Engineers (ASME)
T2 - ASME 2016 International Mechanical Engineering Congress and Exposition, IMECE 2016
Y2 - 11 November 2016 through 17 November 2016
ER -