Gesture based training of robots for manufacturing tasks

Pramod Chembrammel, T Kesavadas

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


We report our work-in-progress on a new method to train an industrial robot that learns from demonstrations of manufacturing tasks by a skilled worker (trainer). A parametrized learning engine is trained on identifiable features of the trainer's body and the objects in the scene. To achieve this, we collected a large set of depth data. Different objects in the scene are clustered using a Gaussian mixture model and are manually labelled. Features are engineered to train a random decision forest. Feature engineering is required because the dimensionality of the samples (the number of depth points) varies across depth captures. Depth samples are therefore transformed into a lower-dimensional space of 96 dimensions defined by the means and covariances of the data distribution. This method achieves a classification accuracy of 80.72%. Using these features, the robot can identify parts in real time, tagging and tracking them as the trainer moves them during the demonstration. Our ongoing work is on semantic classification of the tracked data into high-level actions, which will be combined using a set of rules called an action-grammar.
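The abstract's feature-engineering step can be illustrated with a short sketch: a variable-size depth point cloud is summarized by the means and covariances of a fitted Gaussian mixture, yielding a fixed-length vector that a random decision forest can consume. This is a minimal illustration, not the authors' implementation; the choice of 8 mixture components (8 × (3 mean + 9 covariance entries) = 96 dimensions), the component ordering, and the synthetic data are all assumptions made here for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier

# Hypothetical choice: 8 components in 3-D gives 8 * (3 + 9) = 96 features,
# matching the 96-dimensional space mentioned in the abstract.
N_COMPONENTS = 8

def depth_to_features(points, n_components=N_COMPONENTS):
    """Map a variable-size depth point cloud (N x 3) to a fixed 96-dim vector."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full",
                          random_state=0).fit(points)
    # Sort components by the x-coordinate of the mean so the
    # feature ordering is stable across clouds.
    order = np.argsort(gmm.means_[:, 0])
    means = gmm.means_[order]        # shape (8, 3)
    covs = gmm.covariances_[order]   # shape (8, 3, 3)
    return np.concatenate([means.ravel(), covs.ravel()])  # 24 + 72 = 96

# Synthetic stand-in for labelled depth captures of two object classes:
# clouds of varying size, so the raw dimensionality differs per sample.
rng = np.random.default_rng(0)
clouds = [rng.normal(loc=i, size=(int(rng.integers(200, 400)), 3))
          for i in range(20)]
labels = [i % 2 for i in range(20)]

# Fixed-length features make a random decision forest applicable.
X = np.stack([depth_to_features(c) for c in clouds])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```

Because every cloud is reduced to the same 96 numbers regardless of how many depth points were captured, the classifier is insulated from the sensor-dependent variation in sample size that the abstract identifies as the motivation for this step.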

Original language: English (US)
Title of host publication: Advanced Manufacturing
Publisher: American Society of Mechanical Engineers (ASME)
ISBN (Electronic): 9780791850527
State: Published - 2016
Event: ASME 2016 International Mechanical Engineering Congress and Exposition, IMECE 2016 - Phoenix, United States
Duration: Nov 11 2016 - Nov 17 2016



ASJC Scopus subject areas

  • Mechanical Engineering


