Vision-based action recognition of earthmoving equipment using spatio-temporal features and support vector machine classifiers

Mani Golparvar-Fard, Arsalan Heydarian, Juan Carlos Niebles

Research output: Contribution to journal › Article › peer-review

Abstract

Video recordings of earthmoving construction operations provide understandable data that can be used for benchmarking and analyzing their performance. Such recordings further help project managers take corrective actions on performance deviations and, in turn, improve operational efficiency. Despite these benefits, manual stopwatch studies of previously recorded videos can be labor-intensive, may suffer from observer bias, and become impractical after substantial periods of observation. This paper presents a new computer vision-based algorithm for recognizing single actions of earthmoving construction equipment. This is a particularly challenging task, as equipment can be partially occluded in site video streams and typically comes in a wide variety of sizes and appearances. The scale and pose of the equipment actions can also vary significantly with the camera configuration. In the proposed method, a video is first represented as a collection of spatio-temporal visual features by extracting space-time interest points and describing each with a Histogram of Oriented Gradients (HOG). The algorithm then learns the distributions of the spatio-temporal features and action categories using a multi-class Support Vector Machine (SVM) classifier; this strategy handles noisy feature points arising from typical dynamic backgrounds. Given a video sequence captured from a fixed camera, the multi-class SVM classifier recognizes and localizes equipment actions. For evaluation, a new video dataset is introduced that contains 859 sequences of excavator and truck actions. The dataset exhibits large variations in equipment pose and scale, as well as varied backgrounds and levels of occlusion. The experimental results, with average accuracies of 86.33% and 98.33% for excavator and truck action recognition respectively, show that the proposed supervised method outperforms previous algorithms. The results hold promise for the applicability of the proposed method to construction activity analysis.
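
The following is a minimal sketch of the classification stage described in the abstract, assuming a bag-of-features aggregation (via k-means) over per-clip HOG descriptors of space-time interest points, followed by a multi-class SVM. The descriptor dimensionality, codebook size, synthetic data, class labels, and function names are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Stand-in for the feature-extraction stage: each clip yields a variable
    # number of 72-D HOG descriptors around detected space-time interest points
    # (dimensionality chosen arbitrarily for illustration).
    def fake_clip_descriptors(n_points):
        return rng.normal(size=(n_points, 72))

    clips = [fake_clip_descriptors(int(rng.integers(20, 60))) for _ in range(30)]
    labels = rng.integers(0, 3, size=len(clips))  # hypothetical action classes 0..2

    # Quantize all descriptors into a small visual codebook (bag-of-features assumption).
    codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(clips))

    def clip_histogram(descriptors):
        # Represent one clip as a normalized histogram of codeword occurrences.
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=16).astype(float)
        return hist / hist.sum()

    X = np.array([clip_histogram(c) for c in clips])

    # Multi-class SVM over the per-clip histograms (SVC handles multi-class natively).
    clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
    print(clf.predict(X[:5]))  # predicted action labels for the first five clips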

Original language: English (US)
Pages (from-to): 652-663
Number of pages: 12
Journal: Advanced Engineering Informatics
Volume: 27
Issue number: 4
DOIs
State: Published - Oct 2013

Keywords

  • Action recognition
  • Activity analysis
  • Computer vision
  • Construction productivity
  • Operational efficiency
  • Time-studies

ASJC Scopus subject areas

  • Information Systems
  • Artificial Intelligence
