TY - GEN
T1 - Characterizing Construction Equipment Activities in Long Video Sequences of Earthmoving Operations via Kinematic Features
AU - Bao, Ruxiao
AU - Sadeghi, Mohammad Amin
AU - Golparvar-Fard, Mani
N1 - Publisher Copyright:
© ASCE.
PY - 2016
Y1 - 2016
N2 - This paper presents a fast and scalable method for activity analysis of construction equipment involved in earthmoving operations from highly varying long-sequence videos obtained from fixed cameras. A common approach to characterizing equipment activities consists of detecting and tracking the equipment within the video volume, recognizing interest points and describing them locally, followed by a bag-of-words representation for classifying activities. While successful results have been achieved in each aspect of detection, tracking, and activity recognition, the highly varying degree of intra-class variability in resources, occlusions and scene clutter, the difficulties in defining visually-distinct activities, together with long computational times have challenged the scalability of current solutions. In this paper, we present a new end-to-end automated method to recognize equipment activities by simultaneously detecting and tracking features, and characterizing the spatial kinematics of features via a decision tree. The method is tested on an unprecedented dataset of 5-hr-long real-world videos of interacting pairs of excavators and trucks. Experimental results show that the method is capable of activity recognition with an accuracy of 88.91%, at a computational time of less than a one-to-one ratio to video length. The benefits of the proposed method for root-cause assessment of performance deviations are discussed.
UR - http://www.scopus.com/inward/record.url?scp=84976385765&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84976385765&partnerID=8YFLogxK
U2 - 10.1061/9780784479827.086
DO - 10.1061/9780784479827.086
M3 - Conference contribution
AN - SCOPUS:84976385765
T3 - Construction Research Congress 2016: Old and New Construction Technologies Converge in Historic San Juan - Proceedings of the 2016 Construction Research Congress, CRC 2016
SP - 849
EP - 858
BT - Construction Research Congress 2016
A2 - Perdomo-Rivera, Jose L.
A2 - Lopez del Puerto, Carla
A2 - Gonzalez-Quevedo, Antonio
A2 - Maldonado-Fortunet, Francisco
A2 - Molina-Bas, Omar I.
PB - American Society of Civil Engineers
T2 - Construction Research Congress 2016: Old and New Construction Technologies Converge in Historic San Juan, CRC 2016
Y2 - 31 May 2016 through 2 June 2016
ER -