TY - GEN
T1 - Automated worker activity analysis in indoor environments for direct-work rate improvement from long sequences of RGB-D images
AU - Khosrowpour, Ardalan
AU - Fedorov, Igor
AU - Holynski, Aleksander
AU - Niebles, Juan Carlos
AU - Golparvar-Fard, Mani
PY - 2014
Y1 - 2014
N2 - This paper presents a new method for activity analysis of construction workers using inexpensive RGB+depth sensors. This is an important task, as no current workforce assessment method can provide detailed and continuous information to help project managers identify bottlenecks affecting labor productivity. Previous work using RGB-D images focuses on action recognition from short video sequences wherein only one action is represented within each video. Automating this analysis for long sequences of RGB-D images is challenging because the start and end of each action are unknown, recognizing single actions is still challenging, and there are no data sets or validation metrics to evaluate algorithms. Given an input sequence of RGB-D images, our algorithm divides it into temporal segments and automatically classifies the observed actions. To do so, the algorithm first detects body postures in real time. Then a kernel density estimation (KDE) model is trained to model classification scores from discriminatively trained bag-of-poses action classifiers. Further, a hidden Markov model (HMM) labels sequences of actions that are most discriminative. The performance of our model is tested on unprecedented data sets of actual drywall construction operations. Experimental results, in addition to the perceived benefits and limitations of the proposed method, are discussed in detail.
AB - This paper presents a new method for activity analysis of construction workers using inexpensive RGB+depth sensors. This is an important task, as no current workforce assessment method can provide detailed and continuous information to help project managers identify bottlenecks affecting labor productivity. Previous work using RGB-D images focuses on action recognition from short video sequences wherein only one action is represented within each video. Automating this analysis for long sequences of RGB-D images is challenging because the start and end of each action are unknown, recognizing single actions is still challenging, and there are no data sets or validation metrics to evaluate algorithms. Given an input sequence of RGB-D images, our algorithm divides it into temporal segments and automatically classifies the observed actions. To do so, the algorithm first detects body postures in real time. Then a kernel density estimation (KDE) model is trained to model classification scores from discriminatively trained bag-of-poses action classifiers. Further, a hidden Markov model (HMM) labels sequences of actions that are most discriminative. The performance of our model is tested on unprecedented data sets of actual drywall construction operations. Experimental results, in addition to the perceived benefits and limitations of the proposed method, are discussed in detail.
UR - http://www.scopus.com/inward/record.url?scp=84904671283&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84904671283&partnerID=8YFLogxK
U2 - 10.1061/9780784413517.0075
DO - 10.1061/9780784413517.0075
M3 - Conference contribution
AN - SCOPUS:84904671283
SN - 9780784413517
T3 - Construction Research Congress 2014: Construction in a Global Network - Proceedings of the 2014 Construction Research Congress
SP - 729
EP - 738
BT - Construction Research Congress 2014
PB - American Society of Civil Engineers
T2 - 2014 Construction Research Congress: Construction in a Global Network, CRC 2014
Y2 - 19 May 2014 through 21 May 2014
ER -