TY - GEN
T1 - Crowdsourcing video-based workface assessment for construction activity analysis
AU - Liu, Kaijian
AU - Golparvar-Fard, Mani
PY - 2015
Y1 - 2015
AB - Today, the availability of multiple cameras on every jobsite is reshaping the way construction activities are monitored. Research has focused on addressing the limitations of manual workface assessment from these videos via computer vision algorithms. Despite the rapid growth of these algorithms, the ability to automatically recognize worker and equipment activities from videos is still limited. By crowdsourcing the task of workface assessment from jobsite videos, this paper aims to overcome the limitations of current practice and provides a large empirical dataset that can serve as the basis for developing video-based activity recognition methods. As such, an intuitive web-based platform for massive marketplaces such as Amazon Mechanical Turk (AMT) is introduced that engages the intelligence of a non-expert crowd to interpret a selected group of frames from these videos and then automates the remaining workface assessment tasks based on these initial interpretations. To validate the approach, several experiments are conducted on videos from concrete placement operations. The results show that engaging AMT non-experts together with computer vision algorithms can provide assessment results with an accuracy of 85%. This minimizes the time needed for workface assessment and allows practitioners to focus their time on the more important task of root-cause analysis for performance improvements. The platform also provides large datasets with ground truth for algorithmic development purposes.
KW - Computer vision
KW - Construction productivity
KW - Crowdsourcing
KW - Workface assessment
UR - http://www.scopus.com/inward/record.url?scp=85088747402&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85088747402&partnerID=8YFLogxK
U2 - 10.22260/isarc2015/0009
DO - 10.22260/isarc2015/0009
M3 - Conference contribution
AN - SCOPUS:85088747402
T3 - 32nd International Symposium on Automation and Robotics in Construction and Mining: Connected to the Future, Proceedings
BT - 32nd International Symposium on Automation and Robotics in Construction and Mining
PB - International Association for Automation and Robotics in Construction (I.A.A.R.C.)
T2 - 32nd International Symposium on Automation and Robotics in Construction and Mining: Connected to the Future, ISARC 2015
Y2 - 15 June 2015 through 18 June 2015
ER -