TY - JOUR
T1 - Human-object interaction recognition for automatic construction site safety inspection
AU - Tang, Shuai
AU - Roberts, Dominic
AU - Golparvar-Fard, Mani
N1 - Funding Information:
The authors would like to thank Kevin Shih, Derek Hoiem, and RAAMAC lab students for their suggestions and support for this paper. This material is based in part upon work supported by the National Science Foundation (NSF) under Grants CMMI 1446765 and CMMI 1544999. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.
Publisher Copyright:
© 2020 Elsevier B.V.
PY - 2020/12
Y1 - 2020/12
N2 - Today, computer vision object detection methods are used for safety inspections from site videos and images. These methods detect bounding boxes and apply hand-crafted rules to enable personal protective equipment compliance checks. This paper presents a new method that improves the breadth and depth of vision-based safety compliance checking by explicitly classifying worker-tool interactions. A detection model is trained on a newly constructed image dataset for construction sites, achieving 52.9% mean average precision for 10 object categories and 89.4% average precision for detecting workers. Using this detector and the new dataset, the proposed human-object interaction recognition model achieves 79.78% precision and 77.64% recall for hard hat checking, and 79.11% precision and 75.29% recall for safety coloring checking. The model also verifies hand protection for workers while tools are in use, with 66.2% precision and 64.86% recall. On these checking tasks, the proposed model outperforms both post-processing detected objects with hand-crafted rules and using detected objects alone.
AB - Today, computer vision object detection methods are used for safety inspections from site videos and images. These methods detect bounding boxes and apply hand-crafted rules to enable personal protective equipment compliance checks. This paper presents a new method that improves the breadth and depth of vision-based safety compliance checking by explicitly classifying worker-tool interactions. A detection model is trained on a newly constructed image dataset for construction sites, achieving 52.9% mean average precision for 10 object categories and 89.4% average precision for detecting workers. Using this detector and the new dataset, the proposed human-object interaction recognition model achieves 79.78% precision and 77.64% recall for hard hat checking, and 79.11% precision and 75.29% recall for safety coloring checking. The model also verifies hand protection for workers while tools are in use, with 66.2% precision and 64.86% recall. On these checking tasks, the proposed model outperforms both post-processing detected objects with hand-crafted rules and using detected objects alone.
KW - Computer vision
KW - Construction management
KW - Human-object interaction
KW - Safety inspections
UR - http://www.scopus.com/inward/record.url?scp=85088915044&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85088915044&partnerID=8YFLogxK
U2 - 10.1016/j.autcon.2020.103356
DO - 10.1016/j.autcon.2020.103356
M3 - Article
AN - SCOPUS:85088915044
SN - 0926-5805
VL - 120
JO - Automation in Construction
JF - Automation in Construction
M1 - 103356
ER -