TY - JOUR
T1 - MaDnet: multi-task semantic segmentation of multiple types of structural materials and damage in images of civil infrastructure
T2 - Journal of Civil Structural Health Monitoring
AU - Hoskere, Vedhus
AU - Narazaki, Yasutaka
AU - Hoang, Tu A.
AU - Spencer, B. F.
N1 - Publisher Copyright:
© 2020, Springer-Verlag GmbH Germany, part of Springer Nature.
PY - 2020/11/1
Y1 - 2020/11/1
N2 - Manual visual inspection is the most common means of assessing the condition of civil infrastructure in the United States, but can be exceedingly laborious, time-consuming, and dangerous. Research has focused on automating parts of the inspection process using unmanned aerial vehicles for image acquisition, followed by deep learning techniques for damage identification. Existing deep learning methods and datasets for inspections have typically been developed for a single damage type. However, most inspection guidelines require the identification of multiple damage types and describe evaluating the significance of the damage based on the associated material type. Thus, identifying the material type is important in understanding the meaning of the identified damage. Training separate networks for the tasks of material and damage identification fails to incorporate this intrinsic interdependence between them. We hypothesize that a network that incorporates such interdependence directly will achieve better accuracy in material and damage identification. To this end, a deep neural network, termed the material-and-damage-network (MaDnet), is proposed to simultaneously identify material type (concrete, steel, asphalt), as well as fine (cracks, exposed rebar) and coarse (spalling, corrosion) structural damage. In this approach, semantic segmentation (i.e., assigning each pixel in the image both a material and a damage label) is employed, where the interdependence between material and damage is incorporated through shared filters learned via multi-objective optimization. A new dataset with pixel-level labels identifying the material and damage type is developed and made available to the research community. Finally, the dataset is used to evaluate MaDnet and demonstrate the improvement in pixel accuracy over employing independent networks.
KW - Computer vision
KW - Damage detection
KW - Multi-task learning
KW - Semantic segmentation
KW - Structural inspections
UR - http://www.scopus.com/inward/record.url?scp=85086161453&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85086161453&partnerID=8YFLogxK
DO - 10.1007/s13349-020-00409-0
M3 - Article
AN - SCOPUS:85086161453
SN - 2190-5452
VL - 10
SP - 757
EP - 773
JO - Journal of Civil Structural Health Monitoring
JF - Journal of Civil Structural Health Monitoring
IS - 5
ER -