Vision-based automated bridge component recognition with high-level scene consistency

Yasutaka Narazaki, Vedhus Hoskere, Tu A. Hoang, Yozo Fujino, Akito Sakurai, Billie F. Spencer

Research output: Contribution to journal › Article

Abstract

This research investigates vision-based automated bridge component recognition, which is critical for automating visual inspection of bridges during the initial response after earthquakes. Semantic segmentation algorithms with up to 45 convolutional layers are applied to recognize bridge components from images of complex scenes. One of the challenges in such scenarios is to obtain recognition results that are consistent with the high-level scene structure using a limited amount of training data. To impose this high-level scene consistency, this research combines 10-class scene classification and 5-class bridge component classification. Three approaches are investigated for combining scene classification results with bridge component classification: (a) naïve configuration, (b) parallel configuration, and (c) sequential configuration of classifiers. The proposed approaches, the sequential configuration in particular, are demonstrated to be effective in recognizing bridge components in complex scenes, showing less than 1% accuracy loss relative to the naïve/parallel configurations for bridge images, and less than 1% false positives for nonbridge images.
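
The abstract describes combining a 10-class scene classifier with a 5-class bridge component classifier in naïve, parallel, or sequential configurations. As a rough, hypothetical sketch of the sequential idea only (not the authors' implementation, which is detailed in the full paper), the Python/PyTorch snippet below conditions a per-pixel component classifier on the scene classifier's output by concatenating scene-class probabilities with the input image; the network depths, layer widths, and fusion-by-concatenation choice are all illustrative assumptions.

# Hypothetical sketch (not the authors' code): a "sequential" configuration in
# which per-pixel scene-class probabilities are concatenated with the RGB image
# and fed to a bridge component segmentation head, so component predictions can
# be conditioned on high-level scene context.
import torch
import torch.nn as nn

def conv_block(in_channels, out_channels):
    # Shallow stand-in for the much deeper networks referenced in the abstract
    # (up to 45 convolutional layers), kept small for readability.
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class SequentialSceneComponentNet(nn.Module):
    def __init__(self, n_scene_classes=10, n_component_classes=5):
        super().__init__()
        # Stage 1: per-pixel scene classification (10 scene classes).
        self.scene_head = nn.Sequential(
            conv_block(3, 32),
            nn.Conv2d(32, n_scene_classes, kernel_size=1),
        )
        # Stage 2: bridge component classification (5 classes), which sees the
        # image plus the scene-class probabilities as extra input channels.
        self.component_head = nn.Sequential(
            conv_block(3 + n_scene_classes, 32),
            nn.Conv2d(32, n_component_classes, kernel_size=1),
        )

    def forward(self, image):
        scene_logits = self.scene_head(image)            # (B, 10, H, W)
        scene_probs = torch.softmax(scene_logits, dim=1)
        fused = torch.cat([image, scene_probs], dim=1)   # (B, 13, H, W)
        component_logits = self.component_head(fused)    # (B, 5, H, W)
        return scene_logits, component_logits

if __name__ == "__main__":
    net = SequentialSceneComponentNet()
    x = torch.randn(1, 3, 128, 128)
    scene, components = net(x)
    print(scene.shape, components.shape)  # (1, 10, 128, 128) and (1, 5, 128, 128)

In a parallel configuration, by contrast, the two classifiers would run independently on the image and their outputs would be combined afterward; the exact combination rules used by the authors are not specified in the abstract.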

Original language: English (US)
Journal: Computer-Aided Civil and Infrastructure Engineering
DOI: 10.1111/mice.12505
State: Accepted/In press - Jan 1 2019

Fingerprint

  • Bridge components
  • Earthquakes
  • Classifiers
  • Inspection
  • Semantics

ASJC Scopus subject areas

  • Civil and Structural Engineering
  • Computer Science Applications
  • Computer Graphics and Computer-Aided Design
  • Computational Theory and Mathematics

Cite this

Vision-based automated bridge component recognition with high-level scene consistency. / Narazaki, Yasutaka; Hoskere, Vedhus; Hoang, Tu A.; Fujino, Yozo; Sakurai, Akito; Spencer, Billie F.

In: Computer-Aided Civil and Infrastructure Engineering, 01.01.2019.

Research output: Contribution to journal › Article

@article{dda0fd979efe46dd8b2bb08758e42097,
title = "Vision-based automated bridge component recognition with high-level scene consistency",
abstract = "This research investigates vision-based automated bridge component recognition, which is critical for automating visual inspection of bridges during initial response after earthquakes. Semantic segmentation algorithms with up to 45 convolutional layers are applied to recognize bridge components from images of complex scenes. One of the challenges in such scenarios is to get the recognition results consistent with high-level scene structure using limited amount of training data. To impose the high-level scene consistency, this research combines 10-class scene classification and 5-class bridge component classification. Three approaches are investigated to combine scene classification results into bridge component classification: (a) na{\"i}ve configuration, (b) parallel configuration, and (c) sequential configuration of classifiers. The proposed approaches, sequential configuration in particular, are demonstrated to be effective in recognizing bridge components in complex scenes, showing less than 1{\%} of accuracy loss from the na{\"i}ve/parallel configuration for bridge images, and less than 1{\%} false positives for the nonbridge images.",
author = "Yasutaka Narazaki and Vedhus Hoskere and Hoang, {Tu A.} and Yozo Fujino and Akito Sakurai and Spencer, {Billie F.}",
year = "2019",
month = "1",
day = "1",
doi = "10.1111/mice.12505",
language = "English (US)",
journal = "Computer-Aided Civil and Infrastructure Engineering",
issn = "1093-9687",
publisher = "Wiley-Blackwell",

}

TY - JOUR

T1 - Vision-based automated bridge component recognition with high-level scene consistency

AU - Narazaki, Yasutaka

AU - Hoskere, Vedhus

AU - Hoang, Tu A.

AU - Fujino, Yozo

AU - Sakurai, Akito

AU - Spencer, Billie F.

PY - 2019/1/1

Y1 - 2019/1/1

N2 - This research investigates vision-based automated bridge component recognition, which is critical for automating visual inspection of bridges during initial response after earthquakes. Semantic segmentation algorithms with up to 45 convolutional layers are applied to recognize bridge components from images of complex scenes. One of the challenges in such scenarios is to get the recognition results consistent with high-level scene structure using limited amount of training data. To impose the high-level scene consistency, this research combines 10-class scene classification and 5-class bridge component classification. Three approaches are investigated to combine scene classification results into bridge component classification: (a) naïve configuration, (b) parallel configuration, and (c) sequential configuration of classifiers. The proposed approaches, sequential configuration in particular, are demonstrated to be effective in recognizing bridge components in complex scenes, showing less than 1% of accuracy loss from the naïve/parallel configuration for bridge images, and less than 1% false positives for the nonbridge images.

AB - This research investigates vision-based automated bridge component recognition, which is critical for automating visual inspection of bridges during initial response after earthquakes. Semantic segmentation algorithms with up to 45 convolutional layers are applied to recognize bridge components from images of complex scenes. One of the challenges in such scenarios is to get the recognition results consistent with high-level scene structure using limited amount of training data. To impose the high-level scene consistency, this research combines 10-class scene classification and 5-class bridge component classification. Three approaches are investigated to combine scene classification results into bridge component classification: (a) naïve configuration, (b) parallel configuration, and (c) sequential configuration of classifiers. The proposed approaches, sequential configuration in particular, are demonstrated to be effective in recognizing bridge components in complex scenes, showing less than 1% of accuracy loss from the naïve/parallel configuration for bridge images, and less than 1% false positives for the nonbridge images.

UR - http://www.scopus.com/inward/record.url?scp=85074359524&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85074359524&partnerID=8YFLogxK

U2 - 10.1111/mice.12505

DO - 10.1111/mice.12505

M3 - Article

AN - SCOPUS:85074359524

JO - Computer-Aided Civil and Infrastructure Engineering

JF - Computer-Aided Civil and Infrastructure Engineering

SN - 1093-9687

ER -