A CNN Based Vision-Proprioception Fusion Method for Robust UGV Terrain Classification

Yu Chen, Chirag Rastogi, William R. Norris

Research output: Contribution to journal › Article › peer-review

Abstract

The ability of ground vehicles to identify terrain types and characteristics can help provide more accurate localization and information-rich mapping solutions. Previous studies have shown that terrain types can be classified using proprioceptive sensors that monitor wheel-terrain interactions. However, most such methods work well only under strict motion restrictions, such as driving in a straight path at constant speed, making them difficult to deploy in real-world field robotic missions. To lift this restriction, this letter proposes a fast, compact, and motion-robust proprioception-based terrain classification method. The method uses common on-board UGV sensors and a 1D Convolutional Neural Network (CNN) model. Its accuracy was further improved by fusing it with a vision-based CNN that classified terrain by its appearance. Experimental results indicate that the final fusion models were highly robust, achieving over 93% accuracy under various lighting conditions and motion maneuvers.
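As an illustration of the kind of architecture the abstract describes, the sketch below shows a 1D CNN branch over proprioceptive time series fused with a small 2D CNN branch over terrain images. This is not the authors' released code; the layer sizes, the number of proprioceptive channels (6), the window length (200 samples), the image resolution (64x64), and the number of terrain classes (5) are all illustrative assumptions.

```python
# Hypothetical sketch of a vision-proprioception fusion classifier (PyTorch).
import torch
import torch.nn as nn

class ProprioBranch(nn.Module):
    """1D CNN over stacked proprioceptive channels (e.g., IMU, wheel odometry)."""
    def __init__(self, in_channels: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # -> (batch, 64, 1)
        )

    def forward(self, x):                       # x: (batch, channels, time)
        return self.net(x).flatten(1)           # -> (batch, 64)

class VisionBranch(nn.Module):
    """Small 2D CNN over forward-facing terrain images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # -> (batch, 32, 1, 1)
        )

    def forward(self, x):                       # x: (batch, 3, H, W)
        return self.net(x).flatten(1)           # -> (batch, 32)

class FusionClassifier(nn.Module):
    """Concatenate the two feature vectors and classify the terrain type."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.proprio = ProprioBranch()
        self.vision = VisionBranch()
        self.head = nn.Sequential(
            nn.Linear(64 + 32, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, signals, images):
        feats = torch.cat([self.proprio(signals), self.vision(images)], dim=1)
        return self.head(feats)

if __name__ == "__main__":
    model = FusionClassifier()
    signals = torch.randn(4, 6, 200)            # 4 windows, 6 channels, 200 timesteps
    images = torch.randn(4, 3, 64, 64)          # 4 RGB terrain images
    print(model(signals, images).shape)         # torch.Size([4, 5])
```

A simple feature-level concatenation is used here for clarity; the paper's actual fusion strategy and layer configuration may differ.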

Original language: English (US)
Article number: 9507312
Pages (from-to): 7965-7972
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 6
Issue number: 4
DOIs
State: Published - Oct 2021

Keywords

  • Deep learning
  • Field robots
  • Machine learning
  • Robot sensing systems
  • Sensor fusion

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence
