TY - JOUR
T1 - A CNN-based vision-proprioception fusion method for robust UGV terrain classification
AU - Chen, Yu
AU - Rastogi, Chirag
AU - Norris, William R.
N1 - Funding Information:
Manuscript received February 24, 2021; accepted July 13, 2021. Date of publication August 4, 2021; date of current version August 20, 2021. This letter was recommended for publication by Associate Editor B. Duncan and Editor P. Pounds upon evaluation of the reviewers’ comments. This work was supported in part by NSF NRI-2.0. (Corresponding author: Yu Chen.) Yu Chen is with the Department of Mechanical Science & Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801 USA (e-mail: [email protected]).
Publisher Copyright:
© 2021 IEEE.
PY - 2021/10
Y1 - 2021/10
AB - The ability of ground vehicles to identify terrain types and characteristics can enable more accurate localization and information-rich mapping solutions. Previous studies have shown the possibility of classifying terrain types based on proprioceptive sensors that monitor wheel-terrain interactions. However, most methods work well only when strict motion restrictions are imposed, such as driving in a straight path at constant speed, making them difficult to deploy on real-world field robotic missions. To lift this restriction, this letter proposes a fast, compact, and motion-robust proprioception-based terrain classification method. This method uses common on-board UGV sensors and a 1D Convolutional Neural Network (CNN) model. The accuracy of this model was further improved by fusing it with a vision-based CNN that classified terrain based on its appearance. Experimental results indicated that the final fusion models were highly robust, achieving over 93% accuracy under various lighting conditions and motion maneuvers.
KW - Deep learning
KW - Field robots
KW - Machine learning
KW - Robot sensing systems
KW - Sensor fusion
UR - http://www.scopus.com/inward/record.url?scp=85112657995&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112657995&partnerID=8YFLogxK
U2 - 10.1109/LRA.2021.3101866
DO - 10.1109/LRA.2021.3101866
M3 - Article
AN - SCOPUS:85112657995
SN - 2377-3766
VL - 6
SP - 7965
EP - 7972
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 4
M1 - 9507312
ER -