Improved object-based convolutional neural network (IOCNN) to classify very high-resolution remote sensing images

Xianwei Lv, Zhenfeng Shao, Dongping Ming, Chunyuan Diao, Keqi Zhou, Chengzhuo Tong

Research output: Contribution to journal › Article › peer-review


The land cover classification of very high-resolution (VHR) remote sensing images is a challenging task, as VHR images depict many complex objects with various shapes in complicated contexts. Deep learning-based methods offer a solution to this difficult classification and feature extraction task; nevertheless, they cannot efficiently handle images with complex scene structures. In this study, an improved object-based convolutional neural network (IOCNN) is designed to classify VHR images using zone division and convolutional position sampling techniques. The method achieves the best performance for each zone at its own optimized scales, and objects with irregular shapes can be classified based on multi-scale convolutional deep features extracted from the VHR images. Zone-level scale adaptation and multi-scale recognition of complex objects are thereby achieved. The performance of IOCNN is compared with state-of-the-art feature extraction methods, including five object-based CNN approaches and two fully convolutional networks (FCNs). The results show that the classification performance of IOCNN is considerably stronger than that of the state-of-the-art methods: the overall accuracies of the land cover classification with IOCNN are 91.65% and 93.49% on the two tested images, demonstrating the practicability of IOCNN.
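The two ideas named in the abstract, zone division with a per-zone optimized scale and patch sampling at object positions, can be sketched as follows. This is an illustrative toy, not the authors' implementation: the grid-based zone partition, the per-zone scale table, and the mean-intensity threshold (a stand-in for CNN feature extraction and prediction) are all placeholder assumptions.

```python
# Hypothetical sketch of zone division + position-based patch sampling.
# The zone grid, scale table, and threshold "classifier" are assumptions,
# not the IOCNN method itself.
import numpy as np

def zone_of(pos, image_shape, grid=(2, 2)):
    """Map a pixel position to a zone index on a simple grid partition."""
    r, c = pos
    h, w = image_shape
    zr = min(int(r / h * grid[0]), grid[0] - 1)
    zc = min(int(c / w * grid[1]), grid[1] - 1)
    return zr * grid[1] + zc

def sample_patch(image, pos, scale):
    """Crop a scale x scale patch centred at an object position (clipped at borders)."""
    r, c = pos
    half = scale // 2
    r0, r1 = max(r - half, 0), min(r + half + 1, image.shape[0])
    c0, c1 = max(c - half, 0), min(c + half + 1, image.shape[1])
    return image[r0:r1, c0:c1]

def classify_objects(image, positions, zone_scales):
    """For each object position, look up the scale optimized for its zone,
    sample a patch there, and label it with a toy threshold rule
    (stand-in for CNN feature extraction + prediction)."""
    labels = []
    for pos in positions:
        scale = zone_scales[zone_of(pos, image.shape)]
        patch = sample_patch(image, pos, scale)
        labels.append(1 if patch.mean() > 0.5 else 0)
    return labels
```

In a real pipeline, `classify_objects` would feed each sampled patch to a trained CNN; the point of the sketch is only that the patch scale is chosen per zone rather than globally.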

Original language: English (US)
Pages (from-to): 8318-8344
Number of pages: 27
Journal: International Journal of Remote Sensing
Issue number: 21
State: Published - 2021

ASJC Scopus subject areas

  • Earth and Planetary Sciences (all)


