SODA: Detecting COVID-19 in Chest X-rays with Semi-supervised Open Set Domain Adaptation

Jieli Zhou, Baoyu Jing, Zeya Wang, Hongyi Xin, Hanghang Tong

Research output: Contribution to journal › Article › peer-review

Abstract

Due to the shortage of COVID-19 viral testing kits, radiology is used to complement the screening process. Deep learning methods are promising for automatically detecting COVID-19 in chest X-ray images. Most of these works first train a Convolutional Neural Network (CNN) on an existing large-scale chest X-ray image dataset and then fine-tune the model on a newly collected, and typically much smaller, COVID-19 chest X-ray dataset. However, simple fine-tuning may yield poor performance for two reasons: the large domain shift between chest X-ray datasets, and the relatively small scale of the COVID-19 dataset. To address these issues, we formulate COVID-19 chest X-ray image classification as a semi-supervised open set domain adaptation problem and propose a novel domain adaptation method, the Semi-supervised Open set Domain Adversarial network (SODA). SODA is designed to align the data distributions across domains both in the general domain space and in the common subspace of the source and target data. In our experiments, SODA achieves leading classification performance compared with recent state-of-the-art models in separating COVID-19 from common pneumonia. We also show that SODA produces better pathology localizations.
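The adversarial alignment the abstract describes is typically realized with a gradient reversal layer: shared features pass unchanged to a domain classifier in the forward pass, while in the backward pass the sign of the domain-classifier gradient is flipped, so the feature extractor learns domain-invariant representations. The sketch below illustrates only this generic mechanism with NumPy; the names (`grl_forward`, `grl_backward`, `lam`) are illustrative assumptions, not SODA's actual implementation.

```python
import numpy as np

def grl_forward(x):
    # Gradient Reversal Layer: identity in the forward pass.
    return x

def grl_backward(grad, lam=1.0):
    # Backward pass: flip the sign (scaled by lam) so the feature
    # extractor is updated to *confuse* the domain classifier,
    # pushing source and target feature distributions together.
    return -lam * grad

# Toy setup: shared features feed a label head and a domain head.
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))              # batch of 4, 3-dim features
grad_from_label_head = rng.normal(size=(4, 3))  # gradient of label loss
grad_from_domain_head = rng.normal(size=(4, 3)) # gradient of domain loss

# Gradient reaching the feature extractor: the label-loss gradient plus
# the sign-flipped (reversed) domain-loss gradient.
total_grad = grad_from_label_head + grl_backward(grad_from_domain_head, lam=0.5)
```

In a full model the same trade-off is often written as a minimax objective: the domain classifier minimizes its loss while the feature extractor maximizes it, with `lam` controlling the strength of the adversarial term.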

Keywords

  • Adaptation models
  • COVID-19
  • Domain Adaptation
  • Feature extraction
  • Lung
  • Medical Image Analysis
  • Open Set Domain Adaptation
  • Radiology
  • Semi-Supervised Learning
  • Testing
  • X-rays

ASJC Scopus subject areas

  • Biotechnology
  • Genetics
  • Applied Mathematics
