COVID-XIX-Net: Deep Learning Empirical Comparison Between X-ray Imaging and POCUS for COVID-19 Detection
Abstract
The novel coronavirus disease (COVID-19) has been spreading rapidly across the world, causing a pandemic unlike any experienced in our modern era. It is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and presents with symptoms such as cough, fever, and shortness of breath. In March 2020, the World Health Organization declared COVID-19 a pandemic; since its discovery, more than 53 million cases have been reported in over 200 countries, with over 1.3 million deaths. With test kits in limited supply worldwide and the disease spreading rapidly, alternative means of detection are needed. The use of X-ray imaging, CT scans, and lung point-of-care ultrasound (POCUS) has facilitated early diagnosis of COVID-19 cases. In this study, we leverage InceptionV3 and ResBlocks to build a deep convolutional neural network model, COVID-XIX-Net, that aids in identifying COVID-19-positive cases by detecting pneumonic patterns in chest X-ray images and ultrasound scans. COVID-XIX-Net is a multi-class classification model that assigns each image to one of three classes: healthy lungs, bacterial pneumonia, or COVID-19-induced pneumonia. The proposed architecture aims to diagnose COVID-19 cases accurately while maintaining a low number of parameters. COVID-XIX-Net is evaluated on a balanced X-ray dataset of 1,011 images and an imbalanced ultrasound dataset of 1,103 images.
After training, cross-validation, and testing, COVID-XIX-Net achieved an accuracy of 99.9% with precision and recall of 0.99 on the X-ray dataset, and an accuracy, precision, and recall of 89.9%, 0.97, and 0.90, respectively, on the ultrasound dataset. These results compare favorably with recent literature, showing promising performance and great potential with only 24.6M parameters. This work can be further developed and trained to assist medical practitioners in diagnosing COVID-19 cases.
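To illustrate the kind of architecture described above, the minimal sketch below combines an InceptionV3 backbone with a residual block and a three-class softmax head (healthy, bacterial pneumonia, COVID-19-induced pneumonia). The abstract does not specify the exact layer configuration, so the block layout, filter counts, input size, and helper names (residual_block, build_covid_xix_net) are assumptions for illustration only and are not guaranteed to reproduce the reported 24.6M-parameter model.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def residual_block(x, filters):
    """Simple ResBlock: two 3x3 convolutions with a skip connection (assumed layout)."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    # Project the shortcut if the channel count changes.
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    y = layers.Add()([shortcut, y])
    return layers.Activation("relu")(y)

def build_covid_xix_net(input_shape=(299, 299, 3), num_classes=3):
    """Hypothetical COVID-XIX-Net-style model: InceptionV3 features + ResBlock + softmax head."""
    backbone = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = residual_block(backbone.output, 256)
    x = layers.GlobalAveragePooling2D()(x)
    # Three output classes: healthy, bacterial pneumonia, COVID-19-induced pneumonia.
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(backbone.input, outputs)

model = build_covid_xix_net()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
```

Such a model would be trained with model.fit on one-hot-encoded labels for the X-ray or ultrasound images; the precision and recall metrics tracked here correspond to the quantities reported in the results above.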