Researchers led by the founding chair of the Biomedical Engineering Department at the University of Houston have reported a new deep neural network architecture capable of providing early diagnosis of systemic sclerosis (SSc), a rare autoimmune disease characterized by hardening or fibrosis of the skin and internal organs. The proposed network runs on a standard laptop with a 2.5 GHz Intel Core i7 processor and can immediately differentiate between images of healthy skin and skin affected by systemic sclerosis.
According to Metin Akay, Chair Professor of Biomedical Engineering, the preliminary study was intended to demonstrate the efficacy of the proposed network architecture for SSc characterization. The work has been published in the IEEE Open Journal of Engineering in Medicine and Biology. The researchers believe the architecture can easily be implemented in a clinical or hospital setting, serving as a simple, inexpensive, and accurate screening tool for the disease.
For patients with SSc, early diagnosis is critical. Multiple studies have shown that organ involvement can occur far earlier than expected in the disease's early phase. Yet early diagnosis and determining the extent of disease progression remain a significant challenge, even at advanced expert centers, resulting in delays in therapy and treatment.
In artificial intelligence, deep learning organizes algorithms into layers, forming an artificial neural network that can make its own intelligent decisions. To speed up learning, the network was initialized with the parameters of MobileNetV2, a mobile vision architecture pre-trained on the ImageNet dataset of roughly 1.4 million images. The network scans the training images, learns from them, and then decides whether a new image is normal or shows an early or late stage of the disease.
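The core idea of this pre-training strategy, often called transfer learning, is to keep a previously trained feature extractor fixed and train only a small classifier on top of it. The following is a minimal NumPy sketch of that idea, not the authors' implementation: random frozen weights stand in for MobileNetV2's ImageNet-learned filters, and all data is synthetic rather than skin images.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(inputs, W_frozen):
    """Frozen "backbone": project raw inputs to feature vectors (ReLU)."""
    return np.maximum(inputs @ W_frozen, 0.0)

# Synthetic stand-in data: 200 samples of 64 "pixels", two classes
# separated by a mean shift (the real study used labeled skin images).
n, d, f = 200, 64, 16
X = rng.normal(size=(n, d))
y = (rng.random(n) < 0.5).astype(float)
X[y == 1] += 2.0                       # make the two classes separable

W_frozen = rng.normal(size=(d, f))     # never updated ("pre-trained")
feats = extract_features(X, W_frozen)
feats = np.hstack([feats, np.ones((n, 1))])  # bias column

# Train only the classifier head: logistic regression by gradient descent.
w = np.zeros(f + 1)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))
    w -= 0.1 * feats.T @ (p - y) / n

accuracy = float(((feats @ w > 0).astype(float) == y).mean())
```

Because only the small head is trained while the backbone stays fixed, far fewer labeled examples are needed, which is exactly why pre-training on ImageNet accelerates learning on a modest medical dataset.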
Convolutional neural networks (CNNs) are the most commonly used deep learning networks in engineering, medicine, and biology, but their success in biomedical applications has been limited by the small size of available training sets and the size of the networks themselves. To overcome these challenges, the researchers used UNet, a modified CNN architecture, with added layers.
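To ground the terminology: the building block of any CNN, including UNet, is the 2-D convolution, which slides a small filter over an image and takes a weighted sum at each position. A bare-bones illustration in NumPy (for exposition only; this is not the authors' code):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and take a
    weighted sum at each position. Stacks of such learned filters,
    interleaved with nonlinearities, form a CNN."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to a tiny image that is dark on the
# left and bright on the right; the filter responds only at the boundary.
image = np.zeros((5, 5))
image[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]])

response = conv2d(image, edge_kernel)  # nonzero only at column 2
```

In a trained network the filter weights are learned from data rather than hand-set, so early layers end up detecting edges and textures while deeper layers respond to disease-relevant patterns.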
The results showed that the proposed deep learning architecture outperforms conventional CNNs for the classification of SSc images. After fine-tuning, the network reached 100% accuracy on the training image set, 96.8% on the validation set, and 95.2% on the test set, and training took less than five hours. The work may inspire the research community to explore simpler and less costly ways of diagnosing diseases with AI.
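Reporting three separate accuracies follows standard evaluation practice: the model is fit on the training set, tuned on the validation set, and its final accuracy is reported on a held-out test set it never saw. A small sketch of that bookkeeping with synthetic labels (all numbers here are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

def accuracy(y_true, y_pred):
    return float((y_true == y_pred).mean())

# Synthetic labels and predictions standing in for model output over
# three assumed classes (normal / early-stage / late-stage).
n = 1000
labels = rng.integers(0, 3, size=n)
preds = labels.copy()
flip = rng.random(n) < 0.05            # corrupt ~5% to mimic model error
preds[flip] = (preds[flip] + 1) % 3

# Standard split: train to fit, validation to tune, test to report.
train_idx, val_idx, test_idx = np.split(np.arange(n), [600, 800])
acc_val = accuracy(labels[val_idx], preds[val_idx])
acc_test = accuracy(labels[test_idx], preds[test_idx])
```

The gap the article reports between training accuracy (100%) and test accuracy (95.2%) is typical: performance on held-out images is the honest measure of how the screening tool would behave on new patients.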