An IoMT-Based Federated and Deep Transfer Learning Approach to the Detection of Diverse Chest Diseases Using Chest X-Rays
  • Barkha Kakkar1, Prashant Johri1, Yogesh Kumar2, Hyunwoo Park3, Youngdoo Son3, and Jana Shafi4,*

Human-centric Computing and Information Sciences volume 12, Article number: 24 (2022)
https://doi.org/10.22967/HCIS.2022.12.024

Abstract

Since chest illnesses are so frequent these days, it is critical to identify and diagnose them effectively. As such, this study proposes a model designed to accurately predict chest disorders by analyzing multiple chest X-ray images from the National Institutes of Health (NIH) chest X-ray dataset, which consists of 112,120 images. The study used images from 30,805 individuals covering 14 different types of chest disorder, including atelectasis, consolidation, infiltration, and pneumothorax, as well as a class called “No findings” for cases in which no ailment was diagnosed. Six distinct transfer-learning approaches, namely, VGG-16, MobileNet V2, ResNet-50, DenseNet-161, Inception V3, and VGG-19, were used in deep learning and federated learning environments to predict the accuracy rate of detecting chest disorders. The VGG-16 model showed the best precision at 0.81, with a recall rate of 0.90. As a result, the F1 score of VGG-16 is 0.85, which was higher than the F1 scores computed by the other transfer learning approaches. VGG-19 obtained a maximum accuracy rate of 97.71% via federated transfer learning. According to the classification report, the VGG-16 model is the best transfer-learning model for correctly detecting chest illness.


Keywords

Deep Learning, Chest Diseases, Federated Learning, Disease Prediction, X-Ray Dataset


Introduction

The chest or thorax is situated between the neck and abdomen, and is one of the three main parts of the human body. It comprises the heart, lungs, muscles, and many other regional structures. Some diseases or infections affect the chest area, such as atelectasis, consolidation, and pleural thickening, among others. X-rays of the chest are mainly taken to detect such infections [1]. Such X-rays may show cavities, infiltrates, nodules, and the like, thereby helping to diagnose chest disease. Chest diseases such as pneumonia, asthma, and lung diseases are serious health disorders that can have grave negative effects on human health. Detecting a chest infection is a laborious task; consequently, researchers have proposed various kinds of systems for this purpose [2], and many techniques and algorithms have been applied to detect diverse types of chest infections. Previously, machine learning (ML) was widely used in the medical field to detect diseases. ML techniques have a strong classification capability, classifying new cases or observations based on previous ones. The type of disease and its severity are measured using various ML techniques such as naive Bayes, decision trees, k-nearest neighbor, and support vector machine (SVM), among others [3]. Deep learning represents the next major advance in the detection of diseases. Deep learning, a subset of ML, can be used to train artificial intelligence machines to predict outputs and extract data patterns using an artificial neural network. It is being applied in diverse fields such as defense, security, voice recognition, face recognition, disease detection, and so forth [4].
Deep learning focuses mainly on the modifications, feature selection, and resizing performed to pre-process the inputs [5, 6]. Deep learning is very popular in medical diagnosis, as it extracts valuable characteristics from input images [7]. Another technique explored in the medical diagnosis of diseases is transfer learning, or fine-tuning. It is also a type of machine learning that reuses a model previously developed for certain tasks. The reused model is already pre-trained to classify certain features that help to predict the disease easily and quickly. Such models need not be trained again, as they are already fine-tuned for medical applications. Their uses in the medical field include detecting planes in ultrasounds and classifying lung diseases and types of chest infections [8, 9]. Federated learning is a concept presented by Google to construct ML-based models that can work on multiple devices while adhering to the privacy of data [10, 11]. It is an upgraded form of artificial intelligence based on the central idea of ensuring users’ data privacy. A globally shared model is used on each device so that it is trained collectively, thus providing proper security. As stated in [12], there are three types of federated learning, as follows: horizontal or sample-based federated learning, in which all devices have the same features of data; vertical federated learning, which involves different data comprising varying features to train a model collectively; and federated transfer learning (FTL), which solves a new problem by utilizing the transfer learning features obtained by applying an already trained model to another task. On the other hand, wearable gadgets have grown in popularity recently, with a wide range of applications in health monitoring systems, resulting in the growth of the “Internet of Medical Things” (IoMT). The IoMT has a critical role to play in lowering death rates by detecting chest illnesses early [13, 14]. It comprises various items of medical equipment connected to a healthcare provider’s computer system via the Internet, which are capable of producing, storing, analyzing, and disseminating health data [15]. Wearables, remote patient monitoring, sensor-enabled beds, infusion pumps, and health tracking devices are all IoMT items. The purpose of the IoMT is to improve the satisfaction of both patients and healthcare providers, and to ensure the quality of treatment.
The model proposed in this study employs deep learning and FTL techniques to detect chest diseases through X-ray images. The first step performed by the system is data pre-processing, which involves loading and cleaning the data and converting them into a more intelligible form. This step is followed by exploratory data analysis, which is used to categorize and study the data based on various attributes, such as gender, age, and single or multiple diseases. Furthermore, several features, such as width, area, perimeter, epsilon, height, solidity, etc., are extracted for each disease. Training and testing of the data are followed by data augmentation in order to make the images more transparent. Finally, various deep and federated transfer-learning models, such as VGG-16, MobileNet V2, ResNet-50, DenseNet-161, Inception V3, and VGG-19, make disease predictions. Keras with TensorFlow is used for deep learning, while federated learning is implemented on PyTorch. For the analysis conducted in this study, a dataset named the “National Institutes of Health (NIH) X-ray dataset” [16], comprising 112,120 images of various chest infections, was used. It provides disease labels for approximately 30,805 patients across a total of 14 classes of diseases, such as atelectasis, consolidation, infiltration, pneumothorax, etc., plus a class named “No findings” for cases where a disease remained undetected. The rest of the paper is arranged as follows: Section 2 discusses previous literature in the relevant research field; Section 3 describes the dataset, techniques, and libraries used; Section 4 presents the applied models and the results; and, finally, Section 5 concludes and discusses the future scope of the work.


Related Work

Several recent studies have focused on the identification of various forms of chest infection. Each of these works employs a distinctive methodology and relies increasingly on different learning strategies; some of them are reviewed in this section. Deep learning is a type of machine learning technology designed to mimic the behavior of the human brain. Because they constantly evaluate data, these algorithms attempt to emulate and consider aspects of human behavior [6]. Deep learning algorithms make use of a multi-layered architecture composed of several levels. Recently, there has been a significant increase in the use of deep learning to forecast many types of infections and disorders. Deep learning combined with big data is being used to forecast infectious illnesses, according to a study by Chae et al. [17]. Deep neural network (DNN) and long short-term memory (LSTM) models, the autoregressive integrated moving average (ARIMA), and the ordinary least squares (OLS) approach were applied to various forms of data, including weather data, Twitter data, and non-clinical search data, in order to forecast illnesses. The results showed that the DNN models were superior in terms of average best performance, but the LSTM model was more accurate in predicting when an infectious illness was spreading. The DNN and LSTM models also outperformed the ARIMA model in terms of accuracy. Abuhamdah et al. [3] presented a hybrid predictive model comprising a convolutional neural network using WekaDeepLearning. The hybrid model was used to detect pneumonia and other lung diseases using chest X-ray or computed tomography (CT) images. In a nutshell, their study experimentally demonstrated the capability and effectiveness of the classifier in detecting positive cases. Kishor et al. [18] worked on improving the quality of service provided by a heterogeneous network using a “reinforcement learning-based multimedia data segregation” (RLMDS) algorithm and a “computing QoS in medical information system using fuzzy” (CQMISF) algorithm in fog computing. The authors’ main aim was to use the proposed algorithms to classify data and transfer the classified high-risk data to end-users by selecting the optimal gateway. Bhattacharyya et al. [19] presented a study aimed at distinguishing chest-related diseases such as pneumonia and COVID-19 (coronavirus disease 2019) from normal X-ray images. The authors used a conditional generative adversarial network to segment the images and deep learning models to extract discriminatory features. Convolutional neural networks (CNNs), back-propagation neural networks (BPNNs) with supervised learning, and competitive neural networks with unsupervised learning are all types of neural networks used to diagnose pulmonary disorders [20]. It has been demonstrated that the CNN can perform classification better than other models thanks to its extensive deep structure, which can extract features at various levels of abstraction and complexity; in addition, this model has a high rate of feature identification.
Transfer learning is another technology that is becoming increasingly popular in the diagnosis of illnesses [21]. The theory behind transfer learning holds that a model previously established for a specific job may be used as a starting point for another. To give an example, the study by Hon and Khan [10] resulted in the development of a categorization system for Alzheimer’s disease. Models such as Inception V4 and VGG-16 were deployed on big datasets using pre-trained weights, and the image entropy method was used to choose the most informative images for training. This was intended to alleviate the limitation of having to train existing algorithms on many images. VGG-16 with transfer learning and Inception V4 produced accuracy rates of 92.3% and 96.25%, respectively, demonstrating that only a small number of training pictures is required to obtain correct results. Muniasamy et al. [9] proposed a deep learning-based model to diagnose chest disease from information provided in the form of images as well as medical reports. The main purpose of this study was that the model should be able to automatically detect chest diseases from various chest X-ray-based images using various class labels. Kishor et al. [22] used six ML algorithms, namely, decision tree, SVM, naive Bayes, random forest, artificial neural network, and k-nearest neighbor, to detect nine severe diseases including heart disease, diabetes, breast cancer, hepatitis, liver disorder, dermatology, surgery data, etc. Among these algorithms, the random forest classifier achieved a maximum accuracy rate of 97.62%, sensitivity of 99.67%, specificity of 97.81%, and AUC (area under the curve) of 99.32% across the different diseases. In the study by Vogado et al. [23], transfer learning was utilized in CNNs and SVMs to create a system for diagnosing leukemia. With this method, an input picture is sent to the CNN for feature extraction; the gain ratio is utilized to select features; and, lastly, an SVM is employed as the classifier. A significant distinction between the suggested technique and current state-of-the-art methods is that the input images are used directly without any pre-processing, so segmentation is not necessary. Chest infections can manifest themselves in a variety of ways, including consolidation, pleural thickening, and cardiomegaly, among others. Table 1 lists some extant works that are relevant to these topics [24–36]. Deep transfer learning (DTL) has proven to be extremely effective in detecting a wide range of illnesses and disorders. Pathak et al. [37] presented a DTL-based approach to categorizing COVID-19, among other illnesses. A chest CT dataset is fed into a ResNet-50 network, which performs feature extraction on the dataset. To forecast whether COVID-19 is positive or negative in the input sample, transfer learning uses these characteristics as parameters in a deep CNN. The model’s testing accuracy is 93.01%, making it a viable alternative to a COVID-19 testing kit in some situations. A DTL-based model for predicting COVID-19 in chest X-ray images was also developed by Minaee et al. [38] in a similar fashion. Transfer learning is used to train four CNN models, namely, ResNet-18, ResNet-50, SqueezeNet, and DenseNet-121, of which ResNet-18 is the most widely used.
Data augmentation techniques such as flipping, rotation, and other image manipulations alter pictures to increase the number of samples. Fine-tuning is performed on the last layer of the model pre-trained on ImageNet. According to the data, the sensitivity rate is 98%, and the average specificity rate is 90%. A deep CNN with transfer learning was also proposed by Rahman et al. [39] to detect pneumonia. Four pre-trained deep CNN models, i.e., AlexNet, ResNet-18, DenseNet-201, and SqueezeNet, were used in this study as follows: initially, the system takes X-ray pictures from the X-ray machine and stores them, after which they are sent to the ML block, which pre-processes (resizing and normalization) and augments the data (rotation, scaling, translation). The output of the pre-trained models is characterized as normal, bacterial pneumonia, or viral pneumonia. DenseNet-201 exceeded all the other deep CNN networks in terms of performance. Saha and Rahman [5] applied the convolutional neural network method to predict the presence of pneumonia using chest X-ray images. Their model showed an accuracy rate of 89%, which was better than the existing deep learning-based clinical image classification algorithms.

In the survey of previous papers, it was observed that the categorization of chest-based illness was performed using limited or restricted datasets. In contrast, this study obtained a large dataset composed of 112,120 chest X-ray images [16] in order to classify each chest-related disease.

Compared with state-of-the-art research efforts, this study integrated various pre-trained transfer learning models with deep and federated learning mechanisms in order to make a fair comparison and obtain better performance. Hence, for the fourteen distinct types of chest X-ray disorders, classification was performed using the following six transfer learning models: VGG-16, MobileNet V2, ResNet-50, DenseNet-161, Inception V3, and VGG-19.



Table 1. Related work for different types of chest infections
Study | Chest infection type | Dataset | Approach used | Results
Liu et al. [24] | Atelectasis | 130 patients of Beijing Military General Hospital | Deep neural network | Sensitivity: ultrasound = 100%, chest X-ray = 75%
Ullmann et al. [25] | Atelectasis | 40 children affected by neuromuscular disease | Neural network | LUS: specificity = 82%, sensitivity = 57%, positive predictive value = 80%, negative predictive value = 61%
Behzadi-Khormouji et al. [26] | Consolidation | Pediatric chest X-ray dataset and ImageNet | VGG-16, DenseNet-121, ChestNet, PyramidCNN | Accuracy = 94.67%; the ChestNet2 model outperformed the other five models
Na'am et al. [27] | Infiltration | X-ray images of infants treated at Central Public Hospital (RSUP), Indonesia | Morphological operations, edge detection, and sharpening of edges | Output images showed clearer edges and easily recognizable information
Gooßen et al. [28] | Pneumothorax | 1,003 chest X-ray images | CNN, multiple-instance learning (MIL), fully convolutional networks (FCN) | AUC: CNN = 0.96, MIL = 0.93, FCN = 0.92
Chan et al. [29] | Pneumothorax | 32 pneumothorax and 10 normal chest radiographs from Chung Shan Medical University Hospital, Taiwan | Local binary pattern (LBP) and support vector machines (SVMs) | Average accuracy = 82.2%
Campo et al. [30] | Emphysema | 7,377 images | 11-layer CNN using percentage of low-attenuation lung areas (LAA%) | Mean error = 3.96; AUC accuracy = 90.73%; mean sensitivity = 85.68%
Jain et al. [31] | Pneumonia | Chest X-ray images dataset: 5,216 training and 624 testing images | Six models: the first and second consisting of two and three convolutional layers, respectively, plus VGG-16, VGG-19, ResNet-50, and Inception V3 | Validation accuracy: first model = 85.26%, second model = 92.31%, VGG-16 = 87.28%, VGG-19 = 88.46%, ResNet-50 = 77.56%, Inception V3 = 70.99%
Hashmi et al. [32] | Pneumonia | 700 testing set images, pneumonia dataset | ResNet-18, DenseNet-121, Inception V3 | Validation accuracy = 98.43%
Saito et al. [33] | Pleural thickening | 28,727 chest X-rays of students and employees at the University of Tokyo | Two-tailed Student t-tests, the chi-square test, and binary logistic regression | More than 90% of cases were defined as pulmonary apical cap; frequency increased with age; more prevalent in males and smokers; persons with low body weight and tall height were more prone
Alghamdi et al. [34] | Cardiomegaly | 59 patients, Abdul-Aziz Hospital, Jeddah, Saudi Arabia | Cardio-thoracic ratio (CTR) | 21 patients with cardiomegaly; patients aged 37–58 were the most affected; more prevalent among males
Candemir et al. [35] | Cardiomegaly | NLM-Indiana Collection and NIH-CXR dataset | Pre-trained models: fine-tuned and CXR-based | Accuracy = 89.86%; sensitivity = 88.81%; specificity = 90.91%
Liang et al. [36] | Nodule mass | 100 patients, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan | Heat map, abnormal probability, nodule probability, mass probability | Detection performance with 76.6% sensitivity and 88.68% specificity


Methodology

This work uses a novel methodology in which transfer-learning models are combined with deep and federated learning to classify chest diseases. Initially, the chest data were collected in the form of images from the NIH chest X-rays, which were then pre-processed to clean the data, match them with the .csv dataset, identify the NaN values, and encode the data. Further, the pre-processed data were visualized and summarized graphically to assist with feature extraction, and contour features such as area, perimeter, aspect ratio, solidity, etc., were extracted.
Later, the data were split into training and testing datasets (75% and 25%, respectively), which were then further augmented using various techniques such as flipping, rotation, etc. Finally, pre-trained models such as VGG-16, MobileNet V2, ResNet-50, DenseNet-161, Inception V3, and VGG-19 were used to perform classification, which was further evaluated using the precision rate, recall rate, AUC, and F1 score, as shown in Fig. 1.

Fig. 1. Proposed framework for the detection of chest diseases.


Dataset Description
Chest X-rays are one of the most frequent and cost-effective types of medical-imaging examination available. However, clinical diagnosis from a chest X-ray can be challenging, and is sometimes more difficult than diagnosis by chest-based CT imaging. There is also a lack of publicly available chest X-ray datasets: previously, the most popular publicly available source of chest X-ray images comprised only 4,413 images. Hence, this study used a dataset named the “NIH chest X-rays,” which comprises 112,120 images along with the disease labels of 30,805 people. These labels were created using natural language processing (NLP) to derive high-quality information from the classifications of diseases [16].
The dataset consists of 112,120 images of size 1024×1024. Its image folders, images_001 to images_012, hold approximately 10,000 images each, with the first containing 4,999 images and the last 7,121. A bounding-box file (BBox_List_2017.csv) comprises the coordinates of the bounding boxes with the following attributes: Image Index, Class label, Bbox_x, Bbox_y, Bbox_w, and Bbox_h [16]. A metadata file (Data_Entry_2017.csv) holds the class labels and patient information: Image Index (the file name), Class label (the type of disease), Follow-up, Patient ID, Age, Gender, X-ray orientation, image width and height, OriginalImagePixelSpacing_x, and OriginalImagePixelSpacing_y. The chest X-ray images fall into 15 classes: 14 disease classes plus a “No findings” class (31,167 images). An image can be labeled either “No findings” or with one or more disease classes, such as atelectasis, consolidation, infiltration, pneumothorax, edema, emphysema, fibrosis, effusion, pneumonia, pleural thickening, cardiomegaly, nodule mass, and hernia. Some of these chest X-ray images are shown in Fig. 2. There are also certain limitations: the image labels are NLP-extracted, so some labels may be erroneous, although the NLP labeling accuracy is estimated to be above 90%; moreover, the number of disease-region bounding boxes is very limited, and the chest X-ray radiology reports are not expected to be publicly shared. Parties that use this public dataset are encouraged to share their updated image labels and/or new bounding boxes in their own studies later, perhaps through manual annotation.
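To make the dataset structure concrete, the following minimal pandas sketch loads the metadata and expands the multi-label disease column; the file and column names are assumptions based on the public NIH release.

```python
import pandas as pd

# Load the metadata file (file and column names assume the public NIH release).
df = pd.read_csv('Data_Entry_2017.csv')

# 'Finding Labels' holds one or more diseases separated by '|',
# e.g. "Cardiomegaly|Effusion"; expand into one binary column per class.
labels = df['Finding Labels'].str.get_dummies(sep='|')

print(df.shape)                                   # one row per image
print(labels.sum().sort_values(ascending=False))  # image count per class
```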

Fig. 2. Chest X-ray images.


Platforms Used
The Keras platform runs the deep learning models, whereas the PyTorch platform is used for the federated learning-based transfer learning techniques. Keras is a high-level API with a readable and concise architecture that provides a framework for rapidly designing, evaluating, and training new models. It also has multiple back-end support [38]. As for TensorFlow, it is both a high- and low-level API that performs object detection functionality at very high speed. The system uses its flexible tools, libraries, and resources to facilitate scalable production, deployment, and multiple abstraction levels. Algorithms are represented as static computational graphs, which makes them suitable for dataflow programming [40, 41]. Meanwhile, PyTorch is a low-level framework integrated with Python, with good debugging capabilities. Dynamic computation graphs are utilized to provide high speed and a short training duration. A reference-counting scheme is used to keep track of the uses of every tensor so as to free up the memory when the count reaches zero [18].
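As a rough illustration of how the federated setting combines locally trained copies of a model, the PyTorch sketch below averages client weights into a global model in the style of federated averaging. This is a minimal sketch of the general mechanism, not the authors' implementation; function and variable names are illustrative.

```python
import torch

def federated_average(global_model, client_models):
    """Average the weights of locally trained client models into the
    global model (a FedAvg-style aggregation step)."""
    global_state = global_model.state_dict()
    for key in global_state:
        # Stack the corresponding tensor from every client and average.
        global_state[key] = torch.stack(
            [cm.state_dict()[key].float() for cm in client_models]
        ).mean(dim=0)
    global_model.load_state_dict(global_state)
    # Each client starts the next round from the updated global weights.
    for cm in client_models:
        cm.load_state_dict(global_state)
```

In each communication round, clients first train locally on their own data and only then share weights for this aggregation step, so the raw images never leave the client.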

Library Used
This study used scikit-learn (sklearn), one of the most useful and robust Python libraries for ML and statistical modeling. Its train_test_split utility is used to split arrays or matrices into random training and testing subsets [42].
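A minimal usage example of this utility, with toy stand-in arrays, matching the 75%/25% split described later in the methodology:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-ins: 1,000 sample indices and 15 binary disease labels each.
X = np.arange(1000).reshape(-1, 1)
y = np.random.randint(0, 2, size=(1000, 15))

# 75% training / 25% testing, shuffled reproducibly.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)
print(len(X_train), len(X_test))  # 750 250
```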

Data Pre-processing
The initial step consists of performing pre-processing operations on the chest X-ray images, with the aim of improving their quality. The dataset is first loaded into the system, and each image is then matched with its corresponding CSV row using the image index. Next, NaN or missing values are identified and removed from the dataset by replacing them with a calculated parameter, such as the mean or mode. A column can also be dropped if it remains unused. Finally, the data are encoded to remove any categorical variables. Using data visualization techniques, the proposed system performs exploratory data analysis (EDA) to analyze, study, and examine the dataset in order to determine its main features [19]. Initially, the system studies the occurrence of chest diseases by gender. A rough sketch of these cleaning steps follows.
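In the sketch below, the folder layout, file name, and column names are assumptions based on the public NIH release, and the dropped column is purely illustrative.

```python
import glob
import os
import pandas as pd

df = pd.read_csv('Data_Entry_2017.csv')

# Match each CSV row to its image file via the Image Index column
# (assumes the images_001 ... images_012 folder layout of the release).
paths = {os.path.basename(p): p
         for p in glob.glob(os.path.join('images_*', 'images', '*.png'))}
df['path'] = df['Image Index'].map(paths)

# Inspect missing values, drop rows with no matched image, and drop
# an unused column (the column name here is hypothetical).
print(df.isna().sum())
df = df.dropna(subset=['path'])
df = df.drop(columns=['Unnamed: 11'], errors='ignore')

# Encode a categorical variable, e.g. patient gender.
df['Patient Gender'] = df['Patient Gender'].map({'M': 0, 'F': 1})
```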

Feature Extraction
The proposed system performs feature extraction to reduce the number of features in the dataset by designing new features from the existing ones while discarding the originals. These newly designed features retain the information of the original feature set [43]. Since the .csv dataset provides only height and width values for the images, several other contour features are extracted. These features further assist in performing classification more accurately. The proposed work computes several contour features for each category of chest disease, as shown in Fig. 3. One such feature is the area, the measure of the space inside a shape. Likewise, the perimeter measures the length of the contour in the input image. The aspect ratio is another contour feature, giving the ratio of the width to the height of the image. Solidity is defined as the ratio of the contour area to the convex hull area. Mean intensity is the mean intensity of all the pixels in the image; similarly, the maximum and minimum pixel values and their respective locations are recorded. Moreover, the extent of the image is calculated as the ratio of the area of the shape to the area of the bounding box. Using these contour values, extreme points were located to form a continuous curve on the picture so that it could be cropped to acquire the desired feature. The particular characteristic derived from numerous images of chest illnesses was then used for data augmentation.
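These contour features can be computed, for instance, with OpenCV. The sketch below assumes a simple fixed-threshold segmentation of a single grayscale X-ray, which is illustrative rather than the authors' exact pipeline.

```python
import cv2

# Segment the image with a simple fixed threshold (illustrative choice)
# and keep the largest contour.
img = cv2.imread('example_xray.png', cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)

area = cv2.contourArea(cnt)                   # space inside the shape
perimeter = cv2.arcLength(cnt, True)          # length of the closed contour
epsilon = 0.1 * perimeter                     # tolerance for contour approximation
x, y, w, h = cv2.boundingRect(cnt)
aspect_ratio = w / h                          # width-to-height ratio
extent = area / (w * h)                       # shape area / bounding-box area
hull = cv2.convexHull(cnt)
solidity = area / cv2.contourArea(hull)       # contour area / convex hull area
mean_intensity = cv2.mean(img, mask=mask)[0]  # mean pixel value in the region
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(img, mask=mask)
```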

Fig. 3. Feature extraction.


Data Augmentation
Several data augmentation techniques were applied in the proposed system to make the data suitable for feeding into the network. The proposed system initially set samplewise_center to true, which sets every sample mean to 0. Next, samplewise_std_normalization was set to true, which divides each input by its standard deviation. Also, horizontal_flip was assigned the value true, while vertical_flip was assigned false, so that inputs are randomly flipped only in the horizontal direction. Further, height_shift_range was set to 0.05 and width_shift_range to 0.1. The next step consisted of assigning rotation_range, the degree range for random rotations, to 5, followed by shear_range, the shear intensity, set to 0.1. The fill_mode was assigned the value “reflect,” which fills the points outside the boundary accordingly. Finally, zoom_range, defined as the range for random zoom, was set to 0.15. These augmentations were applied to render the chest disease clearer and more diagnosable, as shown in Fig. 4.
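Expressed with Keras's ImageDataGenerator (consistent with the Keras pipeline used for the deep learning experiments), the settings above look roughly as follows:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings as described in the text.
datagen = ImageDataGenerator(
    samplewise_center=True,             # set each sample mean to 0
    samplewise_std_normalization=True,  # divide each input by its std
    horizontal_flip=True,
    vertical_flip=False,
    height_shift_range=0.05,
    width_shift_range=0.1,
    rotation_range=5,                   # random rotations up to 5 degrees
    shear_range=0.1,
    fill_mode='reflect',                # fill points outside the boundary
    zoom_range=0.15,
)
```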

Fig. 4. Original images with the corresponding augmented images.


Model Selection

The proposed system utilizes transfer learning to apply already pre-trained models, which helps to reduce the training time. Such models can be used as a starting point for solving a new problem: a pre-trained model can be directly embedded into an application or used to extract features for classification. Several models perform well in the image classification process; those used here are described in the following subsections.

VGG-16
Designed by Oxford’s Visual Geometry Group (VGG), VGG-16 is a deep CNN comprising 16 layers [44]. It is trained on the “ImageNet” database and takes an input image of size 224×224, which passes through a stack of convolutional layers. Fully connected layers are employed at the end to perform the classification step.
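A minimal Keras transfer-learning sketch in this spirit: the ImageNet-trained convolutional base is reused and a new classification head is attached. The head design here (15 sigmoid outputs for the dataset's multi-label classes) is an assumption for illustration.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Reuse the pre-trained convolutional base without its ImageNet classifier.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(15, activation='sigmoid'),  # one output per class
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
```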

MobileNet V2
MobileNet V2 is a CNN framework that is an upgraded form of MobileNet V1, which was designed for mobile devices. It is based on a new inverted structure in which the residual links are located between the bottleneck layers, and it uses lightweight depth-wise convolutions to extract features. It is 51 layers deep, with 32 fully convolutional layers and 19 residual bottleneck ones [45].

ResNet-50
ResNet-50 stands for residual network, and it is 50 layers deep. ResNet networks learn residual features with respect to earlier layers through skip connections, helping to improve accuracy. This model comprises five stages, each having a convolution block and an identity block, with each block containing three convolution layers [46].

DenseNet-161
DenseNet stands for “dense convolutional network,” which showed the best accuracy rates in classifying the CIFAR-10 and ImageNet image datasets. It is designed so that each layer is connected to the later layers using dense connections, which assists with sharing the extracted features and other information [47]. DenseNet-161 comprises four dense blocks that connect each layer to every other layer; thus, all preceding layers’ feature maps are fed to every succeeding layer via the connections between the blocks [48].

Inception V3
Inception V3 is a 48-layer deep network designed by Google. In the Inception architecture, the fully connected layers are discarded; instead, a pooling layer averages the feature maps and connects them to the softmax layer to perform classification. The use of fewer parameters results in less overfitting, making the network more efficient [49].

VGG-19
As its name implies, VGG-19 is a 19-layer deep CNN architecture. It consists of five blocks, each containing a few convolution layers followed by a max-pooling layer. The network takes a fixed-size (224×224) RGB picture as input, i.e., a matrix of size (224, 224, 3) [46]. The only pre-processing used was to subtract the mean RGB value, computed over the whole training set, from each pixel. Kernels of size 3×3 with a stride of 1 pixel were used to cover the entire picture. Spatial padding was applied to maintain the image’s spatial resolution, and max pooling was performed over 2×2-pixel windows with stride 2. This was followed by the rectified linear unit (ReLU) to incorporate non-linearity and improve classification and computing time; earlier models employed tanh or sigmoid functions, and this model proved to be far superior to them. Three fully connected layers were built, the first two of size 4,096, followed by a layer with 1,000 channels for 1,000-way ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) classification, and finally a softmax function [50].
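The description above maps directly onto a compact builder. The following sketch reconstructs the plain VGG-19 topology (16 convolutional layers in five blocks plus three fully connected layers) for illustration only; in practice the pre-trained version from keras.applications would be used.

```python
from tensorflow.keras import layers, models

def build_vgg19(num_classes=1000):
    """Plain VGG-19: five conv blocks of 2, 2, 4, 4, and 4 layers (3x3
    kernels, stride 1, 'same' padding), each followed by 2x2 max pooling
    with stride 2, then two 4,096-unit dense layers and a softmax head."""
    inputs = layers.Input(shape=(224, 224, 3))
    x = inputs
    for filters, reps in [(64, 2), (128, 2), (256, 4), (512, 4), (512, 4)]:
        for _ in range(reps):
            x = layers.Conv2D(filters, 3, strides=1, padding='same',
                              activation='relu')(x)
        x = layers.MaxPooling2D(pool_size=2, strides=2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(4096, activation='relu')(x)
    x = layers.Dense(4096, activation='relu')(x)
    outputs = layers.Dense(num_classes, activation='softmax')(x)
    return models.Model(inputs, outputs)

model = build_vgg19()
model.summary()
```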

Results and Discussion
This section provides a detailed analysis of the proposed disease prediction system and evaluates it using several metrics and classification reports.

Evaluation metrics
The performance of the proposed system is evaluated through several metrics [13, 49–51], defined below; a short sketch computing these quantities follows the definitions.

Accuracy: It is measured as the ratio of correct predictions to the total number of instances assessed. It is mathematically defined in Equation (1):

$Accuracy = (TP+TN)/(TP+FP+TN+FN)$ (1)

Loss: The difference between the ground truth and the predicted value is defined as Loss.
RMSE (root mean square error): It measures the error of a model’s predictions. Mathematically, it is defined by Equation (2):

$RMSE = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \hat{x}_i)^2}{N}}$ (2)

Here, $i$ is the index variable, $N$ is the number of non-missing data points, $x_i$ denotes the actual observations of the time series, and $\hat{x}_i$ the estimated time series.
Precision: It calculates the proportion of predicted positive instances that truly belong to the positive class [51]. It can be calculated as shown in Equation (3):

$Precision = TP/(TP+FP)$ (3)

Recall: It can be defined as the ratio of actual positive instances that are accurately predicted as positive. Mathematically, it is calculated using Equation (4):

$Recall = TP/(TP+FN)$ (4)

F1 score: It is the harmonic mean of precision and recall. A high F1 score indicates that the model has a good classifying ability [52]. It can be calculated using Equation (5):

$F1 = (2 \times precision \times recall)/(precision + recall)$ (5)

AUC: The area under the curve is defined as the definite integral of the curve, which can be written as the limit of a sum, as shown in Equation (6):

$A = \lim\limits_{n→∞} \displaystyle\sum_{i=1}^{n} f(x_i)\,\Delta x$ (6)

Here, TP stands for true positive, TN stands for true negative, FP stands for false positive, and FN stands for false negative.
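For reference, all of these quantities can be computed with scikit-learn; the sketch below uses a toy binary example with illustrative values only.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, mean_squared_error)

# Toy ground truth, predicted probabilities, and thresholded labels.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.8, 0.6, 0.3])
y_pred = (y_prob >= 0.5).astype(int)

print('Accuracy :', accuracy_score(y_true, y_pred))
print('Precision:', precision_score(y_true, y_pred))
print('Recall   :', recall_score(y_true, y_pred))
print('F1 score :', f1_score(y_true, y_pred))
print('AUC      :', roc_auc_score(y_true, y_prob))
print('RMSE     :', np.sqrt(mean_squared_error(y_true, y_prob)))
```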

Performance evaluation of different models
Deep learning: Table 2 depicts the average best values of the transfer learning models for the respective evaluation parameters. The parameters calculated are accuracy, loss, and RMSE for both the training and validation phases. The proposed system aims to diagnose a chest infection in a patient through observations of a chest X-ray. It can be seen that, using deep learning techniques, DenseNet-161 obtained the highest accuracy at 88.01%, while Inception V3 showed the lowest loss and RMSE at 0.324 and 0.569, respectively, during the training phase. Similarly, during the validation phase, MobileNet V2 obtained the highest accuracy at 88.09%, while Inception V3 showed the lowest loss and RMSE at 0.321 and 0.566, respectively.

Table 2. Evaluation of deep learning algorithms
Model | Training accuracy (%) | Training loss | Training RMSE | Validation accuracy (%) | Validation loss | Validation RMSE
VGG-16 | 87.33 | 0.447 | 0.668 | 86.43 | 0.72 | 0.848
MobileNet V2 | 85.46 | 0.468 | 0.684 | 88.09 | 0.582 | 0.762
ResNet-50 | 87.46 | 0.41 | 0.64 | 87.41 | 0.355 | 0.595
DenseNet-161 | 88.01 | 0.561 | 0.748 | 88 | 0.456 | 0.675
Inception V3 | 87.98 | 0.324 | 0.569 | 87.94 | 0.321 | 0.566
Values in the bold font are the best values obtained by the transfer learning models.

Federated learning: Table 3 depicts the average best values of the transfer learning models for the respective evaluation parameters using federated learning techniques. FTL is used to solve privacy and other issues, such as traffic flow prediction, prevention of data attacks, and the handling of vast amounts of data, whereas DTL only deals with large amounts of data. The parameters calculated are accuracy, loss, and RMSE for both the training and validation phases. During the training phase, Inception V3 obtained the highest accuracy at 94.56%, while MobileNet V2 showed the lowest loss and RMSE at 0.126 and 0.355, respectively. Likewise, during the validation phase, Inception V3 showed the highest accuracy, lowest loss, and lowest RMSE at 94.90%, 0.185, and 0.431, respectively.

Table 3. Evaluation of federated learning algorithms
Model | Training accuracy (%) | Training loss | Training RMSE | Validation accuracy (%) | Validation loss | Validation RMSE
VGG-16 | 78.65 | 0.127 | 0.356 | 76.71 | 0.213 | 0.461
MobileNet V2 | 79.23 | 0.126 | 0.355 | 79 | 0.194 | 0.44
ResNet-50 | 79.29 | 0.186 | 0.431 | 79.07 | 0.194 | 0.44
DenseNet-161 | 80.02 | 0.186 | 0.431 | 82.71 | 0.193 | 0.44
Inception V3 | 94.56 | 0.187 | 0.432 | 94.9 | 0.185 | 0.431
Values in the bold font are the best values obtained by the transfer learning models.

Parameters for deep learning: Table 4 shows that the VGG-16 model had the best precision rate, as well as the highest recall rate, defined as the ability to find all relevant cases. Thus, the F1 score of VGG-16 is also the highest. Hence, the classification report states that the VGG-16 model classifies chest infections precisely and efficiently, making it the best possible transfer learning model for this purpose.

Table 4. Performance of deep transfer learning algorithms
Model | Precision | Recall | F1 score
VGG-16 | 0.81 | 0.90 | 0.85
MobileNet V2 | 0.79 | 0.89 | 0.84
ResNet-50 | 0.76 | 0.73 | 0.76
DenseNet-161 | 0.80 | 0.78 | 0.84
Inception V3 | 0.78 | 0.88 | 0.82
VGG-19 | 0.78 | 0.88 | 0.83


Computational time: Table 5 presents the time taken by each model to process the image data when predicting chest diseases. It can be observed that DenseNet-161 took the least time, i.e., 24,645 seconds, while VGG-16 took the most at 37,831 seconds.

Table 5. Computation time of algorithms
Algorithm | Computation time (s)
VGG-16 | 37,831
VGG-19 | 36,910
MobileNet V2 | 27,958
ResNet-50 | 29,150
DenseNet-161 | 24,645
Inception V3 | 35,284


Disease prediction by transfer learning models: Fig. 5 shows various chest X-ray images for several chest diseases, while Table 6 depicts the predictions of each transfer learning model, based on deep and federated learning techniques, for these images. It can be inferred that each model classifies the disease accurately.

Fig. 5. Actual chest disease images: (a) atelectasis, (b) effusion, (c) mass, (d) pneumothorax, (e) infiltration, and (f) mass/infiltration.


Table 6. Disease prediction using transfer learning algorithms
Model | Image | Deep learning prediction | Federated learning prediction
VGG-16 | Fig. 5(a) | Atelectasis | Atelectasis
VGG-19 | Fig. 5(b) | Effusion | Effusion
MobileNet V2 | Fig. 5(c) | Mass | Mass
ResNet-50 | Fig. 5(d) | Pneumothorax | Pneumothorax
DenseNet-161 | Fig. 5(e) | Infiltration | Infiltration
Inception V3 | Fig. 5(f) | Mass, Infiltration | Mass, Infiltration
Comparison with previous studies: Table 7 presents a comparison of the proposed model with previous techniques used by researchers to detect various chest diseases on the same dataset, i.e., the NIH dataset. It shows that the proposed technique’s accuracy rate of 97.7% is higher than that of the other techniques [7, 35, 53].

Table 7. Comparison with the state-of-the-art techniques using the NIH dataset
Technique | Accuracy (%)
Pre-trained models [35] | 89.86
Deep learning-based decision tree classifier [53] | 89
CNN [7] | 91.24
Proposed model (VGG-19) | 97.7


Table 8 shows the classification accuracy of deep learning techniques used by researchers to detect various chest diseases [54–56]. Various datasets were used, such as the chest X-ray, ImageNet, and Guangzhou Women and Children’s Medical Center datasets, while the proposed VGG-19 obtained an accuracy rate of 97.7% on the NIH dataset.

Table 8. Comparison with state-of-the-art techniques
Technique | Dataset | Accuracy (%)
CNN [54] | Chest X-ray | 98.9
ResNet [55] | ImageNet | 96.1
VGG-16 [56] | Chest X-ray | 87
Proposed model (VGG-19) | NIH dataset | 97.7


Conclusion

Detecting chest disorders via chest X-rays is a complex and essential undertaking in the field of human health. Although much work has already been done on this subject, employing deep and federated learning approaches combined with transfer learning models to identify chest ailments is a novel approach to this task. Using pre-trained models makes detection easier and enhances the system’s accuracy and efficiency. As a result, this work has effectively detected chest ailments by using an integrated transfer learning strategy combined with deep and federated learning approaches. The proposed model used the NIH chest X-ray dataset, which includes 112,120 chest images and disease labels for 30,805 persons, covering 14 different types of chest illness. The system first performed pre-processing and exploratory data analysis on the input images to extract useful information, which was then utilized to extract features rapidly and accurately. The data were then separated for training and testing, and data augmentation was performed. Finally, the data were subjected to various transfer learning models, including VGG-16, MobileNet V2, ResNet-50, DenseNet-161, Inception V3, and VGG-19, the results of which were assessed to determine which model is most suited to identifying chest disorders. MobileNet V2 (88.09%) was shown to outperform the other deep learning models in terms of validation accuracy, whereas the VGG-19 model outperformed the other transfer learning models when used with federated learning, with a validation accuracy of 97.71%. Overall, the VGG-16 model proved to have the best precision and recall rates, and hence the highest F1 score. The classification report thus reveals that the VGG-16 model detects chest infections accurately and effectively, making it the best transfer learning model for this purpose. With these findings, our technique can be integrated with any system to help pulmonologists and radiologists correctly diagnose chest diseases from various medical imaging sources in less time and with better flexibility. However, this study has a drawback in that the proposed approach was only able to categorize the fourteen illnesses included in the dataset.
Moreover, to achieve the best outcomes, the models should also integrate optimization strategies, and the execution time should be factored in. In practice, doctors should aim to incorporate such technologies into their routine diagnosis of patients’ diseases. Furthermore, by recognizing chest problems early on and enabling proper treatment, the proposed model could save countless lives. Further classes of chest problems could be included in the system to produce better results in detecting any type of chest ailment.


Acknowledgements

Jana Shafi would like to thank the Deanship of Scientific Research, Prince Sattam Bin Abdul Aziz University, for supporting this work.


Author’s Contributions

Conceptualization, BK. Funding acquisition, YK, YS, JS. Investigation and methodology, BK, PJ, YK, HP. Project administration, YK, YS, JS. Resources, PJ, YK, YS, JS. Supervision, PJ, YK, YS. Writing of the original draft, BK, PJ, YK, HP. Writing of the review and editing, HP, YS, JS. Software, BK, PJ. Validation, YK, HP, YS, JS. Formal analysis, BK, PJ, YK. Visualization, HP, YS, JS.


Funding

This work was supported by the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (MSIT) of the Korean government (Grant No. 2020R1C1C1003425 and 2020R1A4A3079710).


Competing Interests

The authors declare that they have no competing interests.


Author Biography

Author
Name : Barkha Kakkar
Affiliation : Galgotias University, India
Biography : Barkha Kakkar is pursuing a Ph.D. in Computer Application at Galgotias University, Greater Noida, India. She is also working as an Assistant Professor at the Institute of Technology & Science, Mohan Nagar, Ghaziabad. She completed her M.C.A. at AKGEC, affiliated to Uttar Pradesh Technical University, Lucknow. Her research area is blockchain in healthcare.

Author
Name : Prashant Johri
Affiliation : Galgotias University, India
Biography : Prashant Johri is currently a Professor in the School of Computing Science & Engineering, Galgotias University, Greater Noida, India. He received his B.Sc. (H) and M.C.A. from A.M.U., and his Ph.D. in Computer Science from Jiwaji University, Gwalior, India, in 2011. He has also worked as a Professor and Director (M.C.A.) at G.I.M.T. and N.I.E.T., Greater Noida. He has supervised two Ph.D. students as well as M.Tech. students for their theses, and has published 150 scientific articles and several edited books. His research interests include artificial intelligence, machine learning, data science, cloud computing, blockchain, healthcare, agriculture, image processing, and software reliability.

Author
Name : Yogesh Kumar
Affiliation : Indus University, India
Biography : Yogesh Kumar is working as an Associate Professor at the Indus Institute of Technology & Engineering, Indus University, Rancharda, Ahmedabad. He received his Ph.D. in CSE from Punjabi University, Patiala, where he also completed his M.Tech in CSE. He has a total of 14 years of teaching and research experience, with more than 57 publications in various reputed journals. His research areas include artificial intelligence, deep learning, and computer vision.

Author
Name : Hyunwoo Park
Affiliation : Dongguk University-Seoul, Korea
Biography : Hyunwoo Park was born in Cheongju, Republic of Korea, in 1996. He received the B.S. degree in Industrial and Systems Engineering from Dongguk University-Seoul, Seoul, Republic of Korea, in 2020. He is currently pursuing the M.S. degree in Industrial and Systems Engineering at Dongguk University-Seoul. His research interests include machine learning and data analytics, and their applications to industrial processes.

Author
Name : Youngdoo Son
Affiliation : Dongguk University-Seoul, Korea
Biography : Youngdoo Son received the M.S. degree in industrial and management engineering from Pohang University of Science and Technology, Pohang, South Korea, in 2012, and the Ph.D. degree in industrial engineering from Seoul National University, Seoul, South Korea, in 2015. He is currently an Assistant Professor with the Department of Industrial and Systems Engineering, Dongguk University at Seoul, Seoul. His research interests include machine learning, neural networks, Bayesian methods, and their industrial and business applications.

Author
Name : Jana Shafi
Affiliation : Prince Sattam bin Abdul Aziz University, KSA
Biography : Jana Shafi is a Lecturer at Prince Sattam bin Abdul Aziz University, KSA. She has published research papers in various international conferences and journals. Her research interests include online social networks with machine learning and deep learning technologies. She is a member of the Elsevier Advisory Panel and a Mendeley Advisor.


References

[1] M. A. Z. Chudhery, J. Xie, M. C. Chou, J. Sim, S. Safdar, and Z. Liu, The National University Hospital: Overcrowding in the Emergency Department. London, Canada: Ivey Publishing, 2018.
[2] S. Bharati, P. Podder, and M. R. H. Mondal, “Hybrid deep learning for detecting lung diseases from X-ray images,” Informatics in Medicine Unlocked, vol. 20, article no. 100391, 2020. https://doi.org/10.1016/j.imu.2020.100391
[3] A. Abuhamdah, G. M. Jaradat, and M. Alsmadi, “Deep learning for COVID-19 cases-based XCR and chest CT images,” in Advances on Smart and Soft Computing. Singapore: Springer, 2022, pp. 285-299.
[4] D. C. Nguyen, M. Ding, P. N. Pathirana, and A. Seneviratne, “Blockchain and AI-based solutions to combat coronavirus (COVID-19)-like epidemics: a survey,” IEEE Access, vol. 9, pp. 95730-95753, 2021.
[5] A. K. Saha and M. Rahman, “An efficient deep learning approach for detecting pneumonia using the convolutional neural network,” in Sentimental Analysis and Deep Learning. Singapore: Springer, 2022, pp. 59-68.
[6] O. Akbarzadeh, M. Baradaran, and M. R. Khosravi, “IoT solutions for smart management of hospital buildings: a general review towards COVID-19, future pandemics, and infectious diseases,” Current Signal Transduction Therapy, vol. 16, no. 3, pp. 240-246, 2021.
[7] L. O. Hall, R. Paul, D. B. Goldgof, and G. M. Goldgof, “Finding COVID-19 from chest X-rays using deep learning on a small dataset,” 2020 [Online]. Available: https://arxiv.org/abs/2004.02060.
[8] B. Alsinglawi, O. Mubin, F. Alnajjar, K. Kheirallah, M. Elkhodr, M. Al Zobbi, et al., “A simulated measurement for COVID-19 pandemic using the effective reproductive number on an empirical portion of population: epidemiological models,” Neural Computing and Applications, 2021. https://doi.org/10.1007/s00521-021-06579-2
[9] A. Muniasamy, R. Bhatnagar, and G. Karunakaran, “Development of disease diagnosis model for CXR images and reports: a deep learning approach,” in Medical Informatics and Bioimaging Using Artificial Intelligence. Cham, Switzerland: Springer, 2022, pp. 153-171.
[10] M. Hon and N. M. Khan, “Towards Alzheimer's disease classification through transfer learning,” in Proceedings of 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Kansas City, MO, 2017, pp. 1166-1169.
[11] M. A. Z. Chudhery, S. Safdar, J. Huo, H. U. Rehman, and R. Rafique, “Proposing and empirically investigating a mobile-based outpatient healthcare service delivery framework using stimulus–organism–response theory,” IEEE Transactions on Engineering Management, 2021. https://doi.org/10.1109/TEM.2021.3081571
[12] O. Akbarzadeh, M. Baradaran, and M. R. Khosravi, “IoT-based smart management of healthcare services in hospital buildings during COVID-19 and future pandemics,” Wireless Communications and Mobile Computing, vol. 2021, article no. 5533161, 2021. https://doi.org/10.1155/2021/5533161
[13] J. Feng and J. Jiang, “Deep learning-based chest CT image features in diagnosis of lung cancer,” Computational and Mathematical Methods in Medicine, vol. 2022, article no. 4153211, 2022. https://doi.org/10.1155/2022/4153211
[14] J. Peng, S. Kang, Z. Ning, H. Deng, J. Shen, Y. Xu, et al., “Residual convolutional neural network for predicting response of transarterial chemoembolization in hepatocellular carcinoma from CT imaging,” European Radiology, vol. 30, pp. 413-424, 2020.
[15] F. Ali, S. El-Sappagh, S. R. Islam, D. Kwak, A. Ali, M. Imran, and K. S. Kwak, “A smart healthcare monitoring system for heart disease prediction based on ensemble deep learning and feature fusion,” Information Fusion, vol. 63, pp. 208-222, 2020.
[16] National Institutes of Health, “NIH Clinical Center provides one of the largest publicly available chest x-ray datasets to scientific community,” 2017 [Online]. Available: https://www.nih.gov/news-events/news-releases/nih-clinical-center-provides-one-largest-publicly-available-chest-x-ray-datasets-scientific-community.
[17] S. Chae, S. Kwon, and D. Lee, “Predicting infectious disease using deep learning and big data,” International Journal of Environmental Research and Public Health, vol. 15, no. 8, article no. 1596, 2018. https://doi.org/10.3390/ijerph15081596
[18] A. Kishor and C. Chakraborty, “Artificial intelligence and internet of things based healthcare 4.0 monitoring system,” Wireless Personal Communications, 2021. https://doi.org/10.1007/s11277-021-08708-5
[19] A. Bhattacharyya, D. Bhaik, S. Kumar, P. Thakur, R. Sharma, and R. B. Pachori, “A deep learning based approach for automatic detection of COVID-19 cases using chest X-ray images,” Biomedical Signal Processing and Control, vol. 71, article no. 103182, 2022. https://doi.org/10.1016/j.bspc.2021.103182
[20] P. N. Srinivasu, J. G. SivaSai, M. F. Ijaz, A. K. Bhoi, W. Kim, and J. J. Kang, “Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM,” Sensors, vol. 21, no. 8, article no. 2852, 2021. https://doi.org/10.3390/s21082852
[21] M. F. Ijaz, M. Attique, and Y. Son, “Data-driven cervical cancer prediction model with outlier detection and over-sampling methods,” Sensors, vol. 20, no. 10, article no. 2809, 2020. https://doi.org/10.3390/s20102809
[22] A. Kishor, C. Chakraborty, and W. Jeberson, “Reinforcement learning for medical information processing over heterogeneous networks,” Multimedia Tools and Applications, vol. 80, no. 16, pp. 23983-24004, 2021.
[23] L. H. Vogado, R. M. Veras, F. H. Araujo, R. R. Silva, and K. R. Aires, “Leukemia diagnosis in blood slides using transfer learning in CNNs and SVM for classification,” Engineering Applications of Artificial Intelligence, vol. 72, pp. 415-422, 2018.
[24] J. Liu, S. W. Chen, F. Liu, Q. P. Li, X. Y. Kong, and Z. C. Feng, “The diagnosis of neonatal pulmonary atelectasis using lung ultrasonography,” Chest, vol. 147, no. 4, pp. 1013-1019, 2015.
[25] N. Ullmann, M. L. D'Andrea, A. Gioachin, B. Papia, M. B. C. Testa, C. Cherchi, C. Bock, P. Toma, and R. Cutrera, “Lung ultrasound: a useful additional tool in clinician's hands to identify pulmonary atelectasis in children with neuromuscular disease,” Pediatric Pulmonology, vol. 55, no. 6, pp. 1490-1494, 2020.
[26] H. Behzadi-Khormouji, H. Rostami, S. Salehi, T. Derakhshande-Rishehri, M. Masoumi, S. Salemi, et al., “Deep learning, reusable and problem-based architectures for detection of consolidation on chest X-ray images,” Computer Methods and Programs in Biomedicine, vol. 185, article no. 105162, 2020. https://doi.org/10.1016/j.cmpb.2019.105162
[27] J. Na'am, J. Harlan, G. W. Nercahyo, S. Arlis, and L. N. Rani, “Detection of infiltrate on infant chest X-ray,” Telkomnika, vol. 15, no. 4, pp. 1943-1951, 2017.
[28] A. Gooßen, H. Deshpande, T. Harder, E. Schwab, I. Baltruschat, T. Mabotuwana, N. Cross, and A. Saalbach, “Deep learning for pneumothorax detection and localization in chest radiographs,” 2019 [Online]. Available: https://arxiv.org/abs/1907.07324.
[29] Y. H. Chan, Y. Z. Zeng, H. C. Wu, M. C. Wu, and H. M. Sun, “Effective pneumothorax detection for chest X-ray images using local binary pattern and support vector machine,” Journal of Healthcare Engineering, vol. 2018, article no. 2908517, 2018. https://doi.org/10.1155/2018/2908517
[30] M. I. Campo, J. Pascau, and R. S. J. Estepar, “Emphysema quantification on simulated X-rays through deep learning techniques,” in Proceedings of 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI), Washington, DC, 2018, pp. 273-276.
[31] R. Jain, P. Nagrath, G. Kataria, V. S. Kaushik, and D. J. Hemanth, “Pneumonia detection in chest X-ray images using convolutional neural networks and transfer learning,” Measurement, vol. 165, article no. 108046, 2020. https://doi.org/10.1016/j.measurement.2020.108046
[32] M. F. Hashmi, S. Katiyar, A. G. Keskar, N. D. Bokde, and Z. W. Geem, “Efficient pneumonia detection in chest X-ray images using deep transfer learning,” Diagnostics, vol. 10, no. 6, article no. 417, 2020. https://doi.org/10.3390/diagnostics10060417
[33] A. Saito, Y. Hakamata, Y. Yamada, M. Sunohara, M. Tarui, Y. Murano, et al., “Pleural thickening on screening chest X-rays: a single institutional study,” Respiratory Research, vol. 20, article no. 138, 2019. https://doi.org/10.1186/s12931-019-1116-9
[34] S. S. Alghamdi, I. Abdelaziz, M. Albadri, S. Alyanbaawi, R. Aljondi, and A. Tajaldeen, “Study of cardiomegaly using chest X-ray,” Journal of Radiation Research and Applied Sciences, vol. 13, no. 1, pp. 460-467, 2020.
[35] S. Candemir, S. Rajaraman, G. Thoma, and S. Antani, “Deep learning for grading cardiomegaly severity in chest X-rays: an investigation,” in Proceedings of 2018 IEEE Life Sciences Conference (LSC), Montreal, Canada, 2018, pp. 109-113.
[36] C. H. Liang, Y. C. Liu, M. T. Wu, F. Garcia-Castro, A. Alberich-Bayarri, and F. Z. Wu, “Identifying pulmonary nodules or masses on chest radiography using deep learning: external validation and strategies to improve clinical practice,” Clinical Radiology, vol. 75, no. 1, pp. 38-45, 2020.
[37] Y. Pathak, P. K. Shukla, A. Tiwari, S. Stalin, and S. Singh, “Deep transfer learning based classification model for COVID-19 disease,” IRBM, vol. 43, no. 2, pp. 87-92, 2022.
[38] S. Minaee, R. Kafieh, M. Sonka, S. Yazdani, and G. J. Soufi, “Deep-COVID: predicting COVID-19 from chest X-ray images using deep transfer learning,” Medical Image Analysis, vol. 65, article no. 101794, 2020. https://doi.org/10.1016/j.media.2020.101794
[39] T. Rahman, M. E. Chowdhury, A. Khandakar, K. R. Islam, K. F. Islam, Z. B. Mahbub, M. A. Kadir, and S. Kashem, “Transfer learning with deep convolutional neural network (CNN) for pneumonia detection using chest X-ray,” Applied Sciences, vol. 10, no. 9, article no. 3233, 2020. https://doi.org/10.3390/app10093233
[40] T. N. Poly, M. M. Islam, Y. C. J. Li, B. Alsinglawi, M. H. Hsu, W. S. Jian, and H. C. Yang, “Application of artificial intelligence for screening COVID-19 patients using digital images: meta-analysis,” JMIR Medical Informatics, vol. 9, no. 4, article no. e21394, 2021. https://doi.org/10.2196/21394
[41] F. Ali, A. Ali, M. Imran, R. A. Naqvi, M. H. Siddiqi, and K. S. Kwak, “Traffic accident detection and condition analysis based on social networking data,” Accident Analysis & Prevention, vol. 151, article no. 105973, 2021. https://doi.org/10.1016/j.aap.2021.105973
[42] Y. Kumar and M. Mahajan, “Recent advancement of machine learning and deep learning in the field of healthcare system,” Computational Intelligence for Machine Learning and Healthcare Informatics, 2020. https://doi.org/10.1515/9783110648195
[43] J. Choe, H. J. Hwang, J. B. Seo, S. M. Lee, J. Yun, M. J. Kim, et al., “Content-based image retrieval by using deep learning for interstitial lung disease diagnosis with chest CT,” Radiology, vol. 302, no. 1, pp. 187-197, 2022.
[44] H. K. Bhuyan, C. Chakraborty, Y. Shelke, and S. K. Pani, “COVID‐19 diagnosis system by deep learning approaches,” Expert Systems, vol. 39, no. 3, article no. e12776, 2022. https://doi.org/10.1111/exsy.12776
[45] N. Jain, V. Gupta, S. Shubham, A. Madan, A. Chaudhary, and K. C. Santosh, “Understanding cartoon emotion using integrated deep neural network on large dataset,” Neural Computing and Applications, 2021. https://doi.org/10.1007/s00521-021-06003-9
[46] M. R. Khosravi, S. Samadi, and R. Mohseni, “Spatial interpolators for intra-frame resampling of SAR videos: a comparative study using real-time HD, medical and radar data,” Current Signal Transduction Therapy, vol. 15, no. 2, pp. 144-196, 2020.
[47] A. Paul, A. Basu, M. Mahmud, M. S. Kaiser, and R. Sarkar, “Inverted bell-curve-based ensemble of deep learning models for detection of COVID-19 from chest X-rays,” Neural Computing and Applications, 2022. https://doi.org/10.1007/s00521-021-06737-6
[48] N. Shahparian, M. Yazdi, and M. R. Khosravi, “Alzheimer disease diagnosis from fMRI images based on latent low rank features and support vector machine (SVM),” Current Signal Transduction Therapy, vol. 16, no. 2, pp. 171-177, 2021.
[49] H. Malik and T. Anees, “BDCNet: multi-classification convolutional neural network model for classification of COVID-19, pneumonia, and lung cancer from chest radiographs,” Multimedia Systems, vol. 28, pp. 815-829, 2022.
[50] L. J. Muhammad, E. A. Algehyne, S. S. Usman, A. Ahmad, C. Chakraborty, and I. A. Mohammed, “Supervised machine learning models for prediction of COVID-19 infection using epidemiology dataset,” SN Computer Science, vol. 2, article no. 11, 2021. https://doi.org/10.1007/s42979-020-00394-7
[51] P. Dwivedi, “Understanding and coding a ResNet in Keras,” 2019 [Online]. Available: https://towardsdatascience.com/understanding-and-coding-a-resnet-in-keras-446d7ff84d33.
[52] Y. Kumar and R. Singla, “Federated learning systems for healthcare: perspective and recent progress,” in Federated Learning Systems. Cham, Switzerland: Springer, 2021, pp. 141-156.
[53] S. H. Yoo, H. Geng, T. L. Chiu, S. K. Yu, D. C. Cho, J. Heo, et al., “Deep learning-based decision-tree classifier for COVID-19 diagnosis from chest X-ray imaging,” Frontiers in Medicine, vol. 7, article no. 427, 2020. https://doi.org/10.3389/fmed.2020.00427
[54] T. Torabipour, Y. Jahangirigolshavari, and S. Siadat, “A deep learning approach for diagnosis chest diseases,” International Journal of Web Research, vol. 4, no. 1, pp. 10-17, 2021.
[55] N. Wang, H. Liu, and C. Xu, “Deep learning for the detection of COVID-19 using transfer learning and model integration,” in Proceedings of 2020 IEEE 10th International Conference on Electronics Information and Emergency Communication (ICEIEC), Beijing, China, 2020, pp. 281-284.
[56] E. Ayan and H. M. Unver, “Diagnosis of pneumonia from chest X-ray images using deep learning,” in Proceedings of 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT), Istanbul, Turkey, 2019, pp. 1-5.

About this article
  • Received: 26 November 2021
  • Accepted: 21 January 2022
  • Published: 30 May 2022