Biologically Inspired CNN Network for Brain Tumor Abnormalities Detection and Features Extraction from MRI Images
  • Chetan Swarup1, Ankit Kumar2, Kamred Udham Singh3,*, Teekam Singh4, Linesh Raja5, Abhishek Kumar6, and Ramu Dubey7

Human-centric Computing and Information Sciences volume 12, Article number: 22 (2022)
https://doi.org/10.22967/HCIS.2022.12.022

Abstract

Image segmentation has become increasingly important in medical image analysis, yet it remains a difficult problem. Medical imaging is growing in relevance as demand rises for automated, dependable, fast, and efficient diagnostic methods that deliver better outcomes. With its billions of cells, the brain is one of the most complex organs in the human body. Brain tumors are the second leading cause of cancer-related mortality in men, and males aged 20 to 39 are more likely than females of the same age group to die from one. Brain tumors can be painful and cause various illnesses, so early detection with a trustworthy approach is critical. Determining whether a tumor is benign or malignant is essential to its diagnosis, and diagnosis in turn is an essential step in treatment. Magnetic resonance imaging (MRI) is extremely useful for detecting brain tumors. This paper discusses a method that uses fundamental image processing techniques to provide tumor-specific information: noise reduction, image sharpening, and morphological operations such as erosion and dilation to obtain the background. Tumor images are created by subtracting the background and its negative from the original images. Outlining the tumor and labeling it with a c-label provides information about the tumors that helps clinicians better visualize cases during diagnosis. The proposed method determines tumor size, shape, and location with an accuracy rate of 97.5%, helping the medical team and the patient understand the seriousness of the tumor's condition.


Keywords

Brain Tumor, Segmentation, Classification, Accuracy, Machine learning, Deep Learning, CNN, MRI Image


Introduction

The driving force behind this initiative is creating a better system in which medical professionals and patients can interact effectively to achieve better results. This improved technique, which employs cutting-edge technology, will simplify data interpretation for medical experts, giving them more time to think and work. Brain or central nervous system tumors are defined as abnormal cell proliferation. Malignant tumors should be diagnosed and treated as soon as possible [1]. Because the aetiology and specific symptoms of brain tumors are unknown, people may suffer from one without realizing they are in danger. Primary brain tumors are classified as malignant (containing cancer cells) or benign (not containing cancer cells). The symptoms of a brain tumor [2] are determined by the tumor's location, size, and type. Symptoms arise when a tumor puts pressure on nearby cells, compressing them, or when it obstructs the passage of fluid through the brain. Common symptoms include headaches, nausea, vomiting, and vertigo (trouble balancing and walking). A brain tumor can be detected using imaging techniques such as a computed tomography (CT) scan [3], magnetic resonance imaging (MRI), ultrasound, single photon emission computed tomography (SPECT), positron emission tomography (PET), and X-ray [4]. These approaches vary depending on the areas in which they specialize and the reason for the evaluation. MRI is the most widely used medical imaging technique, providing better contrast images of brain and cancer cells than other imaging modalities. As a result, MRI scans may now be used to detect brain tumors [5]. The MRI procedure does not involve any implantation or medication administration into the patient's body, and it does not expose the patient to ionizing radiation, making it an essentially risk-free procedure.
Furthermore, MRI is particularly useful for diagnosing brain diseases because its high resolution allows it to precisely locate soft tissues while detecting disease characteristics. The primary goal of this study is to extract malignancies from MRI brain images so that they are clearly visible. Thanks to its improved contrast discrimination and ability to acquire images from various angles, MRI can provide a more precise image of the specific lesion site.

Motivation
The goal of this work is to identify malignancies through brain MRI imaging. The primary purpose of looking for brain tumors is to support more rapid and accurate clinical diagnosis. The objective is to develop a reliable algorithm for identifying cancers in MRI brain images by integrating several approaches, including filtering, image subtraction, erosion, dilation, thresholding, and tumor outlining. The essential purpose of brain tumor medical imaging is to extract as much accurate and valuable information as possible from these images while making as few mistakes as possible, and then to assess whether or not the image contains a tumor. This work suggests an automated classification of brain MRI images based on pixel intensity and anatomical features. Because there are presently no widely acknowledged methods for tumor detection, there is a significant need for and interest in developing automated and reliable systems. While the convolutional neural network (CNN) has been used to categorize vast volumes of data for MR image problems, it has yet to reach its full potential. Prior work has applied clustering and classification algorithms to MRI problems involving vast volumes of data that would take considerable time and effort to solve manually. To construct neural network systems for medical problems, it is essential to grasp identification, classification, and clustering techniques.
The major contribution of this work is a model for brain tumor abnormalities detection and features extraction using CNN and computer vision techniques, classifying persons who may have brain tumors into one of three categories: astrocytoma (AST), oligodendroglioma (OLI), and glioblastoma multiforme (GBM). We have automated the process of brain cropping from MRI scans and classified brain tumor stages and types. We used the discrete wavelet transform (DWT) method with CNN and deep neural networks (DNNs) to provide more accurate results in less time and with less power consumption.


Related Work

This section describes the review method used for this research and briefly summarizes the strengths and weaknesses of each study; a comparison table places the approaches side by side.

Categorical Review
This study [6] used machine learning techniques to detect brain tumors automatically. During feature extraction, the gray level co-occurrence matrix (GLCM) is used to extract texture-based properties. Multi-layer perceptron and naive Bayes classifiers are applied to 212 brain MR image samples, with accuracy rates of 98.6% and 91.6%, respectively.
This paper [7] detailed a successful classification technique for segmenting brain MRI images. Image band quality analysis is used to estimate signal intensity during MRI scans. The morphological examination of the classified image produces a more accurate and reliable result than the categorization method alone. Finally, the malignant tumor was located and removed.
This article prepares images using the median filtering and skull stripping methods proposed in previous work [8], which improves performance. The features were extracted using GLCM methods, and the classifier was built using a support vector machine (SVM). The SVM technique yielded favorable results for the classification task at hand, with a sensitivity of 91.52%, a specificity of 67.74%, and an accuracy of 83.33%.
According to the results, the technique suggested in this study [9] has a recognition rate of 94.28% on images with tumors and 100% on images without tumors. The system has an overall success rate of 96%, outperforming its competitors.
According to this study [10], rapid image segmentation outperforms traditional methods in terms of edge detail, maintains clustering optimization performance while lowering operational costs, and significantly increases segmentation efficiency. This study [11] combines DWT and DNN to categorize brain MRIs into three classes of benign and malignant brain tumors.

Comparative Studies of Existing Approaches
The literature review identified that research on brain abnormality detection depends strongly on tumor position, shape, and size [12-17] (Table 1). A model's accuracy depends on the feature matrix used in training the model, which requires high computational power and time.

Review Process Adopted
In this section, we review the different classification issues that necessitate various performance measures, such as the area under the curve (AUC).

Confusion matrix
The confusion matrix is an essential tool for assessing the correctness and accuracy of a model. It is used for classification problems whose output can fall into two or more classes. Suppose we are solving a classification problem such as determining whether a person has cancer. We label the target variable so that a person with cancer is a "1" and a person without cancer is a "0." The confusion matrix has two dimensions, "Actual" and "Predicted," each with its own set of classes. Columns reflect the actual classes, whereas rows represent the predicted classes (Fig. 1).
Although the confusion matrix is not a performance metric in and of itself, almost all performance metrics are derived from it.

Table 1. Comparative studies of existing approaches
Method | Approach | Limitations | Solution approach
Selecting morphological operators as a feature [12] | Morphological operators as features | Calculating the underlying variable's probability density function takes longer. | Using OLS backward elimination, the number of features may be decreased while still solving the time issue.
Classification of brain cancers using deep neural networks [13] | Neural network using deep learning for classifying | Very difficult to develop a generic technique that works with brain MR images from many institutions and scanners. | Reduce processing power by combining deep neural networks with SVM for improved efficiency and accuracy.
K-means clustering for tumor segmentation in the brain [14] | Clustering with K-means | It is quite tough to establish the thresholding factor. | Utilize fuzzy C-means clustering rather than K-means clustering.
SVM-based tumor detection and classification in the brain [15] | SVM | - | Better results can be achieved by combining SVM with CNN.
Machine learning algorithms for brain tumor detection [16] | Model learning paradigms and naive Bayes | This algorithm's model-building time is longer than that of existing methods. | If utilized on a limited basis, this technique will be very successful.
Segmentation of brain tumors using a novel threshold method [17] | A new threshold approach | Classification errors occur on occasion. | Other categorization techniques, such as neuro-fuzzy and support vector machine, could be explored.


Fig. 1. Confusion matrix.


Terms associated with confusion matrix
- True positive (TP): the actual class of a data point is a "1" (True), and its predicted class is a "1" (True) as well.
- True negative (TN): the actual class of a data point is a "0" (False), and its predicted class is also a "0" (False).
- False positive (FP): the actual class is a "0" (False), but the predicted class is a "1" (True). It is "false" because the model predicted incorrectly and "positive" because the predicted class is positive.
- False negative (FN): the actual class is a "1" (True), but the predicted class is a "0" (False). It is "false" because the model predicted incorrectly and "negative" because the predicted class is negative. Ideally, a model would produce no FPs or FNs in any given situation; however, no model can ever be completely accurate.
- Accuracy: classification accuracy measures how many correct predictions the model makes out of all predictions (Fig. 2).
The numerator contains the correct predictions (TPs and TNs); the denominator contains all of the algorithm's predictions, right and wrong. Accuracy is a good measure when the target classes in the data are approximately balanced in size; when one class dominates the data, accuracy should not be employed as a metric.

Fig. 2. Confusion matrix accuracy.


- Precision: precision tells us how many of the patients we diagnosed as having cancer actually had the disease. The denominator TP + FP counts all predicted positives (those the model labeled malignant), and the numerator TP counts the actual cancer patients among them.
- Recall or Sensitivity: recall tells us how many of the actual cancer patients the algorithm correctly identified. The denominator TP + FN counts all actual cancer patients, and the numerator TP counts those the model diagnosed. FN is included for completeness because, contrary to what the model predicted, those individuals did have cancer (Fig. 3).

Fig. 3. Recall in confusion matrix.


- Specificity: specificity tells us how many of the patients who did not have cancer were also predicted as non-cancerous by the model. The denominator FP + TN counts all patients actually free of cancer, and the numerator TN counts those the model correctly labeled as cancer-free. FP is included for completeness because, although the model predicted cancer, those individuals were cancer-free. Specificity is the counterpart of recall for the negative class (Fig. 4; a code sketch combining these metrics follows Fig. 4).

Fig. 4. Specificity in the confusion matrix.
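
To make these definitions concrete, here is a minimal Python sketch that derives all four metrics from raw confusion-matrix counts; the counts in the example call are hypothetical, chosen only to illustrate the formulas.

# Minimal sketch: deriving the metrics above from raw confusion-matrix counts.
# The example counts are hypothetical, chosen only to illustrate the formulas.

def confusion_metrics(tp, fp, tn, fn):
    """Return accuracy, precision, recall (sensitivity), and specificity."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)      # of predicted positives, how many are real
    recall = tp / (tp + fn)         # of actual positives, how many were found
    specificity = tn / (tn + fp)    # of actual negatives, how many were found
    return accuracy, precision, recall, specificity

acc, prec, rec, spec = confusion_metrics(tp=90, fp=5, tn=100, fn=10)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} specificity={spec:.3f}")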


Proposed Work

Deep learning is a subfield of machine learning, which is itself a branch of artificial intelligence (AI). It teaches computers to do what comes naturally to humans: learn from examples. Deep learning algorithms [18] are inspired by the structure and functions of the human brain to analyze data and create patterns for decision-making. Deep learning techniques can train a computer model to solve classification problems, and because deep learning models are so accurate, they can outperform humans in some cases. Models are built by combining a large amount of labeled data with architectures of high complexity. Deep learning can learn from vast amounts of unstructured data in a fraction of the time it would take humans. It uses neural networks in much the way the human nervous system does; such a network is also called a deep neural network (DNN).

Deep Learning in Medical Image Analysis
The most significant advantage of deep learning is its accuracy. Deep learning is a common AI approach for analyzing vast amounts of data. It is a self-adaptive algorithm that improves analysis and patterns as it gains experience or introduces more data. Thanks to deep learning, recognition accuracy has never been higher than it is now. This benefits consumer electronics, and safety-critical applications such as autonomous vehicles rely heavily on it. Deep learning has recently advanced to the point where it now outperforms humans in specific tasks, such as object recognition in photographs [19].
There are two main reasons it has only recently become useful:
1) Deep learning demands a vast quantity of labeled data to be effective. For example, millions of photographs and numerous video hours are required to develop a self-driving automobile.
2) Deep learning requires a significant investment in computing resources. High-end GPUs, with their parallel architecture, are well suited to deep learning applications; when utilized in conjunction with clusters or cloud computing, they can cut deep learning network training from weeks to days or even hours, saving development teams time and money [20].
Recurrent neural networks (RNNs) are neural networks that feed their outputs back into themselves: the results of previous steps serve as inputs to future steps. In conventional neural networks, inputs and outputs are independent of one another; however, some tasks require remembering what came before, such as predicting the next word in a sentence. RNNs address this problem with a hidden layer whose hidden state acts as a memory, storing information about the sequence processed so far [15]. An RNN performs the same operation with the same parameters on every input or hidden state, so it has fewer parameters than other networks and is simpler to use.

Convolutional Neural Networks
Artificial neural systems dominate the world of machine learning. Artificial neural networks (ANNs) are a versatile tool for various classification-related tasks: CNNs (along with other types of neural networks) are used for image classification, whereas RNNs (particularly long short-term memory [LSTM] networks) are used for word prediction [16]. Before delving into the CNN, let us first review the fundamentals of neural networks. A typical neural network has three types of layers, as follows:
Input layer: all of the data is fed into the model here. This layer contains one neuron per feature in the data (one per pixel for an image) [17].
Hidden layer: hidden layers receive input from the input layer, process it, and pass the result forward. There may be many hidden layers, depending on the scale of the model and the amount of data, and each hidden layer can have a different number of neurons, generally more than the number of features. The output of each successive layer is computed by multiplying the previous layer's output by that layer's learned weights, adding its learned biases, and applying an activation function, which makes the network nonlinear [18].
Output layer: the output of the hidden layers is passed through a logistic function, such as sigmoid or softmax, to obtain a probability score for each class [19].
The feedforward phase is complete once the input data has been fed into the model and the output of each layer has been generated. The magnitude of the error is then determined using an error function, and the derivatives of the error with respect to the model parameters are computed by moving backward through the network. This technique, used to reduce the loss, is called backpropagation [20].

Convolution
The desired property is a neural network that shares parameters among nodes. Picture an image: it forms a cuboid with three channels (as images generally have red, green, and blue channels). Now consider training a small neural network on only a small patch of this image, and then sliding that same network across the entire image. The result is an output with its own width, height, and depth: the original R, G, and B channels are replaced by a deeper stack of learned channels. This operation is referred to as convolution. If the patch size of an ordinary neural network were the same as the image size, nothing would be gained; because the patch is small, we have far fewer weights [21].
The convolution process involves a small amount of mathematics:
1) Convolutional layers consist of learnable filters (the patch in the description above). Each filter has a small width and height, but its depth matches the depth of the input volume ("3" if the input layer is an image).
2) Consider performing convolution on a 32×32×3 image. A filter might have size a×a×3, which is small compared to the overall image size.
3) A forward pass slides each filter across the input in steps of a chosen stride (which may be 2, 3, or even 4 for high-dimensional images), computing a dot product between the filter's weights and the corresponding patch of the input at each position.
4) Each filter slide produces a 2D output; stacking these gives a 3D output volume with depth equal to the number of filters. The network learns all of the filters [22].
Layers used to build ConvNets
1) Each layer in a ConvNet transforms one volume into another through a differentiable function.
2) Take a 32×32×3 image as an example and run a ConvNet on it.
3) Input layer: this layer holds the raw 32×32 image data.
4) Convolution layer: this layer computes the output volume by taking the dot product between each filter and each image patch. If 12 filters are applied, the output volume of this layer is 32×32×12 (Fig. 5; a NumPy sketch follows the figure).

Fig. 5. Convolution layer in CNN.
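
To make the arithmetic concrete, the following NumPy sketch (illustrative, with random weights) slides 12 filters of size 3×3×3 over a 32×32×3 image with stride 1 and zero padding, producing the 32×32×12 output volume described above:

import numpy as np

# Illustrative forward pass: 12 filters of size 3x3x3, stride 1, zero padding.
rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))
filters = rng.random((12, 3, 3, 3))               # (num_filters, h, w, depth)

padded = np.pad(image, ((1, 1), (1, 1), (0, 0)))  # zero-pad height and width
out = np.zeros((32, 32, 12))
for k in range(12):                               # each filter yields one 2D slice
    for i in range(32):
        for j in range(32):
            patch = padded[i:i + 3, j:j + 3, :]
            out[i, j, k] = np.sum(patch * filters[k])  # dot product of patch and filter

print(out.shape)  # (32, 32, 12): depth equals the number of stacked filters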


Activation function layer: this layer applies an activation function to the output of the convolution layer one element at a time. ReLU, Max(0, x), is a popular activation function; others include Sigmoid, 1/(1+e^{-x}), Tanh, and Leaky ReLU [23]. Because the activation is applied element-wise, the output volume keeps the same dimensions.
Pool layer: inserted at regular intervals, this layer reduces the size of the volume, which speeds up computation, minimizes memory consumption, and helps prevent overfitting. Pooling layers come in various shapes and sizes, with max pooling the most common and average pooling the least common. Using a max pool with 2×2 filters and stride 2, the resulting volume is 16×16×12 [24] (Fig. 6; a pooling sketch follows the figure).

Fig. 6. Pooling layer in CNN.
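
A short NumPy sketch of the 2×2, stride-2 max pool described above, applied to a 32×32×12 volume:

import numpy as np

# 2x2 max pooling with stride 2 halves width and height: 32x32x12 -> 16x16x12.
volume = np.random.random((32, 32, 12))
pooled = volume.reshape(16, 2, 16, 2, 12).max(axis=(1, 3))
print(pooled.shape)  # (16, 16, 12)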


Fully-connected layer: this layer receives input from the previous layer and computes the class scores, producing a 1D array whose size equals the number of classes (Fig. 7).

Fig. 7. Fully-connected layer in CNN.


GLCM
The GLCM matrix, a matrix of co-occurrence counts, is built with the graycomatrix function [25]. The function counts how frequently a pixel with gray value i occurs in a given spatial relationship to a pixel with gray value j in the image. By default, graycomatrix considers each pixel and its immediately adjacent horizontal neighbor; in MATLAB, the user can also specify other spatial relationships between the pixels. Each entry [i, j] of the resulting matrix is the integer count of how often the pair of intensity values (i, j) occurred in the input, which makes the GLCM a compact representation of the textural characteristics of an image. For example, entry [1,1] of the resulting matrix is "1" if there is exactly one instance in the input matrix where two horizontally adjacent pixels both have the value 1. The next entry, [1,2], is "2" if the value pair (1,2) occurs horizontally twice in the input matrix. Similarly, entry [1,3] is 0 if there are no horizontal occurrences of the pair (1,3) in the input. The GLCM is complete once all of the pixels in the input matrix have been processed by this function [25, 26].
The process used to create the GLCM is shown in Fig. 8. A Python equivalent follows.
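
For readers working in Python rather than MATLAB, scikit-image offers an equivalent function (graycomatrix, spelled greycomatrix in older releases); this sketch reproduces the horizontal pair counts described above on a small example matrix:

import numpy as np
from skimage.feature import graycomatrix  # greycomatrix in older scikit-image

# Count horizontally adjacent pixel pairs (i, j) in a small example image.
img = np.array([[1, 1, 5, 6, 8],
                [2, 3, 5, 7, 1],
                [4, 5, 7, 1, 2],
                [8, 5, 1, 2, 5]], dtype=np.uint8)

# Distance 1, angle 0 (horizontal neighbor), gray levels 0..8.
glcm = graycomatrix(img, distances=[1], angles=[0], levels=9)
print(glcm[1, 1, 0, 0])  # 1: the pair (1, 1) occurs once horizontally
print(glcm[1, 2, 0, 0])  # 2: the pair (1, 2) occurs twice horizontally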

Classification using Deep Learning
Convolutional neural networks (ConvNets or CNNs) are a powerful technique for image identification and classification. They have been used to recognize humans, objects, diseases, and traffic signs, and to power robots and self-driving cars. In the canonical example, a CNN classifies images into four categories: dog, cat, boat, and bird (the original LeNet was used mainly for character recognition tasks). Given a boat photo as input, such a network reliably assigns the highest probability (0.94) to the boat category out of the four, and the probabilities in the output layer sum to one. A convolutional network comprises three kinds of layers: an input layer, an output layer, and one or more hidden layers.

Fig. 8. Process used to create GLCM matrix.


In contrast to a traditional neural network, the neurons in the layers of a convolutional network are arranged in three dimensions (width, height, and depth). A CNN transforms a three-dimensional input volume into an output volume layer by layer. The hidden layers include convolution, pooling, normalization, and fully connected layers. CNNs stack many convolution layers to build increasingly abstract representations of the data. Pooling layers, with their limited translation and rotation invariance, help the network recognize objects in unusual positions; pooling also minimizes the amount of memory required, allowing for more convolution layers. Normalization layers scale all inputs in a layer to zero mean and unit variance.
ReLU layer: the rectifier activation function f(x) = max(0, x) can be employed by neurons just like any other activation function; a node that uses the rectifier activation function is referred to as a ReLU node.
Pooling or sub-sampling: local or global pooling layers in convolutional networks aggregate the outputs of neuron clusters at one layer into a single neuron at the next layer. For example, max pooling takes the highest value from each cluster of neurons in the preceding layer.
Classification (fully-connected layer): finally, fully connected layers, which come after a series of convolutional and max-pooling layers, carry out the high-level reasoning of the neural network and bring the procedure to a close. As in an ordinary neural network, each neuron in a fully connected layer is linked to all activations in the previous layer. A minimal Keras sketch of this stack follows.
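
The following Keras sketch assembles this stack (convolution, ReLU, pooling, fully connected) for the binary tumor/no-tumor task; it assumes TensorFlow is installed, and the layer sizes are illustrative rather than the exact configuration used in this work:

# Illustrative CNN: conv -> ReLU -> max pool -> fully connected, assuming
# TensorFlow/Keras; layer sizes are examples, not this paper's exact model.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu",
                  input_shape=(128, 128, 1)),    # grayscale MRI slice
    layers.MaxPooling2D((2, 2)),                 # pooling / sub-sampling
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),         # fully-connected reasoning layer
    layers.Dense(1, activation="sigmoid"),       # tumor / no-tumor probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()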

Details of Inputs/Data Used
Tumor detection requires MRI images of the brain. The dataset used to address this problem was taken from www.kaggle.com and comprises 253 MRI images, divided into two folders labeled "Yes" and "No." Folder "Yes" contains 155 tumorous brain MRI images, and folder "No" contains 98 non-tumorous brain MRI images. Representative samples of both classes are shown in Fig. 9, and this data can be employed for the detection and analysis of brain tumors using computer vision. A hypothetical loading sketch follows.
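
A hypothetical loader for this folder layout, assuming OpenCV and local folders named dataset/yes and dataset/no (the paths and target size are illustrative):

import os
import cv2  # assuming OpenCV; any image loader would do

# Hypothetical loader: dataset/yes holds the 155 tumorous scans,
# dataset/no the 98 non-tumorous ones.
def load_dataset(root="dataset"):
    images, labels = [], []
    for label_name, label in (("yes", 1), ("no", 0)):
        folder = os.path.join(root, label_name)
        for fname in os.listdir(folder):
            img = cv2.imread(os.path.join(folder, fname), cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue  # skip unreadable files
            images.append(cv2.resize(img, (128, 128)))
            labels.append(label)
    return images, labels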

Fig. 9. Dataset with two different labels: (a) class-Yes and (b) class-No.


Experiment and Result

Our investigation is based on MRI scans, and we collected as many cases of tumors as we could find. To enlarge the dataset and improve performance, we applied several image adjustments, such as rotation, scaling, and mirroring; in addition, we used a filtering method to remove artefacts from the images [27, 28]. To compare the effectiveness of different approaches, we examined various factors, including accuracy, recall, precision, specificity, sensitivity, computational power, and complexity. A sketch of these augmentations follows.
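
The following OpenCV sketch illustrates the mirroring, rotation, scaling, and filtering steps mentioned above; the angle, scale factor, and kernel size are illustrative:

import cv2

# Illustrative augmentation and artefact-filtering steps for a grayscale scan.
def augment(img):
    h, w = img.shape[:2]
    variants = [img, cv2.flip(img, 1)]                    # original + mirror
    m = cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0)  # rotate by 15 degrees
    variants.append(cv2.warpAffine(img, m, (w, h)))
    scaled = cv2.resize(img, None, fx=1.2, fy=1.2)        # scale up by 20%
    variants.append(cv2.resize(scaled, (w, h)))           # resize back to shape
    variants.append(cv2.medianBlur(img, 3))               # median filter removes artefacts
    return variants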

Performance Measures
Researchers employ several performance criteria to evaluate the effectiveness of machine learning technologies. Here, precision, recall, F-measure, and accuracy, among other metrics [29, 30], are used to analyze and compare the performance of the proposed prediction models. Accuracy is obtained by dividing the number of test records classified correctly by the total number of test records. Precision is defined as the ratio of the number of records correctly classified as positive (TP) to the total number of records predicted as belonging to the class (TP + FP). Recall is defined as the ratio of the number of correctly classified records to the total number of records actually in the class, that is, the sum of true positives and false negatives (TP + FN) [31]. The formulas for these measures are as follows:

$Precision=\frac{TP_i}{TP_i+FP_i}$(1)

$Recall=\frac{TP_i}{TP_i+FN_i}$(2)

$Recall=\frac{\text{True positive}}{\text{True positive}+\text{False negative}}=\frac{\text{True positive}}{\text{Total actual positive}}$(3)

$Accuracy=\frac{TP_i+TN_i}{TP_i+TN_i+FN_i+FP_i}$(4)

where $TP_i$ is the number of records correctly classified to the kidney disease class; $FP_i$ is the number of records wrongly classified to the kidney disease class; $FN_i$ is the number of records belonging to the class but not classified to it; and $TN_i$ is the number of records correctly classified as not belonging to the class.

Contrast: contrast measures the intensity difference between a pixel and its neighbor, accumulated over the whole image [27].

$Contrast = \displaystyle\sum_{n=1}^G n^2 \displaystyle\sum_{i=1}^G \displaystyle\sum_{j=1}^G P(x_i,y_j), \quad |i-j|=n$(5)

Homogeneity: Similarity in a picture is measured using a concept known as homogeneity (HOM). This is also known as the inverse difference moment (IDM) and is defined as follows:

$Homogeneity= \displaystyle\sum_{i=1}^G \displaystyle\sum_{j=1}^G \frac{P(x_i,y_j)}{1+|i-j|}$(6)

Entropy: Entropy is a metric for assessing the degree of unpredictability in a textural picture as expressed below.

$Entropy = -\displaystyle\sum_{i=1}^G \displaystyle\sum_{j=1}^G P(x_i,y_j)\,\log P(x_i,y_j)$(7)

Dissimilarity: In terms of the angle, the textural characteristic of the picture known as dissimilarity is determined by taking the image’s alignment into account and is expressed as:

$Dissimilarity = \displaystyle\sum_{i=1}^G \displaystyle\sum_{j=1}^G P(x_i,y_j)\,|i-j|$(8)

Correlation: In terms of pixels, correlation is a characteristic that represents the spatial relationships between them as defined below.

$Correlation= \frac{\displaystyle\sum_{i=1}^G \displaystyle\sum_{j=1}^G i\,j\,P(x_i,y_j)-M_x M_y}{\sigma_x \sigma_y}$(9)

where $M_x$ and $\sigma_x$ are the mean and standard deviation in the horizontal spatial domain, and $M_y$ and $\sigma_y$ are the mean and standard deviation in the vertical spatial domain, respectively.
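
Equations (5)-(9) can be transcribed directly into NumPy; the sketch below assumes P is a normalized G×G GLCM (for example, the normed output of graycomatrix):

import numpy as np

# Direct transcription of Equations (5)-(9) for a normalized G x G GLCM P.
def texture_features(P):
    G = P.shape[0]
    i, j = np.indices((G, G))
    contrast = np.sum(((i - j) ** 2) * P)                 # Eq. (5)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))       # Eq. (6)
    nz = P > 0                                            # avoid log(0)
    entropy = -np.sum(P[nz] * np.log(P[nz]))              # Eq. (7)
    dissimilarity = np.sum(np.abs(i - j) * P)             # Eq. (8)
    m_x, m_y = np.sum(i * P), np.sum(j * P)               # marginal means
    s_x = np.sqrt(np.sum(((i - m_x) ** 2) * P))
    s_y = np.sqrt(np.sum(((j - m_y) ** 2) * P))
    correlation = (np.sum(i * j * P) - m_x * m_y) / (s_x * s_y)  # Eq. (9)
    return contrast, homogeneity, entropy, dissimilarity, correlation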

Experimental Results for Various Prediction Models
To develop and evaluate the performance of the prediction approaches, we used Anaconda, an enterprise-ready, secure, and scalable data science platform, and Spyder (Python 3.6). To assess the performance of the proposed method, we downloaded a kidney disease dataset containing 400 patient records. We preprocessed the data to remove null values and other anomalies, then divided the dataset into training and testing parts, with 80% of the records used for training and 20% for testing. We created various prediction models using machine learning algorithms such as logistic regression (LR), naive Bayes, SVM, K-nearest neighbors (KNN), and ANN [32, 33].
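
A sketch of this pipeline, assuming scikit-learn, numeric features after preprocessing, and a hypothetical kidney_disease.csv file with a binary "class" column:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("kidney_disease.csv").dropna()   # remove null values
X, y = df.drop(columns=["class"]), df["class"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)         # 80/20 split as described

models = {
    "LR": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "ANN": MLPClassifier(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))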

Prediction Models with All Features
Table 2 and Fig. 10 show the performance of the prediction models when all features are considered, that is, when no feature selection technique is used. According to the table and graph, the accuracy of the LR, naive Bayes, SVM, KNN, and ANN-based prediction models with all features is 96.5%, 94%, 96.8%, 66.75%, and 65.05%, respectively.
These results show that the SVM-based prediction model has the highest accuracy at 96.8%, with LR close behind at 96.5%. The ANN-based prediction model has the lowest accuracy in detecting kidney diseases. LR and SVM perform similarly and can be used interchangeably for the early detection of kidney diseases. The LR and SVM-based prediction models also have the highest precision, recall, and F-measure values.

Table 2. Results of prediction models with all features
Machine learning algorithm | Precision (%) | Recall (%) | F-measure (%) | Accuracy (%)
Logistic regression | 97 | 94 | 94 | 96.5
Naive Bayes | 94 | 95 | 96 | 94
SVM | 96 | 96 | 96 | 96.8
KNN | 77 | 65 | 65 | 66.75
ANN | 43 | 64 | 55 | 65.05


Fig. 10. Comparative analysis of prediction model with all parameters.


Prediction Models with RFE Feature Selection Technique
Recursive feature elimination (RFE) is a wrapper-type feature selection algorithm; unlike filter approaches, it uses a model internally to rank features. It has two important configuration options: (i) the number of features to be selected, and (ii) the machine learning algorithm used in feature selection. RFE searches for a subset of features by starting with all features in the training dataset and removing features until the required number remains: the chosen machine learning algorithm ranks features by importance, the least important features are removed, and the model fitting process is repeated. The entire procedure is repeated until the specified number of features remains. A configuration sketch follows.
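
In scikit-learn, the two options map directly onto the RFE constructor; this sketch reuses the X_train/y_train split from the earlier pipeline sketch, and the number of retained features is illustrative:

from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rfe = RFE(estimator=LogisticRegression(max_iter=1000),  # (ii) ranking algorithm
          n_features_to_select=10,                      # (i) features to keep
          step=1)                 # drop one least-important feature per round
X_train_rfe = rfe.fit_transform(X_train, y_train)
print(rfe.support_)               # boolean mask of the retained features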
Table 3 and Fig. 11 show the prediction models built with basic LR and the RFE feature selection technique.

Table 3. Results of LR model with RFE feature selection technique
Metric | Basic logistic regression | Logistic regression with RFE feature selection
Precision (%) | 98.2 | 92.04
Recall (%) | 97.03 | 94.05
F-measure (%) | 98.03 | 93.47
Accuracy (%) | 97.05 | 91.48


Fig. 11. Comparison of LR model with and without RFE feature selection.


The LR model has a 97.05% accuracy without feature selection and a 91.48% accuracy with RFE feature selection. The precision, recall, and F-measure values are also higher when the RFE feature selection technique is not used. We therefore conclude that the accuracy of the basic logistic model is greater than that of the model with the RFE feature selection technique. Table 4 and Fig. 12 present the outcomes of prediction models constructed using basic SVM and the RFE feature selection technique.

Table 4. Results of SVM model with RFE feature selection technique
Metric | Basic SVM | SVM with RFE feature selection
Precision (%) | 98 | 98
Recall (%) | 97 | 96
F-measure (%) | 98 | 97
Accuracy (%) | 97.5 | 96.25


Fig. 12. Analysis of the results of the SVM model with the RFE feature selection technique.


From this, we can see that the SVM model has a 97.5% accuracy without feature selection and a 96.25% accuracy with RFE feature selection. The precision, recall, and F-measure values are also higher when the RFE feature selection technique is not used. We therefore conclude that the accuracy of the basic SVM model is greater than that of the model with the RFE feature selection technique. Models with and without feature selection techniques were also compared. According to the results, the LR model with chi-square feature selection provides the best accuracy in the detection of kidney disease, outperforming all other methods. Table 5 presents the results of various LR model combinations, and Fig. 13 graphically compares the accuracy of the different models.

Table 5. Prediction models with and without various feature selections
Prediction model | Accuracy (%)
Basic LR model | 91.25
LR model + RFE feature selection | 97.5
LR model + chi-square feature selection (K=5) | 92.5
LR model + chi-square feature selection (5<K<14) | 98.75
LR model + chi-square feature selection (K>14) | 97.5
The accuracies of the basic LR model, the LR model with RFE feature selection, the LR model with chi-square feature selection (K=5), the LR model with chi-square feature selection (5<K<14), and the LR model with chi-square feature selection (K>14) are 91.25%, 97.5%, 92.5%, 98.75%, and 97.5%, respectively, as shown in Table 5. This demonstrates that the chi-square method outperforms the RFE method in accuracy. It is also worth noting that the model produces good results with 5 to 15 of the best features out of a total of 24. To summarize, we achieved 98.75% accuracy in detecting kidney disease, the highest accuracy in comparison to existing approaches. A sketch of the chi-square sweep follows.
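
A sketch of the chi-square sweep over K, assuming scikit-learn, non-negative features (a requirement of the chi2 score), and the earlier train/test split:

from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

for k in (5, 10, 15):             # sweep K as in Table 5
    selector = SelectKBest(chi2, k=k).fit(X_train, y_train)
    lr = LogisticRegression(max_iter=1000).fit(
        selector.transform(X_train), y_train)
    preds = lr.predict(selector.transform(X_test))
    print(f"K={k}: accuracy={accuracy_score(y_test, preds):.4f}")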

Fig. 13. Comparative analysis of different prediction models with and without various feature selections.


Conclusion and Future Scope

An accurate and reliable brain tumor segmentation model can be developed using a range of machine learning and deep learning approaches. Compared to other machine learning approaches, such as k-means clustering, SVM, and threshold approaches, the results obtained with the convolutional neural network are superior. In this work, the CNN model and computer vision are used to classify persons who may have brain tumors into one of three categories and to automate the process of brain cropping from MRI scans. Collecting additional training images or tuning model hyperparameters may be more effective at improving the final accuracy than other techniques. The DWT may be combined with CNN and DNN to provide more accurate results in less time and with less power consumption than either alone. A limitation of brain tumor identification is its dependence on the variability in tumor location, shape, and size. Going forward, future studies can identify brain cancers more precisely using actual patient data acquired in real time from various image capture techniques (scanners).


Author’s Contributions

Chetan Swarup and Kamred Udham Singh proposed the presented study. Abhishek Kumar and Linesh Raja developed the idea and performed the computations. Teekam Singh and Ankit Kumar verified the analytical methodologies. Ramu Dubey promoted and supervised the outcomes of this research. All authors considered the findings and contributed to the final publication.


Funding

None.


Competing Interests

The authors declare that they have no competing interests.


Author Biography

Author
Name : Chetan Swarup
Affiliation : Department of Basic Science, College of Science and Theoretical Studies, Saudi Electronic University, Riyadh-Male Campus, Riyadh, Saudi Arabia
Biography : Chetan Swarup has been working as senior assistant professor in the Department of Basic Science at Saudi Electronic University, Riyadh (KSA) since 2015. He received his Ph. D. in Operations Research from CCS University, Meerut (India) in 2009. His research interests are in the field of Optimization Techniques, Differential Equation, Integral Equation etc. He has published several research publications in peer-reviewed journals.

Author
Name : Ankit Kumar
Affiliation : Department of Computer Science and Engineering, Swami Keshvanand Institute of Technology, Management & Gramothan, Jaipur, India
Biography : Ankit Kumar is an assistant professor in the Department of Computer Science at SKIT in Jaipur, India. He holds a master's degree in technology from the Indian Institute of Technology Allahabad and is now pursuing a doctorate at the Birla Institute of Technology. He has published several articles in national and international journals. His work has been highlighted in the areas of information security, wireless sensor networks, cloud computing, image processing, neural networks, and networking.

Author
Name : Kamred Udham Singh
Affiliation : Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
Biography : Kamred Udham Singh received a Ph.D. from Banaras Hindu University, India in 2019. From 2015 to 2016, he was a junior research fellow, and from 2017 to 2019, he was a senior research fellow with UGC (University Grant Commission), India. In 2019, he became an assistant professor at the School of Computing, Graphic Era Hill University, India. He is currently a post-doctoral fellow at the CSIE, NCKU, Taiwan. His research interests include image security and authentication, deep learning, medical image watermarking, and steganography.

Author
Name : Teekam Singh
Affiliation : Department of Mathematics, Graphic Era Hill University, Dehradun, India
Biography : Teekam Singh is an Assistant Professor at Department of Mathematics, Graphic Era Hill University, Dehradun, India. He has obtained PhD from Indian Institute of Technology Roorkee (IITR), India, and Master of Technology from Jawaharlal Nehru University (JNU), New Delhi, India. He has four years of teaching experience in the field of Mathematics, Applied Mathematics and Theoretical Computer Science. His research area includes scientific journal Medical Image Processing, Machine Learning, Computer Simulation and Mathematical Biology.

Author
Name : Linesh Raja
Affiliation : Department of Computer Application, Manipal University Jaipur, Jaipur, India
Biography : Linesh Raja is currently working as Assistant Professor at Manipal University Jaipur, Rajasthan, India. He earned a PhD in computer science in 2015. Before that, he completed his master's and bachelor's degrees at the Birla Institute of Technology, India. Dr. Linesh has published several research papers in the fields of wireless communication and mobile network security in various reputed national and international journals. He was recently appointed managing editor of the Taru Journal of Sustainable Technologies and Communication. He has edited the Handbook of Research on Smart Farming Technologies for Sustainable Development, IGI Global.

Author
Name : Abhishek Kumar
Affiliation : School of Computer Science and IT, JAIN (Deemed to be University), Bengaluru, India
Biography : Abhishek Kumar is currently working at JAIN, Bangalore and previously worked as an Assistant Professor of Computer Science at Banaras Hindu University (BHU), Varanasi. Dr. Abhishek Kumar is an Apple Certified Associate and Adobe Education Trainer, and is also certified by Autodesk. He has trained over 90,000 students across the globe from 161 countries. He was awarded a Ph.D., Doctorate in Computer Application (research area: stereoscopy, 3D animation, design, computer graphics, and HCI) in April 2018, and holds dual master's degrees in Animation & Visual Effects and in Computer Science, as well as a Bachelor of Science in Multimedia. He has published several research publications in peer-reviewed journals.

Author
Name : Ramu Dubey
Affiliation : Department of Mathematics, J. C. Bose University of Science and Technology, Faridabad, Haryana, India
Biography : Ramu Dubey received his Ph.D. in Mathematics from the Indian Institute of Technology Roorkee. He obtained his Master of Science (M.Sc.) from Banaras Hindu University (BHU), Varanasi, India. He has more than seven years of teaching experience in the field of Mathematics and has published 30 research publications in international journals of repute. His fields of research are Optimization and Nonlinear Dynamics. He is currently working as an Assistant Professor at J. C. Bose University of Science and Technology, Faridabad, Haryana, India.


References

[1] Z. Eksi, M. E. Ozcan, M. Cakiroglu, C. Oz, and A. Aralaşmak, “Differentiation of multiple sclerosis lesions and low-grade brain tumors on MRS data: machine learning approaches,” Neurological Sciences, vol. 42, no. 8, pp. 3389-3395, 2021.
[2] G. Karayegen and M. F. Aksahin, “Brain tumor prediction on MR images with semantic segmentation by using deep learning network and 3D imaging of tumor region,” Biomedical Signal Processing and Control, vol. 66, article no. 102458, 2021. https://doi.org/10.1016/j.bspc.2021.102458
[3] S. Kokkalla, J. Kakarla, I. B. Venkateswarlu, and M. Singh, “Three-class brain tumor classification using deep dense inception residual network,” Soft Computing, vol. 25, no. 13, pp. 8721-8729, 2021.
[4] U. Latif, A. R. Shahid, B. Raza, S. Ziauddin, and M. A. Khan, “An end‐to‐end brain tumor segmentation system using multi‐inception‐UNET,” International Journal of Imaging Systems and Technology, vol. 31, no. 4, pp. 1803-1816, 2021.
[5] K. D. Miller, Q. T. Ostrom, C. Kruchko, N. Patil, T. Tihan, G. Cioffi, et al., “Brain and other central nervous system tumor statistics, 2021,” CA: A Cancer Journal for Clinicians, vol. 71, no. 5, pp. 381-406, 2021.
[6] C. Narasimha and A. N. Rao, “An effective tumor detection approach using denoised MRI based on fuzzy Bayesian segmentation approach,” International Journal of Speech Technology, vol. 24, no. 2, pp. 259-280, 2021.
[7] S. Rasheed, K. Rehman, and M. S. H. Akash, “An insight into the risk factors of brain tumors and their therapeutic interventions,” Biomedicine & Pharmacotherapy, vol. 143, article no. 112119, 2021. https://doi.org/10.1016/j.biopha.2021.112119
[8] A. Srinivasa Reddy and P. Chenna Reddy, "MRI brain tumor segmentation and prediction using modified region growing and adaptive SVM," Soft Computing, vol. 25, no. 5, pp. 4135-4148, 2021.
[9] T. Sadad, A. Rehman, A. Munir, T. Saba, U. Tariq, N. Ayesha, and R. Abbasi, “Brain tumor detection and multi‐classification using advanced deep learning techniques,” Microscopy Research and Technique, vol. 84, no. 6, pp. 1296-1308, 2021.
[10] C. Ma, G. Luo, and K. Wang, “Concatenated and connected random forests with multiscale patch driven active contour model for automated brain tumor segmentation of MR images,” IEEE Transactions on Medical Imaging, vol. 37, no. 8, pp. 1943-1954, 2018.
[11] S. Preethi and P. Aishwarya, “An efficient wavelet-based image fusion for brain tumor detection and segmentation over PET and MRI image,” Multimedia Tools and Applications, vol. 80, no. 10, pp. 14789-14806, 2021.
[12] Z. Ye, K. Srinivasa, A. Meyer, P. Sun, J. Lin, J. D. Viox, et al., “Diffusion histology imaging differentiates distinct pediatric brain tumor histology,” Scientific Reports, vol. 11, article no. 4749, 2021. https://doi.org/10.1038/s41598-021-84252-3
[13] M. Zheng, Q. Du, X. Wang, Y. Zhou, J. Li, X. Xia, et al., “Tuning the elasticity of polymersomes for brain tumor targeting,” Advanced Science, vol. 8, no. 20, article no. 2102001, 2021. https://doi.org/10.1002/advs.202102001
[14] X. Wu, L. Bi, M. Fulham, D. D. Feng, L. Zhou, and J. Kim, “Unsupervised brain tumor segmentation using a symmetric-driven adversarial network,” Neurocomputing, vol. 455, pp. 242-254, 2021.
[15] S. Xiong, G. Wu, X. Fan, X. Feng, Z. Huang, W. Cao, et al., “MRI-based brain tumor segmentation using FPGA-accelerated neural network,” BMC Bioinformatics, vol. 22, article no. 421, 2021. https://doi.org/10.1186/s12859-021-04347-6
[16] M. Ghaffari, A. Sowmya, and R. Oliver, “Automated brain tumor segmentation using multimodal brain scans: a survey based on models submitted to the BraTS 2012–2018 challenges,” IEEE Reviews in Biomedical Engineering, vol. 13, pp. 156-168, 2020.
[17] Z. Tang, S. Ahmad, P. T. Yap, and D. Shen, “Multi-atlas segmentation of MR tumor brain images using low-rank based image recovery,” IEEE Transactions on Medical Imaging, vol. 37, no. 10, pp. 2224-2235, 2018.
[18] N. Noreen, S. Palaniappan, A. Qayyum, I. Ahmad, M. Imran, and M. Shoaib, “A deep learning model based on concatenation approach for the diagnosis of brain tumor,” IEEE Access, vol. 8, pp. 55135-55144, 2020.
[19] G. Hahn and H. J. Mentzel, “Tumors of the central nervous system in children and adolescents,” Radiologe, vol. 61, no. 7, pp. 601-610, 2021.
[20] Z. Huang, Y. Zhao, Y. Liu, and G. Song, “GCAUNet: a group cross-channel attention residual UNet for slice based brain tumor segmentation,” Biomedical Signal Processing and Control, vol. 70, article no. 102958, 2021. https://doi.org/10.1016/j.bspc.2021.102958
[21] C. M. Kharisma, C. A. Arina, and K. M. Iqbal, “The correlation between brain tumor location and onset of neuroophthalmic symptoms in brain tumor patients in Adam Malik General Hospital Medan,” Journal of the Neurological Sciences, vol. 429(Suppl), article no. 118452, 2021. https://doi.org/10.1016/j.jns.2021.118452
[22] S. H. Kim, K. H. Lim, S. Yang, and J. Y. Joo, “Long non-coding RNAs in brain tumors: roles and potential as therapeutic targets,” Journal of Hematology & Oncology, vol. 14, article no. 77, 2021. https://doi.org/10.1186/s13045-021-01088-0
[23] A. Gumaei, M. M. Hassan, M. R. Hassan, A. Alelaiwi, and G. Fortino, “A hybrid feature extraction method with regularized extreme learning machine for brain tumor classification,” IEEE Access, vol. 7, pp. 36266-36273, 2019.
[24] T. Zhou, S. Canu, P. Vera, and S. Ruan, “Latent correlation representation learning for brain tumor segmentation with missing MRI modalities,” IEEE Transactions on Image Processing, vol. 30, pp. 4263-4274, 2021.
[25] G. Wu, Y. Chen, Y. Wang, J. Yu, X. Lv, X. Ju, Z. Shi, L. Chen, and Z. Chen, “Sparse representation-based radiomics for the diagnosis of brain tumors,” IEEE Transactions on Medical Imaging, vol. 37, no. 4, pp. 893-905, 2018.
[26] R. L. Kumar, J. Kakarla, B. V. Isunuri, and M. Singh, “Multi-class brain tumor classification using residual network and global average pooling,” Multimedia Tools and Applications, vol. 80, no. 9, pp. 13429-13438, 2021.
[27] X. Lei, X. Yu, J. Chi, Y. Wang, J. Zhang, and C. Wu, “Brain tumor segmentation in MR images using a sparse constrained level set algorithm,” Expert Systems with Applications, vol. 168, article no. 114262, 2021. https://doi.org/10.1016/j.eswa.2020.114262
[28] G. Li, J. Sun, Y. Song, J. Qu, Z. Zhu, and M. R. Khosravi, “Real-time classification of brain tumors in MRI images with a convolutional operator-based hidden Markov model,” Journal of Real-Time Image Processing, vol. 18, no. 4, pp. 1207-1219, 2021.
[29] P. Peruzzi, P. Q. Valdes, M. K. Aghi, M. Berger, E. A. Chiocca, and A. J. Golby, “The evolving role of neurosurgical intervention for central nervous system tumors,” Hematology/Oncology Clinics, vol. 36, no. 1, pp. 63-75, 2022.
[30] H. H. Sultan, N. M. Salem, and W. Al-Atabany, “Multi-classification of brain tumor images using deep neural network,” IEEE Access, vol. 7, pp. 69215-69225, 2019.
[31] R. L. Kumar, J. Kakarla, B. V. Isunuri, and M. Singh, “Multi-class brain tumor classification using residual network and global average pooling,” Multimedia Tools and Applications, vol. 80, no. 9, pp. 13429-13438, 2021.
[32] A. Kishor, C. Chakraborty, and W. Jeberson, “Reinforcement learning for medical information processing over heterogeneous networks,” Multimedia Tools and Applications, vol. 80, no. 16, pp. 23983-24004, 2021.
[33] A. Kishor, C. Chakraborty, and W. Jeberson, “A novel fog computing approach for minimization of latency in healthcare using machine learning,” International Journal of Interactive Multimedia and Artificial Intelligence, vol. 6, no. 7, pp. 7-17, 2020. https://doi.org/10.9781/ijimai.2020.12.004

Article History

Received: 31 December 2021
Accepted: 10 February 2022
Published: 15 May 2022