Multimodal Medical Image Fusion Based on Pixel Significance Using Anisotropic Diffusion and Cross Bilateral Filter
  • Dawa Chyophel Lepcha1, Ayush Dogra2, Bhawna Goyal1,*, Jasgurpreet Singh Chohan3, Deepika Koundal4, Atef Zaguia5, and Habib Hamam6,7,8

Human-centric Computing and Information Sciences volume 12, Article number: 15 (2022)
https://doi.org/10.22967/HCIS.2022.12.015

Abstract

Medical image fusion is the process of combining visual information from several medical imaging inputs into a single fused image without loss of information or distortion. By retaining complete details in the fused image, it improves the clinical utility of medical imaging for the diagnosis and treatment of medical conditions. In recent years, numerous image fusion techniques have been proposed and have shown significant progress in the field of medical diagnosis. However, the fusion performance of these recent techniques is still prone to distortion, blurring and noise. To address these problems, this paper proposes a multimodal medical image fusion technique based on anisotropic diffusion and the cross bilateral filter (CBF) via pixel significance. First, the method applies edge-preserving processing to the original images, combining a linear low pass filter with nonlinear techniques that select meaningful regions of the source images while preserving edges. The selection of those regions is based on morphological processing of the linear filter residuals and aims to find meaningful regions characterized by edges of appropriate size and high amplitude. Anisotropic diffusion is then used to decompose the images into base and detail layers. The images are fused by weighted averaging, with the weights estimated from the detail images obtained from both the base and detail layers using the CBF. Lastly, the final fusion result is generated by a linear combination of the fused images of both layers. The proposed method is tested on several pairs of publicly available medical image datasets. Experimental results show that it performs remarkably well compared with other existing state-of-the-art methods in terms of both qualitative and quantitative analysis.


Keywords

Anisotropic Diffusion, Edge Preserving Filter, Image Processing, Medical Image Fusion, Morphologically-Processed Residuals, Cross Bilateral Filter (CBF), Machine Learning, Convolutional Neural Network (CNN)


Introduction

Medical imaging is a significant pillar of clinical decision making and an important aspect of many patient journeys. Medical images are used in a variety of clinical applications such as computer-aided detection, diagnosis, treatment planning, intervention and therapy. While medical imaging remains a significant part of a variety of clinical duties, the increasing shortage of experienced radiologists to interpret complicated medical images indicates a clear demand for reliable automated techniques to alleviate the growing burden on healthcare practitioners. Thus, the development of advanced computational techniques for the study of structured data such as images is benefiting medical imaging science [1, 2]. The development of techniques for image acquisition, processing and interpretation is driving innovation, especially in the fields of reconstruction, registration, fusion, detection, segmentation, tracking, and modelling. Medical images are intrinsically difficult to interpret and require prior expertise to understand. Biomedical images obtained under a range of acquisition conditions with different protocols can be noisy and incorporate several modality-specific artifacts. Medical image processing has been a booming domain of research for more than two decades, initially focusing on typical image analysis tasks including registration, segmentation and contrast enhancement. However, with the growth of medical image processing, the field of imaging biomarker discovery has focused on transforming functional data into relevant biomarkers that can generate insight into a variety of medical conditions [36].
Image fusion is a technique for incorporating details from multimodal images of the same scene into a single fused image that is more informative than any of the individual inputs. This study concentrates on multimodality medical image fusion, which combines medical images of the same region of the human body obtained using various imaging modalities. In practice, medical images obtained using these techniques contain information that represents the state of the human body, such as bone structures, metabolic rate and other details. However, a single image can provide only one kind of description of the human body. For example, magnetic resonance imaging (MRI) images give anatomical details with high spatial resolution. Computed tomography (CT) images can identify tissues with varying densities, i.e., blood vessels and bone structures, better than MRI, which reveals soft tissues rather than bone information. Similarly, positron emission tomography (PET) and single photon emission computed tomography (SPECT) images reveal metabolic and functional details, although their spatial resolution is low. Different imaging modalities therefore produce significantly different images that are complementary to each other. Thus, fusing multiple images from different modalities can be beneficial for medical evaluation, since a single fused image presents several types of information, enhances image quality and considerably increases the spatial resolution. Radiologists can then use this fused image to make a complete diagnosis, helping them to treat diseases better. In conclusion, fusing diverse sets of medical images is critical for diagnosing and treating disorders. Classic image fusion algorithms include the non-subsampled contourlet transform (NSCT), discrete wavelet transform (DWT), dual-tree complex wavelet transform (DT-CWT), etc. [7, 8]. However, these approaches are not very effective at fusing specific input image details and can introduce artifacts. In recent years, deep learning techniques have made exceptional progress in numerous image processing and computer vision applications due to advances in machine learning, and deep-learning-based research is becoming a key subject in the domain of image fusion [9]. Numerous image fusion approaches have also been proposed in digital photography, such as multisensor and multifocus fusion [10, 11], and in multimodality imaging, such as infrared and visible image fusion [12] and fusion of medical images [13].
To further increase the fusion quality of medical images, this study proposes a multimodal image fusion technique based on morphological processing of residuals via pixel significance using anisotropic diffusion (AD) and the cross bilateral filter (CBF). First, the proposed method introduces a low pass filter with morphologically processed residuals (LPMPR) for edge-aware processing of the source images. It integrates linear low pass filtering with nonlinear techniques that select meaningful regions of the images while preserving edges; this process can also be used to control the contrast. These images are then decomposed into base and detail layers through AD. The input images are subsequently fused by a weighted-average approach, with the weights estimated from the detail images obtained from both the base and detail layers using the CBF. Lastly, the fused images of both layers are combined to obtain the resultant fused image. The efficiency of the algorithm is verified on several pairs of medical image datasets. Experiments on standard test pairs of medical images indicate that the proposed method outperforms existing techniques in terms of both visual analysis and quantitative performance.
The key contributions of this research article are as follows:

The paper proposes an efficient medical image fusion framework based on morphological processing of residuals via pixel significance using AD and CBF that is capable of performing the fusion task effectively. The method applies fusion rules to both high frequency and low frequency components, taking into consideration both the extraction of detailed features and the preservation of structural details.

The method employs nonlinear filtering that localizes and extracts meaningful edges to be incorporated in the final image, preventing artifacts within image regions and thus increasing the visual perception quality of the fused image.

The proposed fusion approach can perform a controllable edge-aware image blur with contrast improvement, which can increase image quality with a controlled level of preserved information.

It proposes an effective and easy-to-implement image fusion framework with low computational cost. The fusion rules are effective for both high frequency and low frequency components, extracting detailed features and preserving structural details, respectively. The proposed fusion rule not only yields high quality fusion results but also keeps the computational cost low.

The remainder of this paper is structured as follows. Section 2 discusses the most significant works in the image fusion domain. Section 3 describes the proposed medical image fusion approach in detail. Section 4 presents the experimental details, results and discussion. Section 5 concludes the proposed research work.


Related Work

Numerous image fusion approaches have been proposed over the last few decades. A hybrid approach combining the curvelet transform (CT) and wavelet transform (WT) was proposed for the fusion of medical images by Agarwal et al. [14]. This method uses the WT to segment the input images into bands, and the segmented images are further fused into sub-bands using the CT, which divides the bands into overlapping tiles and effectively converts curves in the images into straight lines. To generate a more detailed fusion result, these tiles are combined by the inverse wavelet transform. The results demonstrate that the fusion output has minimal error and higher quality. To further increase image quality and reduce artifacts in the fused image, Bavirisetti and Dhuli [15] introduced an edge-preserving fusion approach for visible and infrared sensor images. The input images are first decomposed into approximation and detail layers by anisotropic diffusion. The Karhunen-Loeve transform and linear superposition are then used to compute the final approximation and detail layers, and their linear combination produces the fusion result. This approach yields significant contrast improvement while maintaining important image information. For fusing multifocus images, a hybrid technique combining the stationary wavelet transform (SWT) and principal component analysis (PCA) has also been proposed [16]. The SWT generates features from the input images and decomposes each image into four sub-bands. A PCA-based fusion rule then computes the eigenvectors of these sub-bands and keeps the maximum eigenvector, because eigenvectors represent the images optimally. According to the evaluation parameters, this method improves image information and helps eliminate artifacts, resulting in superior visual perception.
A fusion method based on an infrared vein transmitting technique using multilight intensity imaging is proposed in [17]. The infrared transmitting technique is computed from multiple pixel values under several light intensities in the same scene, based on the observed finger vein images. The finger vein images obtained by the system are normalized, and the vein pattern carrying the biometric information of the tested subject is preserved intact by the fusion process used in the biometric system. A multiscale fusion approach based on a visual saliency map (VSM) and weighted least squares (WLS) optimization was further introduced with the goal of overcoming some of the shortcomings of conventional fusion methods [18]. To decompose the input image into base and detail layers, this technique employs a multiscale decomposition based on rolling guidance filtering (RGF) and the Gaussian filter, which has the unique property of preserving scale-specific details while decreasing halos near edges. The fusion results of this method appear more natural and are more appropriate for human visualization. For different types of multimodal images, an image fusion framework with fast filtering in the spatial domain has been proposed [19]. First, the magnitude of the image gradient is used to identify image contrast and sharpness. A fast morphological closing operation is then performed on the image gradient magnitude to fill holes and bridge gaps. A weight map is generated from the gradient magnitudes of the multiple images and filtered using a fast structure-preserving filter. Lastly, a weighted-sum rule is used to generate the fusion result. This method helps provide a natural appearance in the fused multimodal images. In medical image fusion, a Siamese convolutional network (SCN) has been used to obtain a weight map that combines the pixel activity details from two input images [20]. To be more consistent with human visualization, the fusion is carried out in a multiscale manner using image pyramids, and the fusion rule for the decomposed components is adaptively adjusted using a local similarity-based technique. Experimental results show that this technique can produce optimal performance in terms of both objective and subjective evaluation.
A novel multisensor fusion method has also been presented to enhance human activity recognition and lower the rate of misrecognition [21]. To combine predicted values from several motion sensors, this method presents a multi-view ensemble algorithm. In addition, computationally efficient classification techniques such as decision trees, k-nearest neighbors and logistic regression are employed to execute flexible, diverse and dynamic human activity detection. This strategy can improve recognition accuracy by exploiting the distinct feature properties of each sensor and multiple classifier systems. A simple and fast fusion technique based on guided filters was introduced in [22]. Multiscale image decomposition, visual saliency detection, the structure transferring property and weight map construction are employed to combine relevant image information into the final fusion result. The fusion analysis shows that this technique is very promising and takes little time to run. A real-time image fusion approach uses a pre-trained neural network to produce a single image containing features from multimodal images [23]. Deep feature maps extracted from a convolutional neural network (CNN) are used to fuse the images, and the fusion weights that drive the multimodal fusion are generated by comparing these feature maps.
Lu et al. [24] proposed several coupled image fusion techniques based on coupled matrix and tensor factorization optimization and a flexible coupling technique, referred to as coupled image factorization optimization (CIF-OPT) and the modified flexible coupling technique, respectively. Experimental results reveal that the CIF-OPT approach remains robust under the influence of various noises; in particular, CIF-OPT can properly reconstruct an image without losing important details. However, such methods remain prone to color distortion, blurring and noise. A Laplacian re-decomposition (LRD) technique was proposed for multimodality medical image fusion to solve these problems [25]. This method offers two technical innovations. First, it introduces a Laplacian decision graph decomposition technique with image enhancement to acquire complementary details, redundant details and low frequency sub-band images. Second, to account for the heterogeneous features of the redundant and complementary details, it introduces the concept of overlapping and non-overlapping domains, where the overlapping domain facilitates the fusion of redundant details and the non-overlapping domain (NOD) is responsible for the fusion of complementary details. Nevertheless, because of the degradation of medical images during acquisition, medical image fusion remains a difficult task. To overcome this problem, Polinati and Dhuli [26] proposed a fusion approach that reduces distortion using an empirical wavelet transform (EWT) representation and a local energy maxima fusion rule to combine complementary information from multimodal imaging techniques. Depending on the nature of the source images, the basis functions of the EWT are optimally selected, which helps preserve important details such as edges that are crucial in image fusion.
Subbiah Parvathy et al. [27] proposed a deep-learning-based fusion concept built on optimal thresholding. An optimal threshold for the fusion algorithm in the Shearlet transform (ST) domain is computed using enhanced monarch butterfly optimization (EMBO). The feature extraction portion of the deep learning technique is then used to fuse the low and high frequency sub-bands based on feature maps, and the fusion process is carried out using a restricted Boltzmann machine (RBM). Lepcha et al. [28] proposed an approach to fuse medical images using RGF and CBF. This method uses the CBF to exploit grey-level similarity and the geometric closeness of neighborhood pixels without smoothing edges. For scale-aware operation, the detail images are processed through an RGF, which removes small-scale structures, retains the other contents of the source images and efficiently preserves edges. Weights are then estimated by computing the detail strength of both base and detail layers, obtained by subtracting the CBF outputs from the morphologically processed images. These weights are multiplied directly with the original images to generate fused images of the base and detail layers separately, and a linear combination rule produces the final fusion result. To retrieve image optimization with lower computational cost and time, Jose et al. [29] proposed a novel multimodal medical image fusion approach based on an adolescent identity search algorithm in the non-subsampled Shearlet transform (NSST) domain; in general, the NSST is a multiscale and multidirectional wavelet transform. Goyal et al. [30] recently proposed a multimodality medical image fusion approach that combines low-resolution multimodal medical images with low computational complexity to increase target recognition accuracy and provide a foundation for clinical applications. First, the salient structure extraction algorithm applies RGF to the source images to remove small-scale structures while preserving textures and simultaneously recovering prominent edges. An image gradient operator is then applied to the filtered images to recover large-scale structures, and a domain transform filter (DTF) is used to retain small-scale structures in the vicinity of large-scale structures. The output of the DTF is used as weight maps, which are combined with the source images by a weighted-sum rule to obtain the fused image. Kaur and Singh [31] proposed a medical image fusion method that decomposes images into sub-bands using the NSCT domain. An extreme variant of Inception, selected via multi-objective differential evolution, is then used to extract features from the input images. The fused coefficients are obtained using the coefficient of determination and an energy-loss-based fusion function, and an inverse NSCT finally produces the fusion result.
To retrieve image optimization and decrease computational cost, a multimodality fusion technique for medical images that combines the benefits of NSCT and fuzzy entropy was proposed for clinical applications [32]. This approach increases the accuracy of target recognition and the quality of medical images to a great extent. Recently, a fusion method based on two-scale image reconstruction was proposed by Hu et al. [33]. To fuse base layers containing large-scale structural details, an enhanced guided-image-filter-based weighted average rule employing Gabor energy is proposed, and sparse representation with separable dictionary learning is introduced to capture the small-scale data of the detail layers. Lastly, the fused base and detail layers are combined using a texture enhancement fusion approach to generate the fusion result. Chen et al. [34] introduced a medical image fusion method that preserves the structural and detail information of the source images. First, RGF is used to decompose each source image into structural and detail parts. The structural part is fused using a Laplacian pyramid based fusion method, and the sum-modified-Laplacian based fusion method is used to fuse the detail parts. Lastly, the final fused image is obtained by combining the fused structural and detail parts. The results indicate that this approach outperforms numerous other recent fusion approaches.


The Proposed Medical Image Fusion Rule

In this paper, an efficient medical image fusion framework based on morphological processing of residuals via pixel significance using AD and CBF is proposed. The framework of the algorithm is shown in Fig. 1. The proposed fusion algorithm consists of four basic steps: (1) edge-preserving processing of the original images using LPMPR, which combines a linear low pass filter with nonlinear techniques that select meaningful regions of the original images while preserving edges; (2) decomposition of the images into base and detail layers using AD; (3) fusion of the images by weighted averaging, using weights estimated from the detail images obtained with the CBF; and (4) a linear combination rule applied to the fused images of both layers to obtain the final fusion result. The details of the method are given below.

Fig. 1. Framework of proposed methodology.
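For orientation, the sketch below strings these four steps together in Python. The helper names (lpmpr, anisotropic_diffusion, cross_bilateral, weighted_fusion) are illustrative placeholders that are sketched in the following subsections; this is a minimal reading of Fig. 1, not the authors' released implementation.

```python
def fuse_pair(A, B):
    """High-level sketch of the pipeline in Fig. 1 for two co-registered sources."""
    # Step 1: edge-preserving pre-processing (LPMPR, Section 3.1)
    Ia, Ib = lpmpr(A), lpmpr(B)
    # Step 2: anisotropic diffusion gives the base layers; detail = source - base
    P, Q = anisotropic_diffusion(Ia), anisotropic_diffusion(Ib)
    R, S = Ia - P, Ib - Q
    # Step 3: CBF detail images give pixel-significance weights (Sections 3.3-3.4)
    P_D, Q_D = P - cross_bilateral(Q, P), Q - cross_bilateral(P, Q)
    R_D, S_D = R - cross_bilateral(S, R), S - cross_bilateral(R, S)
    K = weighted_fusion(A, B, P_D, Q_D)   # fused base layer, Eq. (20)
    L = weighted_fusion(A, B, R_D, S_D)   # fused detail layer, Eq. (21)
    # Step 4: linear combination, Eq. (22)
    return K + L
```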


Low Pass Filter with Morphologically Processed Residuals (LPMPR)
This section discusses the LPMPR [35] in detail. It starts from a linear low pass filtering of the source image, denoted as:

$I_{lf}=A * L$(1)

where A stands for the original image, L stands for the mask of the Gaussian filter (or any other linear low pass filter) and * is a convolution operator.
The residual of the linear filter is given by

$Res(A) = A - I_{lf}$(2)

Further processing of the residual is based on operators that are defined on positive-valued images. For this reason, $Res(A)$, which contains both positive and negative values, is split into two fractions, a positive one and a negative one:

$I_{res+} = 0.5\,(Res(A)+|Res(A)|); \quad I_{res-} = 0.5\,(|Res(A)|-Res(A))$(3)

The fractions of the residual fulfill the following clear relation:

$Res(A)=I_{res+}-I_{res-}$(4)

Here the computation is shown for source image $A$ only; the same procedure applies to source image $B$.
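As a concrete illustration, the following minimal NumPy/SciPy sketch implements Equations (1)-(4); the choice of a Gaussian low pass filter and the function name split_residual are assumptions made for demonstration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_residual(A, sigma):
    """Eqs. (1)-(4): low pass filtering and the two residual fractions."""
    A = np.asarray(A, dtype=float)
    I_lf = gaussian_filter(A, sigma)            # Eq. (1): I_lf = A * L
    res = A - I_lf                              # Eq. (2)
    I_pos = 0.5 * (res + np.abs(res))           # Eq. (3), positive fraction
    I_neg = 0.5 * (np.abs(res) - res)           # Eq. (3), negative fraction
    # Eq. (4): res == I_pos - I_neg holds by construction
    return I_lf, I_pos, I_neg
```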
Depending on the amplitude of the residual, both fractions ($I_{res+}$, $I_{res-}$) are further processed to cut off irrelevant variations of the residual while preserving important ones. The processing relies on a morphological operator $\mathcal{M}$ based on reconstruction, which selects meaningful residual portions while preserving their original structure. Lastly, the important regions of the residual are added back to the filtered image to recover the relevant edges, while the image remains blurred within irrelevant regions:

$I_{out} = I_{lf} + \mathcal{M}(I_{res+}) - \mathcal{M}(I_{res-})$(5)

The function $\mathcal{M}$, which is identical for both residual fractions, is defined as follows:

$\mathcal{M}(I)= R_I\left(\min\left(I,\; S_t(I)\big|_{\min\{I\},\max\{I\}}\right)\right),$(6)

where $R_I(G)$ denotes the morphological reconstruction of the gray level mask image $I$ from the marker $G$, and "$|$" denotes the mapping that converts the binary image into a gray level image by replacing its 1's and 0's with the two given values. The "min" operator is the pointwise minimum of two images. The binary marker $S$, which selects the important and meaningful regions in which the contrast should be preserved, is the most significant element of $\mathcal{M}$ (Equation (6)). The amplitude of the residual determines the choice of these regions:

$S_t(I)=(I \geq t)$(7)

$S_t$ is a selection operator that extracts the regions of $I$ whose amplitude is higher than the given threshold $t$. Examples of filtering test images with this method for different values of $t$ are shown in [35]. There, the binary masks (markers of relevant areas) are drawn in white (positive binary mask, $S_t(I_{res+})$) and black (negative binary mask, $S_t(I_{res-})$), with grey indicating areas in which both masks are zero. Increasing the threshold yields a smaller number of detected areas, which are subsequently restored and used to reconstruct the original content of the image; consequently, the number of areas in which the original sharpness is reconstructed decreases.
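A minimal sketch of the operator $\mathcal{M}$ of Equations (6)-(7) is given below, assuming that grey-level reconstruction by dilation from scikit-image plays the role of $R_I$; the optional size threshold s anticipates Equation (8) in the next subsection.

```python
import numpy as np
from skimage.morphology import reconstruction, remove_small_objects

def morph_process(I_res, t, s=None):
    """Morphological operator M of Eqs. (6)-(7), optionally with Eq. (8)."""
    I_res = np.asarray(I_res, dtype=float)
    mask = I_res >= t                                  # Eq. (7): amplitude criterion
    if s is not None:                                  # Eq. (8): size criterion o(s)
        mask = remove_small_objects(mask, min_size=s)
    # map the mask to {min(I), max(I)} and keep I only inside selected regions
    marker = np.minimum(I_res, np.where(mask, I_res.max(), I_res.min()))
    return reconstruction(marker, I_res, method='dilation')   # R_I(...)
```

With this helper, Equation (5) reads I_out = I_lf + morph_process(I_pos, t) - morph_process(I_neg, t).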

Selection of meaningful regions using a size criterion
In this technique, the boundaries of meaningful regions are found in binary masks retrieved by thresholding, and the amplitude of the residual is used to estimate their "meaningfulness." However, amplitude is not the only aspect that defines the significance of image regions; one can easily find image elements with high residual amplitude that are not significant for understanding the image. For instance, adding salt-and-pepper noise to the source images produces many small elements of high amplitude and accordingly modifies the residual images by adding high-amplitude components. These would be detected as relevant areas, which is not desirable.
An additional step has been included in the approach to address this problem. The area opening filter [36] is used to filter the binary mask. This filter eliminates from the image all connected components smaller than a given size threshold $s$, i.e., the size coefficient. Thus, Equation (7) is extended as

$S_{t,s}(I)=(I \geq t)\, o(s),$(8)

where $o(s)$ denotes the area opening that removes components smaller than the size defined by $s$ (the number of pixels in a connected component of the thresholded residual fraction). As the coefficients $s$ and $t$ increase, the number of selected regions decreases. This makes it possible to reject objects that are small in terms of pixel count from the residual even when their amplitude is high, and thus to keep these portions blurred in the resulting image.

Contrast control
Adding (or subtracting, depending on the filter mask components) a high pass filtered version of an image to the image itself is a classic technique for contrast enhancement. It relies on the high pass filter detecting local variations of pixel values. Another way to obtain a high pass filtering result is to compute the difference between the low pass filtered image and the image itself. In the proposed method, the morphologically processed residual corresponds to the image frequency components whose amplitude exceeds the threshold $t$. A contrast control coefficient $c$ is introduced into Equation (5) to control the contrast of the output image, resulting in the adjusted formula:

$I_{out} = I_{lf} + c\,(\mathcal{M}(I_{res+}) - \mathcal{M}(I_{res-})).$(9)

The contrast is either increased ($c > 1$), preserved ($c = 1$) or decreased ($0 < c < 1$) depending on $c$. Examples in [35] show how the contrast control coefficient influences the processing output, as well as the effect of contrast enhancement without morphological processing of the residual. Comparing those images shows how the morphological processing influences the amount of small-scale information visible in the final image: the proposed method rejects information of lesser importance, leaving only the contrast of important areas to be improved.
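Combining Equations (5) and (9), and reusing the split_residual and morph_process helpers sketched above, the whole LPMPR step can be summarized as follows; the default parameter values mirror those quoted later in Section 4 (t = 0.15, s = 5, c = 1.2, Gaussian sigma = 72) and are reproduced here purely for illustration.

```python
def lpmpr(A, sigma=72.0, t=0.15, s=5, c=1.2):
    """Low pass filter with morphologically processed residuals, Eqs. (1)-(9)."""
    I_lf, I_pos, I_neg = split_residual(A, sigma)      # Eqs. (1)-(3)
    M_pos = morph_process(I_pos, t, s)                 # Eqs. (6)-(8)
    M_neg = morph_process(I_neg, t, s)
    return I_lf + c * (M_pos - M_neg)                  # Eq. (9)
```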

Anisotropic Diffusion
This section discusses the AD [15] that decomposes edge preserved images (as discussed in Section 3.1) into base and detail layers.
Let the edge-preserved versions of the source images $A$ and $B$ be $\{I_{out1_n}(m,n)\}_{n=1}^{N}$ and $\{I_{out2_n}(m,n)\}_{n=1}^{N}$, each of size $a \times b$ and co-registered. These images are then processed using anisotropic diffusion to obtain the base layers.

$P_n (m,n)=aniso (I_{out1_n} (m,n))$(10)

$Q_n (m,n)=aniso (I_{out2_n} (m,n))$(11)

where $P_n(m,n)$ and $Q_n(m,n)$ are the $n$th base layers, and $aniso(I_{out1_n}(m,n))$ and $aniso(I_{out2_n}(m,n))$ denote the anisotropic diffusion operation applied to the $n$th source images (as discussed in [15]). The detail layers are generated by subtracting the base layers from the edge-preserved images:

$R_n (m,n)=I_{out1_n} (m,n)-P_n (m,n)$(12)

$S_n (m,n)=I_{out2_n} (m,n)-Q_n (m,n)$(13)
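The following is a minimal Perona-Malik style sketch of the anisotropic diffusion step, from which the base and detail layers of Equations (10)-(13) follow; the iteration count, conduction constant kappa and step size lam are assumed values, not those prescribed in [15].

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=3, kappa=30.0, lam=0.15):
    """Perona-Malik style diffusion; the smoothed output is the base layer."""
    base = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # finite differences toward the four neighbours (wrap borders for brevity)
        dN = np.roll(base, -1, axis=0) - base
        dS = np.roll(base, 1, axis=0) - base
        dE = np.roll(base, -1, axis=1) - base
        dW = np.roll(base, 1, axis=1) - base
        # exponential conduction (edge-stopping) coefficients
        cN, cS = np.exp(-(dN / kappa) ** 2), np.exp(-(dS / kappa) ** 2)
        cE, cW = np.exp(-(dE / kappa) ** 2), np.exp(-(dW / kappa) ** 2)
        base += lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return base

# Base and detail layers for an edge-preserved source (Eqs. (10) and (12)):
# P = anisotropic_diffusion(I_out1);  R = I_out1 - P
```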



Cross Bilateral Filter (CBF)
The CBF [37] is a nonlinear and non-iterative technique that combines low pass filtering with an edge-stopping function, shrinking the filter kernel where the intensity difference between pixels is large. The filter weights depend both on the Euclidean distance between pixels and on their distance in color or gray-level space, so that both the grey-level similarity and the geometric closeness of neighborhood pixels are considered. The benefit of this filter is that it smooths images while preserving edges by taking the neighborhood pixels into account.
Computationally, for an image $P_n$ (in Equation (10)), the output of the bilateral filter (BF) at pixel location $a$ is computed as follows [38]:

$P_{n_F}(a) = \frac{1}{W} \displaystyle\sum_{b \in S} G_{\sigma_s}(\|a-b\|)\, G_{\sigma_r}(|P_n(a)-P_n(b)|)\, P_n(b)$(14)

where $G_{\sigma_s}(\|a-b\|)$ is the geometric closeness function, $G_{\sigma_r}(|P_n(a)-P_n(b)|)$ is the gray level similarity (edge stopping) function, and $W=\displaystyle\sum_{b \in S} G_{\sigma_s}(\|a-b\|)\, G_{\sigma_r}(|P_n(a)-P_n(b)|)$ is a normalization constant. $\|a-b\|$ is the Euclidean distance between $a$ and $b$, and $S$ is a spatial neighborhood of $a$.
Similarly, $Q_{n_F}$, $R_{n_F}$ and $S_{n_F}$ can be calculated using Equation (14) from the corresponding images in Equations (11), (12) and (13). Here, $\sigma_s$ and $\sigma_r$ control the behavior of the BF; the dependence of the filter behavior on $\sigma_r/\sigma_s$ and on the derivative of the input images has been analyzed in [37]. The value of $\sigma_s$ is selected according to the desired amount of low pass filtering: a larger $\sigma_s$ blurs more, because it combines values from more distant locations. If the images are scaled down or up, $\sigma_s$ must be adjusted to obtain an optimal result; a suitable range for $\sigma_s$ is roughly 1.5-2.1. Similarly, an appropriate value of $\sigma_r$ depends on the amount of edge detail to be preserved; if the image intensities are amplified or attenuated, $\sigma_r$ must be adjusted accordingly to obtain the same output.
The cross bilateral filter considers both the grey-level similarity and the geometric closeness of neighborhood pixels in $P_n$ (Equation (10)) to shape the filter kernel, and filters the image $Q_n$ (Equation (11)). The CBF output for image $Q_n$ at pixel location $a$ is calculated as [39]:

$Q_{n_{CBF}}(a)=\frac{1}{W} \displaystyle\sum_{b \in S} G_{\sigma_s}(\|a-b\|)\, G_{\sigma_r}(|P_n(a)-P_n(b)|)\, Q_n(b)$(15)

where $G_{\sigma_s}(\|a-b\|)$ is the geometric closeness function, $G_{\sigma_r}(|P_n(a)-P_n(b)|)$ is the gray level similarity (edge stopping) function, and $W=\displaystyle\sum_{b \in S} G_{\sigma_s}(\|a-b\|)\, G_{\sigma_r}(|P_n(a)-P_n(b)|)$ is a normalization constant.
Similarly, $P_{n_{CBF}}$, $R_{n_{CBF}}$ and $S_{n_{CBF}}$ can be calculated using Equation (15) from the corresponding images in Equations (10), (12) and (13). The detail images are obtained by subtracting the CBF output from the respective image: for the images $P_n$, $Q_n$, $R_n$ and $S_n$ they are given by $P_{n_D} = P_n - P_{n_{CBF}}$, $Q_{n_D} = Q_n - Q_{n_{CBF}}$, $R_{n_D} = R_n - R_{n_{CBF}}$ and $S_{n_D} = S_n - S_{n_{CBF}}$, respectively (as shown in Fig. 1). In medical images, a portion that is unfocused in image $P_n$ may be focused in image $Q_n$; applying the CBF to $Q_n$ blurs $Q_n$ in such regions, because the nearly constant gray values of the unfocused portion of $P_n$ make the filter kernel resemble a Gaussian. The idea is that the detail image $Q_{n_D}$ then retains the focused region of $Q_n$, which can be used to derive the weights for fusing the images by a weighted average. Similarly, details present in $Q_n$ but absent in $P_n$ are blurred when the cross bilateral filter is applied to $Q_n$, since in those regions the gray levels of $P_n$ carry the same information and the kernel again resembles a Gaussian (see [37] for details).
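A direct, unoptimized sketch of the cross bilateral filter of Equation (15) and of the resulting detail images is given below; the kernel size and sigma defaults follow the values assumed later in Section 4 (n = 11, sigma_s = 1.8, sigma_r = 25), and the function name cross_bilateral is illustrative.

```python
import numpy as np

def cross_bilateral(P, Q, ksize=11, sigma_s=1.8, sigma_r=25.0):
    """Filter Q with a kernel shaped by grey-level similarity in P (Eq. (15))."""
    Pf, Qf = P.astype(float), Q.astype(float)
    r = ksize // 2
    Pp = np.pad(Pf, r, mode='reflect')
    Qp = np.pad(Qf, r, mode='reflect')
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    G_s = np.exp(-(x ** 2 + y ** 2) / (2 * sigma_s ** 2))    # geometric closeness
    out = np.zeros_like(Pf)
    for i in range(Pf.shape[0]):
        for j in range(Pf.shape[1]):
            pP = Pp[i:i + ksize, j:j + ksize]
            pQ = Qp[i:i + ksize, j:j + ksize]
            G_r = np.exp(-(pP - Pf[i, j]) ** 2 / (2 * sigma_r ** 2))  # edge stopping
            w = G_s * G_r
            out[i, j] = (w * pQ).sum() / w.sum()              # normalized by W
    return out

# Detail images used for weight estimation (Fig. 1), e.g. for the base layers:
# Q_D = Q - cross_bilateral(P, Q);  P_D = P - cross_bilateral(Q, P)
```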

Pixel based Fusion Rule
The fusion rule specified in [40] is adopted here, for completeness, to compute the effectiveness of the algorithm. The weights are calculated using statistical features of a neighborhood of detail coefficients rather than wavelet coefficients. A window of size $w \times w$ around a detail coefficient $P_{n_D}(m,n)$, $Q_{n_D}(m,n)$, $R_{n_D}(m,n)$ or $S_{n_D}(m,n)$ is used as the neighborhood for computing its weight. This neighborhood is denoted by the matrix $K$. Each row of $K$ is treated as an observation and each column as a variable in order to compute the unbiased estimate $C_h^{m,n}$ of its covariance matrix [41], where $(m,n)$ are the spatial coordinates of the detail coefficient $P_{n_D}(m,n)$, $Q_{n_D}(m,n)$, $R_{n_D}(m,n)$ or $S_{n_D}(m,n)$.

$covariance(K)= E\left[(K - E[K])(K - E[K])^T\right]$(16)

$C_h^{m,n}= \frac{\displaystyle\sum_{k=1}^{w} (f_k- \overline{f})(f_k- \overline{f})^T}{w-1}$(17)

where $f_k$ is the $k$th observation of the $w$-dimensional variable and $\overline{f}$ is the observation mean. The diagonal of the matrix $C_h^{m,n}$ provides a vector of the variances of the columns of $K$. The eigenvalues of $C_h^{m,n}$ are computed, and their number depends on the size of $C_h^{m,n}$. The sum of these eigenvalues measures the horizontal detail strength of the neighborhood, denoted HdetailStrength. Similarly, an unbiased covariance estimate $C_v^{m,n}$ is calculated by treating each column of $K$ as an observation and each row as a variable (i.e., the opposite of $C_h^{m,n}$), and the sum of the eigenvalues of $C_v^{m,n}$ gives VdetailStrength. These are expressed as follows:

$HdetailStrength(m,n) = \displaystyle\sum_{k=1}^{w} eigen_k \text{ of } C_h^{m,n}$
$VdetailStrength(m,n) = \displaystyle\sum_{k=1}^{w} eigen_k \text{ of } C_v^{m,n}$(18)

Here $eigen_k$ is the $k$th eigenvalue of the unbiased estimate of the covariance matrix. The weight of a given detail coefficient is then obtained by adding the two corresponding detail strengths. In particular, the weights depend only on the detail strengths, not on the actual intensity values:

$WT(m,n) = HdetailStrength (m,n) + VdetailStrength (m,n)$(19)

After computing the weights of the detail coefficients of both input images as above, the weighted average of the inputs appears in the final fusion result.
Here $WT_p$ and $WT_q$ are the weights of the detail coefficients $P_{n_D}$ and $Q_{n_D}$, which belong to the images $P_n$ and $Q_n$, respectively; $WT_r$ and $WT_s$ are the weights of the detail coefficients $R_{n_D}$ and $S_{n_D}$, belonging to $R_n$ and $S_n$, respectively. The weighted averages for the base and detail layers are then computed to obtain the fused images, as given in Equations (20) and (21):

$K(m,n) = \frac{A(m,n)\, WT_p(m,n)+B(m,n)\, WT_q(m,n)}{WT_p(m,n)+WT_q(m,n)}$(20)

$L(m,n) = \frac{A(m,n)\, WT_r(m,n)+B(m,n)\, WT_s(m,n)}{WT_r(m,n)+WT_s(m,n)}$(21)
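The weight computation of Equations (16)-(19) and the weighted-average fusion of Equations (20)-(21) can be sketched as follows; the window size defaults to the 5×5 neighbourhood used in Section 4, and the small epsilon guarding against division by zero is an implementation assumption.

```python
import numpy as np

def detail_strength_weights(D, w=5):
    """WT(m,n) = HdetailStrength + VdetailStrength (Eqs. (16)-(19))."""
    r = w // 2
    Dp = np.pad(D.astype(float), r, mode='reflect')
    WT = np.zeros(D.shape, dtype=float)
    for i in range(D.shape[0]):
        for j in range(D.shape[1]):
            K = Dp[i:i + w, j:j + w]                  # w x w neighbourhood matrix
            # the sum of eigenvalues of a covariance matrix equals its trace
            h = np.trace(np.cov(K, rowvar=False))     # rows as observations -> C_h
            v = np.trace(np.cov(K, rowvar=True))      # columns as observations -> C_v
            WT[i, j] = h + v                          # Eq. (19)
    return WT

def weighted_fusion(A, B, D_a, D_b, eps=1e-12):
    """Weighted average of the source images A and B (Eqs. (20)-(21))."""
    WT_a = detail_strength_weights(D_a)
    WT_b = detail_strength_weights(D_b)
    return (A * WT_a + B * WT_b) / (WT_a + WT_b + eps)

# Usage per Fig. 1: K = weighted_fusion(A, B, P_D, Q_D) for the base layer and
# L = weighted_fusion(A, B, R_D, S_D) for the detail layer.
```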



Linear Combination
The resultant fused image $F$ is obtained by a linear combination of the fused images of the base and detail layers.

$F(m,n)=K(m,n)+L(m,n)$(22)


Empirical Study

In this experiment, four pairs of multimodal medical images are used to assess the effectiveness and efficiency of the proposed method against other methods. Three pairs of CT and MRI images (Datasets 1-3) and one pair of MRI and PET images (Dataset 4) are used to test the performance. The spatial resolution of each image is 256×256 pixels. The source images are presented in Fig. 2; the medical image pairs are obtained from [42, 43]. All experiments are conducted in MATLAB 2019b and run on an Intel Core i7-4790 @3.6 GHz with 8.00 GB RAM. To evaluate the effectiveness of the proposed method, seven mainstream image fusion techniques are used for comparison: AD via the Karhunen-Loeve transform (ADF) [15], multifocus image fusion using SWT and PCA [16], MGIVF [22], fast filtering image fusion (FFIF) [19], CNN [20], fusion of visible and infrared images using VSM and WLS optimization [18], and the zero-learning medical image fusion algorithm (ZLMIF) [23], each with the simulation parameters given in its respective paper. Following [37], the parameter values are set to $\sigma_s = 1.8$, $\sigma_r = 25$ and kernel size $n = 11$ for all medical image dataset pairs; an 11×11 neighborhood window is used for the CBF and a 5×5 neighborhood window for computing the detail strengths. The influence of these parameters is analyzed accordingly. Furthermore, following [35], the texture filtering parameters are set to amplitude threshold $t = 0.15$, size threshold $s = 5$, contrast coefficient $c = 1.2$ and Gaussian filter sigma of 72, and the number of AD [15] iterations is set to 3.

Fig. 2. Source images for comparative experiments: (a) Dataset 1 (Pair I), (b) Dataset 2 (Pair II), and (c) Dataset 3 (Pair III) are CT and MRI source image pairs, and (d) Dataset 4 (Pair IV) is an MRI-PET source image pair [42, 43].


Objective Evaluation Metrics
To demonstrate the performance of the fusion results of the compared algorithms, four standard evaluation metrics are used: the edge-information-based parameter $Q_x^{pq/f}$, standard deviation (SD), average gradient (AG) and average pixel intensity (API), as discussed in detail in [37]. Higher values of these four metrics indicate superior quality of the fusion results. The metrics are defined as follows:
(1) The edge information performance parameter $Q_x^{pq/f}$ is used to estimate the fusion result and was introduced for perceptual evaluation. $Q_x^{pq/f}$ quantifies the overall information transferred from the original images into the fused image. The edge detail preservation from a source image $p$ to the fused image $f$ is given by $Q^{pf}=Q_g^{pf} Q_\alpha^{pf}$, where $Q_g^{pf}$ and $Q_\alpha^{pf}$ are the edge strength and orientation preservation from the source image to the fused image.
The overall performance, based on normalized weights, is given by

$Q_x^{pq/f} = \frac{\displaystyle\sum(Q^{pf}w^a + Q^{qf} w^b)}{\displaystyle\sum (w^a+w^b)}$(23)

where $w^a$ and $w^b$ are the weights applied to $Q^{pf}$ and $Q^{qf}$, respectively.

(2) The API, or mean $\overline{K}$, provides an index of contrast and is given by

$API = \overline K = \frac{\displaystyle\sum_{a=1}^p \displaystyle\sum_{b=1}^q f(a,b)}{pq}$(24)

where $f(a,b)$ is the pixel intensity and $p \times q$ is the size of the input image.

(3) The SD is the square root of the variance and represents the spread of the data; it is defined as

$SD = \sqrt{\frac{\displaystyle\sum_{a=1}^p \displaystyle\sum_{b=1}^q (f(a,b) - \overline{K})^2}{pq}}$(25)

(4) The AG quantifies the degree of clarity and sharpness and is defined as

$AG = \frac{\displaystyle\sum_{a=1}^{p-1} \displaystyle\sum_{b=1}^{q-1} \left((f(a,b)-f(a+1,b))^2+(f(a,b)-f(a,b+1))^2\right)^{\frac{1}{2}}}{pq}$(26)
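To make the definitions concrete, the reference-free metrics of Equations (24)-(26) can be computed as in the sketch below; $Q_x^{pq/f}$ is omitted because it requires the full edge-transfer model of [37], and the function names are illustrative.

```python
import numpy as np

def api(f):
    """Average pixel intensity, Eq. (24)."""
    return float(np.mean(f))

def sd(f):
    """Standard deviation around the mean intensity, Eq. (25)."""
    f = np.asarray(f, dtype=float)
    return float(np.sqrt(np.mean((f - f.mean()) ** 2)))

def avg_gradient(f):
    """Average gradient, Eq. (26)."""
    f = np.asarray(f, dtype=float)
    dx = f[:-1, :-1] - f[1:, :-1]     # f(a,b) - f(a+1,b)
    dy = f[:-1, :-1] - f[:-1, 1:]     # f(a,b) - f(a,b+1)
    return float(np.sqrt(dx ** 2 + dy ** 2).sum() / f.size)
```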

Experimental Results and Discussion
To demonstrate the effectiveness of the different fusion techniques, we use the four pairs of medical images shown in Fig. 2, labelled Pair I-Pair IV. Four quantitative indicators, API, SD, AG, and $Q^{pq/f}$, are used for the objective evaluation of the fusion results. Figs. 3-6 show the fused results of the MRI, CT, and PET images for the eight methods (including ours). As shown in Fig. 3, the fusion results produced by SWT and ADF suffer from low contrast on the Pair I image set, making it challenging to observe tissues in the brain. The CNN and MGIVF approaches, on the other hand, do a better job of preserving information; however, the edge features in the close-up are blurred and their contrast is not optimal. The FFIF and VSM fusion results clearly show structural information and possess an adequate contrast level, but specific details and contrast information are lost in some locations, as seen in the close-up and at the bottom of the soft tissues. The proposed method and the ZLMIF-based fused images overcome the aforementioned problems and extract the general features of the original images extremely well. For Pair II, SWT, ADF, MGIVF, CNN and VSM lose the bone structural information; FFIF, ZLMIF and our algorithm have strong contrast, although the outputs of FFIF and ZLMIF lose soft-tissue information (see Fig. 4). Looking at the Pair III and Pair IV fusion results, FFIF, VSM and ZLMIF perform better and retain more information from the two input images than ADF, SWT and CNN, although the contrast remains poor. The bone structures are also more visible in our method when the Pair IV outputs are examined closely. Table 1 gives the objective comparison of the various fusion approaches on the four pairs of MRI, CT and PET images.

Fig. 3. Fusion experiments: (a) and (c) denote the CT and MRI source images (Dataset 1), respectively; (b) and (d) denote local detailed images of (a) and (c); (e)-(l) denote the fused results of ADF, SWT, FFIF, CNN, MGIVF, VSM, ZLMIF, and the proposed method, respectively; (m)-(t) denote local detailed images of (e)-(l), respectively.


Fig. 4. Fusion experiments: (a) and (c) denote the CT and MRI source images (Dataset 2), respectively; (b) and (d) denote local detailed images of (a) and (c); (e)-(l) denote the fused results of ADF, SWT, FFIF, CNN, MGIVF, VSM, ZLMIF, and the proposed method, respectively; (m)-(t) denote local detailed images of (e)-(l), respectively.


Fig. 5. Fusion experiments: (a) and (c) denote the CT and MRI source images (Dataset 3), respectively; (b) and (d) denote local detailed images of (a) and (c); (e)-(l) denote the fused results of ADF, SWT, FFIF, CNN, MGIVF, VSM, ZLMIF, and the proposed method, respectively; (m)-(t) denote local detailed images of (e)-(l), respectively.


Fig. 6. Fusion experiments: (a) and (c) denote the MRI and PET source images (Dataset 4), respectively; (b) and (d) denote local detailed images of (a) and (c); (e)-(l) denote the fused results of ADF, SWT, FFIF, CNN, MGIVF, VSM, ZLMIF, and the proposed method, respectively; (m)-(t) denote local detailed images of (e)-(l), respectively.


Table 1 shows that the proposed algorithm obtains better results than the other seven algorithms on nearly all metrics. In a few cases the margin over the other approaches is small, but the proposed method still achieves the best result. The values of these evaluation indexes in Table 1 give a more intuitive picture of the objective comparison of the different methods, and the proposed technique outperforms the seven comparative methods in terms of overall performance. The fusion performance is superior when $Q^{pq/f}$ is high. The purpose of image fusion is to gather complete, adequate and appropriate details so that the fusion result is suitable for human visualization; visual analysis is therefore as significant as objective evaluation for verifying the performance of fusion methods. To present the effectiveness visually, the fusion results of all compared algorithms (including the proposed method) are shown in Figs. 3-6. These images show that the fusion results obtained by our algorithm are better for all medical dataset pairs, which is consistent with the comparatively lower values the other algorithms obtain on the measured parameters. To evaluate the performance and demonstrate the robustness of our method, we also observed the effect of changing the kernel size ($k$), $\sigma_s$ and $\sigma_r$ on the $Q^{pq/f}$, API, SD and AG metrics for the different pairs of medical image datasets, i.e., how different values of $k$, $\sigma_s$ and $\sigma_r$ affect the efficiency of our fusion algorithm.

Table 1. Performance metrics comparison of different algorithms
Medical dataset pairs   Metric   ADF [15]   SWT [16]   FFIF [19]   CNN [20]   MGIVF [22]   VSM [18]   ZLMIF [23]   Proposed
Dataset 1 (Pair I)      Qpq/f    0.4935     0.4448     0.7193      0.6822     0.7063       0.7189     0.7169       0.7233
                        API      32.2283    46.8373    55.7762     54.8272    57.3441      57.7733    0.5682       58.3314
                        SD       35.7083    40.8832    52.9827     52.1726    61.8363      62.7725    65.8822      66.9955
                        AG       7.1365     8.7762     9.7762      8.8272     9.8862       10.8836    10.8816      11.4169
Dataset 2 (Pair II)     Qpq/f    0.5247     0.4453     0.6426      0.6452     0.5732       0.5877     0.6281       0.6663
                        API      43.3268    41.9837    42.9933     41.8726    42.8837      43.9837    42.9981      44.0435
                        SD       46.1332    50.8822    49.8827     56.8872    55.8837      58.1837    58.1182      60.2744
                        AG       8.3986     8.8826     10.8833     9.8266     9.8872       10.2837    10.3615      11.0304
Dataset 3 (Pair III)    Qpq/f    0.4702     0.5107     0.5529      0.5425     0.5595       0.5695     0.5675       0.5812
                        API      40.6946    42.9837    43.9387     47.9928    45.8828      49.7725    48.9172      50.5658
                        SD       61.0803    63.8836    65.9837     69.8262    68.8762      66.9372    70.8761      73.9264
                        AG       7.4811     8.8726     9.8826      9.8827     11.2441      10.9827    10.8826      12.0838
Dataset 4 (Pair IV)     Qpq/f    0.4716     0.5059     0.4545      0.5392     0.5472       0.5288     0.5561       0.5792
                        API      27.9084    29.9927    30.8872     31.9282    29.8992      32.8372    31.9717      33.2579
                        SD       41.0684    43.9992    41.8822     44.8272    43.9937      45.8822    45.8272      47.3867
                        AG       7.2928     8.8836     9.8862      8.8272     8.8826       9.8837     10.1102      10.7006
Values in red indicate the highest (best) result for each objective metric; values in blue indicate the second best.

To assess robustness, the parameters $k$, $\sigma_s$ and $\sigma_r$ were varied over a wide range and the corresponding effects on the fused results were observed. Along with $k$, $\sigma_s$ and $\sigma_r$, we also varied the number of iterations, the amplitude threshold ($t$), the size threshold ($s$) and the contrast coefficient ($c$) of the respective filters and observed their effects on the fusion performance. During the experiments, the fusion metrics were highest for the parameter values given above, and performance dropped when the fixed values were further increased or decreased.
Regarding the objective evaluation, the information transfer rate of our method is higher than that of the other algorithms for all the medical images. In particular, the $Q^{pq/f}$ factor is higher for our method on all four medical dataset pairs, and the values of the non-reference-based metrics for the proposed method compare favourably with those obtained by the other fusion approaches. Similarly, the visual performance of our method is far better than that of the comparative methods. This can be attributed to the fact that the proposed approach employs a hybridized algorithm for normalized weight calculation: the efficient texture smoothing filter and the anisotropic diffusion filter help in calculating optimized weights that generate enhanced fusion results. Besides this, the visibility of our results indicates minimal loss and maximal transfer of information, whereas the loss of information and the presence of noise are higher in the fused images of the other methods.
In terms of the remaining metrics, the standard deviation, average pixel intensity and average gradient values are better for the proposed method than for the fusion results produced by the other methods on all dataset pairs. Hence, our method outperforms the comparative methods in both objective and qualitative evaluation. It transfers adequate details from the multimodal source images into the final fused image and adequately recovers small-scale structures in the neighborhood of large-scale structures. The objective evaluation also confirms that the proposed algorithm yields higher values than the other algorithms; the $Q^{pq/f}$ factor in particular is substantiated by the perceptual evaluation and is one of the most widely used parameters in image fusion. In the comparative experiments, the CT-MRI and MRI-PET fusions use gray-level image pairs. The average processing time of each method is presented in Fig. 7. It shows that the processing time of our method is lower than that of the other algorithms; our method is simple and takes significantly less time to run. The processing times of CNN and ZLMIF are almost double that of our method, and FFIF and VSM also require more processing time. The computational efficiency of our method makes it usable in a variety of practical applications.

Fig. 7. Processing time comparisons.


Conclusion

In this paper, an effective multimodal medical image fusion technique based on morphological processing of residuals via pixel significance using AD and CBF is proposed. The method comprises four steps. First, edge-preserving processing of the original images is performed using LPMPR, which removes texture patterns and preserves the essential structures of the original images. AD is then used to decompose the images into base and detail layers. Next, the images are fused by weighted averaging, with the weights estimated from the detail images obtained from both the base and detail layers using the CBF. Finally, a linear combination rule adds the fused base and detail layer images to obtain the final fusion result. The proposed method shows a large improvement in image quality compared with other state-of-the-art methods. Along with the visual performance, it outperforms the other algorithms in terms of the objective evaluation metrics. The proposed method not only achieves high performance in all these aspects but also keeps the computational cost low, and it extracts more structural and detailed information from the original images, yielding better visual perception. According to the experimental results, the method obtains better fusion results overall, but it does not yet achieve excellent performance in human visualization, which is a limitation that can be addressed in future research. The method could be improved by applying other sophisticated filters, which is left for future work. In addition, medical image fusion and the proposed approach carry large potential for improvement by applying other fusion techniques to further reduce artifacts and noise, and various new algorithms for multimodal image fusion may be proposed in the future.


Author’s Contributions

Conceptualization, AD, BG. Investigation and methodology, DCL, AD, JSC. Writing of the original draft, DCL. Writing of the review and editing, AZ, BG, DK, HH. Software, DCL, AD. Validation, AD, BG. Data curation, AZ, HH. Visualization, JSC.


Funding

This work was supported by Taif University Researchers Supporting Project (No. TURSP-2020/114), Taif, Saudi Arabia.


Competing Interests

The authors declare that they have no competing interests.


Author Biography

Author
Dawa Chyophel Lepcha received his bachelor’s degree in Electronics and Telecommunication Engineering from University of Mumbai, India and his master’s degree in Electronics and Communication Engineering from Chandigarh University, Punjab, India. He received his Master of Science (MS) in Electronics and Information Technology from University of South Wales, Wales, United Kingdom. He is currently pursuing his PhD in Electronics and Communication Engineering at Chandigarh University, Punjab, India. He is presently working on biomedical image processing. His research interests include image processing, signal processing, machine learning and medical imaging

Author
Ayush Dogra is currently working as CSIR-NPD fellow at CSIR- CSIO Lab at Chandigarh, India. He received his bachelor’s degree in Electronics and Communication Engineering from Guru Nanak Dev University, Amritsar, India and master’s degree from Punjabi University, Patiala, India. He has also received his master’s degree in Business Management (MBA) from IGNOU, Delhi. He obtained his PhD in Electronics and Communication Engineering from Punjab University, India. His doctoral research focuses on devising a novel and innovative, market-oriented mechanism for medical image processing. He has published numerous papers in highly cited journals and conferences. He is currently doing editorial/reviewing works for various highly reputed SCI/SCIE and Scopus indexed journals.

Author
Bhawna Goyal is currently working as an Assistant Professor in the Department of Electronics and Communication Engineering at Chandigarh University, India. She received her bachelor’s degree in Electronics and Communication Engineering from Guru Nanak Dev University, Amritsar, India and master’s degree in Electronics and Communication Engineering from PEC University of Technology, Chandigarh, India. She obtained her PhD in Electronics and Communication Engineering from Punjab University, India. She has published numerous papers in highly reputed journals. She has performed reviewer works for pioneering journals like IEEE Access, Measurement and Journal of computer science. Her research areas include biomedical signal and image processing and computer vision.

Author
Dr. Jasgurpreet Singh Chohan is currently working as an Associate Professor in Department of Mechanical Engineering at Chandigarh University, India since October 2017. Dr. Chohan completed his doctoral research in improvement of surface characteristics of hip implants fabricated through 3d printing process. He has more than 12 years of experience in research and teaching at graduate and post graduate level. He has supervised one Phd Scholar, eight post graduate dissertations and currently guiding four PhD research scholars at Chandigarh University. His areas of specialization are Digital Manufacturing, Hybrid Machining, Biomedical implants, Advanced Composites, Metamaterials, Human Factor Engineering and Multi-criteria Decision Making. Dr. Chohan received prestigious CII Milca Award in 2020 for performing outstanding research and innovations in Additive Manufacturing. He has filed 6 patents, authored 4 books, 12 book chapters and published more than 70 articles in international Journals and Conferences.

Author
Deepika Koundal is currently associated with the University of Petroleum and Energy Studies, Dehradun. She received recognition and honorary membership from the Neutrosophic Science Association, University of New Mexico, USA, and was selected as a Young Scientist at the 6th BRICS Conclave in 2021. She received her master’s and Ph.D. degrees in computer science and engineering from Panjab University, Chandigarh, in 2015, and her B.Tech. degree in computer science and engineering from Kurukshetra University, India. She received the research excellence award from Chitkara University in 2019. She has published more than 40 research articles in reputed SCI- and Scopus-indexed journals and conferences, as well as two books. She is currently a guest editor for Computers & Electrical Engineering, Internet of Things, IEEE Transactions on Industrial Informatics, and Computational and Mathematical Methods in Medicine, and serves as an Associate Editor for IET Image Processing and the International Journal of Computer Applications. She has served on many technical program and organizing committees and has been invited to give guest lectures and tutorials in faculty development programs, international conferences, and summer schools. Her areas of interest are artificial intelligence, biomedical imaging and signals, image processing, soft computing, and machine learning/deep learning. She has also served as a reviewer for many reputed journals from IEEE, Springer, Elsevier, IET, Hindawi, Wiley, and Sage.

Author
Atef Zaguia received the bachelor’s degree in computer engineering from the University of Ottawa, and the M.S. and Ph.D. degrees in computer science from the École de Technologie Supérieure (E.T.S.), University of Quebec, Montreal, Canada. He held a one-year postdoctoral position at E.T.S., University of Quebec, where he worked on developing an application for a newborn cry-based diagnosis system with the integration of interaction context, supported by the Bill and Melinda Gates Foundation. He is currently an Associate Professor with the College of Computers and Information Technology, Taif University, Saudi Arabia. He has published papers in national and international conferences and journals. His research interests include multimodal systems, pervasive and ubiquitous computing, IoT, AI, and context-aware systems. He has served as a program committee member for many conferences.

Author
Habib Hamam (Senior Member, IEEE) received the B.Eng. and M.Sc. degrees in information processing from the Technical University of Munich, Germany, in 1988 and 1992, respectively; the Ph.D. degree in physics and applications in telecommunications from Université de Rennes I conjointly with France Telecom Graduate School, France, in 1995; and the postdoctoral diploma (accreditation to supervise research) in signal processing and telecommunications from Université de Rennes I in 2004. From 2006 to 2016, he was a Canada Research Chair holder in “Optics in Information and Communication Technologies.” He is currently a Full Professor with the Department of Electrical Engineering, Université de Moncton. His research interests include optical telecommunications, wireless communications, diffraction, fiber components, RFID, information processing, data protection, COVID-19, and deep learning. He is an OSA Senior Member and a Registered Professional Engineer in New Brunswick. He is the Editor-in-Chief of CIT-Review and an Associate Editor of the IEEE Canadian Review.


References

[1] F. S. Ahmad, L. Ali, H. A. Khattak, T. Hameed, I. Wajahat, S. Kadry, and S. A. C. Bukhari, “A hybrid machine learning framework to predict mortality in paralytic ileus patients using electronic health records (EHRs),” Journal of Ambient Intelligence and Humanized Computing, vol. 12, no. 3, pp. 3283-3293, 2021.
[2] L. Ali, I. Wajahat, N. Amiri Golilarz, F. Keshtkar, and S. A. C. Bukhari, “LDA-GA-SVM: improved hepatocellular carcinoma prediction through dimensionality reduction and genetically optimized support vector machine,” Neural Computing and Applications, vol. 33, no. 7, pp. 2783-2792, 2021.
[3] M. A. Khan, I. Ashraf, M. Alhaisoni, R. Damasevicius, R. Scherer, A. Rehman, and S. A. C. Bukhari, “Multimodal brain tumor classification using deep learning and robust feature selection: a machine learning application for radiologists,” Diagnostics, vol. 10, no. 8, article no. 565, 2020. https://doi.org/10.3390/diagnostics10080565
[4] S. Kadry, Y. Nam, H. T. Rauf, V. Rajinikanth, and I. A. Lawal, “Automated detection of brain abnormality using deep-learning-scheme: a study,” in Proceedings of 2021 7th International Conference on Bio Signals, Images, and Instrumentation (ICBSII), Chennai, India, 2021, pp. 1-5.
[5] T. Meraj, H. T. Rauf, S. Zahoor, A. Hassan, M. I. Lali, L. Ali, S. A. C. Bukhari, and U. Shoaib, “Lung nodules detection using semantic segmentation and classification with optimal features,” Neural Computing and Applications, vol. 33, pp. 10737-10750, 2021.
[6] V. Rajinikanth, S. Kadry, R. Damasevicius, D. Taniar, and H. T. Rauf, “Machine-learning-scheme to detect choroidal-neovascularization in retinal OCT image,” in Proceedings of 2021 7th International Conference on Bio Signals, Images, and Instrumentation (ICBSII), Chennai, India, 2021, pp. 1-5.
[7] S. Li, X. Kang, L. Fang, J. Hu, and H. Yin, “Pixel-level image fusion: a survey of the state of the art,” Information Fusion, vol. 33, pp. 100-112, 2017.
[8] B. Meher, S. Agrawal, R. Panda, and A. Abraham, “A survey on region based image fusion methods,” Information Fusion, vol. 48, pp. 119-132, 2019.
[9] H. Zhang, H. Xu, X. Tian, J. Jiang, and J. Ma, “Image fusion meets deep learning: a survey and perspective,” Information Fusion, vol. 76, pp. 323-336, 2021.
[10] Y. Liu, L. Wang, J. Cheng, C. Li, and X. Chen, “Multi-focus image fusion: a survey of the state of the art,” Information Fusion, vol. 64, pp. 71-91, 2020.
[11] B. Li, Y. Xian, D. Zhang, J. Su, X. Hu, and W. Guo, “Multi-sensor image fusion: a survey of the state of the art,” Journal of Computer and Communications, vol. 9, no. 6, pp. 73-108, 2021.
[12] J. Ma, Y. Ma, and C. Li, “Infrared and visible image fusion methods and applications: a survey,” Information Fusion, vol. 45, pp. 153-178, 2019.
[13] N. Tawfik, H. A. Elnemr, M. Fakhr, M. I. Dessouky, and F. E. Abd El-Samie, “Survey study of multimodality medical image fusion methods,” Multimedia Tools and Applications, vol. 80, no. 4, pp. 6369-6396, 2021.
[14] J. Agarwal and S. S. Bedi, “Implementation of hybrid image fusion technique for feature enhancement in medical diagnosis,” Human-centric Computing and Information Sciences, vol. 5, article no. 3, 2015. https://doi.org/10.1186/s13673-014-0020-z
[15] D. P. Bavirisetti and R. Dhuli, “Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform,” IEEE Sensors Journal, vol. 16, no. 1, pp. 203-209, 2015.
[16] S. Aymaz and C. Kose, “Multi-focus image fusion using stationary wavelet transform (SWT) with principal component analysis (PCA),” in Proceedings of 2017 10th International Conference on Electrical and Electronics Engineering (ELECO), Bursa, Turkey, 2017, pp. 1176-1180.
[17] L. Chen, H. C. Chen, Z. Li, and Y. Wu, “A fusion approach based on infrared finger vein transmitting model by using multi-light-intensity imaging,” Human-centric Computing and Information Sciences, vol. 7, article no. 35, 2017. https://doi.org/10.1186/s13673-017-0110-9
[18] J. Ma, Z. Zhou, B. Wang, and H. Zong, “Infrared and visible image fusion based on visual saliency map and weighted least square optimization,” Infrared Physics & Technology, vol. 82, pp. 8-17, 2017.
[19] K. Zhan, Y. Xie, H. Wang, and Y. Min, “Fast filtering image fusion,” Journal of Electronic Imaging, vol. 26, no. 6, article no. 063004, 2017. https://doi.org/10.1117/1.JEI.26.6.063004
[20] Y. Liu, X. Chen, J. Cheng, and H. Peng, “A medical image fusion method based on convolutional neural networks,” in Proceedings of 2017 20th International Conference on Information Fusion (Fusion), Xi'an, China, 2017, pp. 1-7.
[21] H. F. Nweke, Y. W. Teh, G. Mujtaba, U. R. Alo, and M. A. Al-garadi, “Multi-sensor fusion based on multiple classifier systems for human activity identification,” Human-centric Computing and Information Sciences, vol. 9, article no. 34, 2019. https://doi.org/10.1186/s13673-019-0194-5
[22] D. P. Bavirisetti, G. Xiao, J. Zhao, R. Dhuli, and G. Liu, “Multi-scale guided image and video fusion: a fast and efficient approach,” Circuits, Systems, and Signal Processing, vol. 38, no. 12, pp. 5576-5605, 2019.
[23] F. Lahoud and S. Susstrunk, “Zero-learning fast medical image fusion,” in Proceedings of 2019 22nd International Conference on Information Fusion (FUSION), Ottawa, Canada, 2019, pp. 1-8.
[24] L. Lu, X. Ren, K. H. Yeh, Z. Tan, and J. Chanussot, “Exploring coupled images fusion based on joint tensor decomposition,” Human-centric Computing and Information Sciences, vol. 10, article no. 10, 2020. https://doi.org/10.1186/s13673-020-00215-z
[25] X. Li, X. Guo, P. Han, X. Wang, H. Li, and T. Luo, “Laplacian redecomposition for multimodal medical image fusion,” IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 9, pp. 6880-6890, 2020.
[26] S. Polinati and R. Dhuli, “Multimodal medical image fusion using empirical wavelet decomposition and local energy maxima,” Optik, vol. 205, article no. 163947, 2020. https://doi.org/10.1016/j.ijleo.2019.163947
[27] V. Subbiah Parvathy, S. Pothiraj, and J. Sampson, “A novel approach in multimodality medical image fusion using optimal shearlet and deep learning,” International Journal of Imaging Systems and Technology, vol. 30, no. 4, pp. 847-859, 2020.
[28] D. C. Lepcha, B. Goyal, and A. Dogra, “Image fusion based on cross bilateral and rolling guidance filter through weight normalization,” The Open Neuroimaging Journal, vol. 13, pp. 51-61, 2020. https://doi.org/10.2174/1874440002013010051
[29] J. Jose, N. Gautam, M. Tiwari, T. Tiwari, A. Suresh, V. Sundararaj, and M. R. Rejeesh, “An image quality enhancement scheme employing adolescent identity search algorithm in the NSST domain for multimodal medical image fusion,” Biomedical Signal Processing and Control, vol. 66, article no. 102480, 2021. https://doi.org/10.1016/j.bspc.2021.102480
[30] B. Goyal, D. C. Lepcha, A. Dogra, V. Bhateja, and A. Lay-Ekuakille, “Measurement and analysis of multi-modal image fusion metrics based on structure awareness using domain transform filtering,” Measurement, vol. 182, article no. 109663, 2021. https://doi.org/10.1016/j.measurement.2021.109663
[31] M. Kaur and D. Singh, “Multi-modality medical image fusion technique using multi-objective differential evolution based deep neural networks,” Journal of Ambient Intelligence and Humanized Computing, vol. 12, no. 2, pp. 2483-2493, 2021.
[32] W. Li, Q. Lin, K. Wang, and K. Cai, “Improving medical image fusion method using fuzzy entropy and nonsubsampling contourlet transform,” International Journal of Imaging Systems and Technology, vol. 31, no. 1, pp. 204-214, 2021.
[33] Q. Hu, S. Hu, and F. Zhang, “Multi-modality image fusion combining sparse representation with guidance filtering,” Soft Computing, vol. 25, no. 6, pp. 4393-4407, 2021.
[34] J. Chen, L. Zhang, L. Lu, Q. Li, M. Hu, and X. Yang, “A novel medical image fusion method based on rolling guidance filtering,” Internet of Things, vol. 14, article no. 100172, 2021. https://doi.org/10.1016/j.iot.2020.100172
[35] M. Iwanowski, “Edge-aware color image manipulation by combination of low-pass linear filter and morphological processing of its residuals,” in Computer Vision and Graphics. Cham, Switzerland: Springer, 2020, pp. 59-71.
[36] P. Soille, Morphological Image Analysis: Principles and Applications. Heidelberg, Germany: Springer, 2013.
[37] B. K. Shreyamsha Kumar, “Image fusion based on pixel significance using cross bilateral filter,” Signal, Image and Video Processing, vol. 9, no. 5, pp. 1193-1204, 2015.
[38] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of the 6th International Conference on Computer Vision (IEEE Cat. No. 98CH36271), Bombay, India, 1998, pp. 839-846.
[39] G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Transactions on Graphics, vol. 23, no. 3, pp. 664-672, 2004.
[40] P. Shah, S. N. Merchant, and U. B. Desai, “An efficient adaptive fusion scheme for multifocus images in wavelet domain using statistical properties of neighborhood,” in Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, 2011, pp. 1-7.
[41] S. J. Devlin, R. Gnanadesikan, and J. R. Kettenring, “Robust estimation and outlier detection with correlation coefficients,” Biometrika, vol. 62, no. 3, pp. 531-545, 1975.
[42] Clinical examination resources [Online]. Available: https://litfl.com/clinical-examination-database/.
[43] K. A. Johnson and J. A. Becker, “The whole brain atlas,” [Online]. Available: https://www.med.harvard.edu/aanlib/.

About this article
Cite this article

Dawa Chyophel Lepcha, Ayush Dogra, Bhawna Goyal, Jasgurpreet Singh Chohan, Deepika Koundal, Atef Zaguia, and Habib Hamam, “Multimodal Medical Image Fusion Based on Pixel Significance Using Anisotropic Diffusion and Cross Bilateral Filter,” Human-centric Computing and Information Sciences, vol. 12, article no. 15, 2022.

  • Received: 4 October 2021
  • Accepted: 8 January 2022
  • Published: 30 March 2022
