Quantum Mutation Reptile Search Algorithm for Global Optimization and Data Clustering
Rolla Almodfer1, Mohammed Mudhsh1, Samia Chelloug2, Mohammad Shehab3, Laith Abualigah4,5, and Mohamed Abd Elaziz6,7,8

Human-centric Computing and Information Sciences volume 12, Article number: 30 (2022)
https://doi.org/10.22967/HCIS.2022.12.030

Abstract

Data clustering is one of the challenging machine learning problems; it groups a set of data objects into a predefined number of groups. This paper proposes a new, improved version of the reptile search algorithm (RSA) called the quantum mutation reptile search algorithm (QMRSA). The proposed method uses a quantum mutation-based search strategy to enhance the performance of the RSA in solving various optimization problems. The method tackles the main shortcomings of the original version of the RSA, such as premature convergence and the non-equilibrium between the search processes. Experiments are conducted on several benchmark functions and data clustering problems. The results are analyzed and compared with several state-of-the-art methods, including the aquila optimizer, grey wolf optimizer, sine cosine algorithm, whale optimization algorithm, dragonfly algorithm, and arithmetic optimization algorithm. The results show the QMRSA's superiority in dealing with the mathematical benchmark functions and real-world problems like data clustering.


Keywords

Quantum Mutation, Reptile Search Algorithm (RSA), Global Optimization, Data Clustering, Algorithm


Introduction

Optimization algorithms have been proposed based on the natural behaviors of various living organisms [1, 2]. In other words, the main features of optimization algorithms are that they are nature-inspired and that they aim to add new methods and techniques for solving optimization problems in various fields [3]. Particle swarm optimization (PSO) [4], the arithmetic optimization algorithm [5], the marine predators algorithm [6], the red deer algorithm [7], the aquila optimizer [8, 9], the grey wolf optimizer [10], the ant lion optimizer [11], the whale optimization algorithm [12], the salp swarm algorithm [13], moth flame optimization (MFO) [14], and the reptile search algorithm (RSA) [15] are examples of standard optimization algorithms that belong to the swarm-based branch. Optimization algorithms are usually inspired by social insect colonies and animal societies [16]; that is, they emulate the behavior of animals looking for food or survival. The main features of these methods are their robustness in achieving optimal solutions and their flexibility in being adapted to many problems [17].

RSA is one of the recent meta-heuristics belonging to the population-based algorithms, and it was proposed by Abualigah et al. [15]. RSA imitates the behavior of crocodiles through two key activities, encircling and hunting. The former is performed through high walking or belly walking, while the latter is achieved through hunting coordination or hunting cooperation. In a different direction, Charles Darwin's theory of evolution, proposed in [18], underlies much of recent biology. The theory's selection concept is based on random mutations, where the selection of valuable mutations aids in determining the path of evolution. However, the random selection of mutations has a weakness compared with guided mutation, called adaptive mutation [19]. Therefore, many research works were performed to enhance the processes of selection and mutation. Han and Kim used the advantages of quantum computing to introduce a new version of the evolutionary algorithm, namely the quantum-inspired evolutionary algorithm (QEA) [20]. QEA depends on the theory and mechanisms of quantum computing, such as the superposition of states and the quantum bit (Q-bit).

Taking advantage of quantum mutation (QM), in the present work we propose an improvement of RSA named the quantum mutation RSA (QMRSA). Shortcomings such as the loss of solution accuracy, the premature convergence problem, and the imbalance between exploration and exploitation in the search processes are addressed. Furthermore, QMRSA improves the diversity of the population and the exploration ability of RSA and increases the chance of selecting the best solutions. The performance of the QMRSA is evaluated by comparing it with other published methods in the literature using a set of benchmark functions and data clustering problems. The main contributions of this study can be summarized as follows:

Proposing an alternative data clustering method that depends on the improved RSA performance.

Using QM to enhance the searching ability of RSA to find the optimal solution.

Assessing the efficiency of the developed method using a set of global optimization problems.

Evaluating the ability of the developed method as a clustering method using different datasets.

The rest of the paper is organized as follows. The next section reviews the related work, Section 2 presents the preliminaries, and Section 3 describes the proposed method. The experimental results are shown in Section 4. Finally, in Section 5, the conclusion and future work are presented.


Related Work

This section briefly reviews the existing quantum techniques used to improve meta-heuristics (MH) performance [21, 22]. For example, Kaveh et al. [23] proposed a hybrid algorithm, called QECBO, that integrates QEA and colliding bodies optimization. The main goal of the QECBO is to avoid being trapped in local optima by exploiting quantum uncertainty principles; that is, QECBO models the quantum behavior within a possible delta based on enhanced Newtonian collision laws. The authors evaluated the QECBO's performance using a set of benchmark functions, and the results showed that QECBO achieved high-quality solutions and balanced exploitation and exploration.

Moreover, many studies highlighted the use of classical genetic algorithms to develop quantum computing; a summary of these studies is given in [24]. However, these approaches relied on classical computers, so it became necessary to focus on improving the classical mechanisms themselves. Malossini et al. [25] proposed the quantum genetic optimization algorithm (QGOA), which takes advantage of quantum methods for evaluating the solutions' fitness and performing the selection processes. They also analyzed the efficiency of QGOA through several comparisons based on computational complexity, and the experiments showed that QGOA outperformed classical genetic algorithms. Liu et al. [26] used the advantages of the mutation technique to propose a quantum-behaved particle swarm optimization, called QPSO. The proposed algorithm enhances the weak global search mechanism of the original PSO, which avoids trapping in local minima. The authors employed a set of benchmark functions to evaluate the performance of QPSO, and the results illustrated that the QPSO outperformed the basic PSO.

In [27], the authors enhanced the stability of the backtracking search algorithm (BSA) using a set of processes, such as quantum Gaussian mutation and quasi-reflection-based initialization. The proposed algorithm, called gQR-BSA, modifies the arranged framework of the basic BSA using the quantum Gaussian mutation to balance exploitation and exploration in the search process and determine the optimal solutions; thus, gQR-BSA is able to solve various difficult problems. A set of benchmark functions and engineering optimization problems was used to evaluate the efficiency of gQR-BSA, and the experimental results showed that gQR-BSA outperformed the basic BSA and its other variants. Jiao et al. [28] integrated the principles and techniques of quantum computing to enhance the convergence rate of the immune clonal algorithm, yielding QICA. In the proposed algorithm, the antibody is produced and split into groups of sub-populations represented as gene quantum bits, and the turnover-gate mechanism of quantum mutation is used to increase the convergence rate. A practical problem and a set of benchmark functions were utilized to demonstrate the efficiency of QICA's search mechanism, and the results showed that QICA achieved better performance than other enhanced genetic algorithms in the literature. An enhanced quantum genetic algorithm was introduced in [29] to increase the chance of selecting the best solution in the search space; its strategy is based on quantum mutation and catastrophe processes with a self-adaptive rotation angle. The authors used function optimization problems to evaluate the efficiency and search ability of the proposed algorithm.

Xu et al. [30] introduced a mutation-operator-accelerated quantum-behaved PSO algorithm for determining hyperspectral endmembers, namely MOAQPSO. The main contribution of the MOAQPSO is improving the search mechanism of the basic PSO by avoiding getting stuck in local optima and by increasing the diversity of the solutions in the search space. Real hyperspectral and synthetic datasets were used to evaluate the performance of the MOAQPSO, and the results illustrated that it outperformed the other state-of-the-art algorithms. In [31], a quantum crow search optimization algorithm was proposed and applied to solve the image segmentation problem. In this technique, the initial population is generated using the quantum state concept, and the rotation operation is then used to update the current population. This algorithm has been compared with other cluster-based image segmentation techniques and demonstrated superior performance compared to the benchmark techniques.


Preliminaries

In this section, the formulation of the clustering problem, the RSA, and the QM technique are introduced.

Clustering Problem Formulation

Within this section, we introduce the basic formulation of data clustering. Considering a dataset $X$ that contains $n$ objects, we need to divide it into a set of $K$ clusters $C = \{C_1, C_2, \ldots, C_K\}$, with $C_k \neq \emptyset$, by determining the centroid of each cluster. The following equation is used as a fitness function to assess the quality of the clustering of the objects.

$F = \sum_{j=1}^{K} \sum_{X_i \in C_j} D_s(X_i, c_j) \qquad (1)$

where $D_s$ stands for the Euclidean distance between the object $X_i$ and the center $c_j$; this distance is defined as follows:

$D_s(X_i, c_j) = \sqrt{\sum_{d=1}^{m} (X_{i,d} - c_{j,d})^2} \qquad (2)$

where $m$ is the number of features of each object.
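For illustration, the following is a minimal Python sketch of this objective, assuming each object is assigned to its nearest centroid; the function names and array layout are our own choices, not the paper's code.

```python
import numpy as np

def euclidean(x, c):
    # Equation (2): Euclidean distance between an object x and a centroid c
    return np.sqrt(np.sum((x - c) ** 2))

def clustering_fitness(X, centroids):
    # Equation (1): each object contributes the distance to its assigned
    # (here: nearest) centroid; smaller totals mean tighter clusters
    return sum(min(euclidean(x, c) for c in centroids) for x in X)

# Toy example: six two-dimensional objects, K = 2 clusters
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
              [4.0, 4.1], [4.2, 3.9], [3.9, 4.0]])
centroids = np.array([[0.1, 0.1], [4.0, 4.0]])
print(clustering_fitness(X, centroids))  # small value -> good clustering
```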

Basics of Quantum Representation

The Q-bit representation is considered an alternative to numeric or binary representation and is distinguished by its small size (i.e., it is the smallest unit of data). The superposition of two states at the same time can be written as:

$|\psi\rangle = a|0\rangle + b|1\rangle \qquad (3)$
where a and b refer to the coefficients of the two states, and $|a|^2$ and $|b|^2$ are the probabilities of the two possible outcomes of the Q-bit, which usually take the values "0" and "1". The summation of the two probabilities equals 1, as shown in the following equation.

$|a|^2 + |b|^2 = 1 \qquad (4)$
The Q-gate concept is also proposed in QEA to guide individuals toward the optimal solutions. Equation (5) illustrates the rotation Q-gate as a 2 × 2 matrix. QEA proved its efficiency in solving combinatorial optimization problems such as the knapsack problem [32, 33].

$U(\Delta\theta_i) = \begin{bmatrix} \cos(\Delta\theta_i) & -\sin(\Delta\theta_i) \\ \sin(\Delta\theta_i) & \cos(\Delta\theta_i) \end{bmatrix} \qquad (5)$
The update procedure is performed as follows:

$\begin{bmatrix} a_i' \\ b_i' \end{bmatrix} = U(\Delta\theta_i) \begin{bmatrix} a_i \\ b_i \end{bmatrix} \qquad (6)$
where $[a_i, b_i]^T$ and $[a_i', b_i']^T$ refer to the states of the $i$-th Q-bit of the chromosome prior to and after the rotation Q-gate update, respectively [20].
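The following is a minimal numeric sketch of Equations (3)–(6) in Python; the variable names are our own, and the example only illustrates that the rotation Q-gate preserves the normalization of Equation (4).

```python
import numpy as np

# A Q-bit is a pair (a, b) with |a|^2 + |b|^2 = 1 (Equations (3) and (4))
q = np.array([1.0, 1.0]) / np.sqrt(2.0)  # equal superposition of "0" and "1"

def rotate_qbit(q, delta_theta):
    # Equations (5) and (6): apply the 2x2 rotation Q-gate to the Q-bit
    U = np.array([[np.cos(delta_theta), -np.sin(delta_theta)],
                  [np.sin(delta_theta),  np.cos(delta_theta)]])
    return U @ q

q_new = rotate_qbit(q, 0.05 * np.pi)
print(q_new, np.sum(q_new ** 2))  # the norm stays 1, so Equation (4) holds
```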

The Reptile Search Algorithm

RSA [15] is a recent population-based meta-heuristic algorithm. It employs the crocodiles' personal and global best experiences to guide the search process. The global and local searches are explained in detail in the following subsections.

2.3.1 Global search
In this part, the exploratory operation of the original RSA is presented. The RSA switches between its search behaviors according to four scenarios obtained by splitting the maximum number of generations into four parts. The RSA exploration (global) phase investigates the search areas to discover a more robust agent using two primary search techniques, subject to two criteria: the high walking strategy is applied when t ≤ $\frac{T}{4}$; otherwise, the belly walking strategy is applied when t ≤ 2$\frac{T}{4}$ and t > $\frac{T}{4}$. The position updating for the exploration stage is given in Equation (7).

$x_{i,j}(t+1) = \begin{cases} BEST_j(t) \times (-\eta_{i,j}(t)) \times \beta - R_{i,j}(t) \times rand, & t \le \frac{T}{4} \\ BEST_j(t) \times x_{(r_1,j)} \times ES(t) \times rand, & t \le 2\frac{T}{4} \text{ and } t > \frac{T}{4} \end{cases} \qquad (7)$
where $BEST_j(t)$ is the $j$-th position of the best solution obtained so far, and $x_{(r_1,j)}$ is the $j$-th position of a randomly selected solution, where $r_1$ is a random index. $rand$ is a random value, and $\beta = 0.1$ as in [15]. $t$ denotes the current generation, $T$ the maximum number of generations, and $N$ the total number of candidate solutions. The hunting operator $\eta_{i,j}$ is computed using Equation (8).

$\eta_{i,j} = BEST_j(t) \times P_{i,j} \qquad (8)$
In Equation (7), $R_{i,j}$ and $ES(t)$ are adjusting exploration parameters computed as in Equations (9) and (10), respectively.

$R_{i,j} = \frac{BEST_j(t) - x_{(r_2,j)}}{BEST_j(t) + \epsilon} \qquad (9)$
$ES(t) = 2 \times r_3 \times \left(1 - \frac{t}{T}\right) \qquad (10)$
where $\epsilon$ is a small value, $r_2$ is a random index, and $r_3 \in [-1, 1]$ is a random integer value. $P_{i,j}$ denotes a percentage difference computed using Equation (11).

$P_{i,j} = \alpha + \frac{x_{i,j} - M(x_i)}{BEST_j(t) \times (UB_j - LB_j) + \epsilon} \qquad (11)$
where $M(x_i)$ is the mean of the positions of the $i$-th solution, computed by Equation (12). $UB_j$ and $LB_j$ are the upper and lower boundaries of the $j$-th position, respectively, and $\alpha = 0.1$ as in [15].

$M(x_i) = \frac{1}{n}\sum_{j=1}^{n} x_{i,j} \qquad (12)$

2.3.2 Local search
In this section, the exploitation operation of the original RSA is presented. In this stage, the agents are updated using the hunting coordination plan when t $\le$ 3$\frac{T}{4}$ and t > 2$\frac{T}{4}$. Otherwise, the hunting cooperation plan is executed, when t $\le$ T and t > 3$\frac{T}{4}$. The position updating for the exploitation stage is given in Equation (13).

$x_{i,j}(t+1) = \begin{cases} BEST_j(t) \times P_{i,j}(t) \times rand, & t \le 3\frac{T}{4} \text{ and } t > 2\frac{T}{4} \\ BEST_j(t) - \eta_{i,j}(t) \times \epsilon - R_{i,j}(t) \times rand, & t \le T \text{ and } t > 3\frac{T}{4} \end{cases} \qquad (13)$
The main process of the original RSA is shown in Fig. 1.
Fig. 1. Flowchart of the original RSA.
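To make the four update rules concrete, the sketch below implements Equations (7)–(13) in Python as we read them from the description above and from [15]; the function signature, loop structure, and clipping to the bounds are our own choices, not necessarily the reference implementation.

```python
import numpy as np

def rsa_update(X, best, t, T, lb, ub, alpha=0.1, beta=0.1, eps=1e-10):
    """One RSA generation over the population X (N x n), Equations (7)-(13)."""
    N, n = X.shape
    r3 = np.random.randint(-1, 2)            # random integer in [-1, 1]
    ES = 2.0 * r3 * (1.0 - t / T)            # Equation (10)
    new_X = X.copy()
    for i in range(N):
        M = np.mean(X[i])                    # Equation (12)
        for j in range(n):
            r1, r2 = np.random.randint(N), np.random.randint(N)
            P = alpha + (X[i, j] - M) / (best[j] * (ub[j] - lb[j]) + eps)  # Eq. (11)
            eta = best[j] * P                # Equation (8)
            R = (best[j] - X[r2, j]) / (best[j] + eps)                     # Eq. (9)
            rand = np.random.rand()
            if t <= T / 4:                   # high walking, Eq. (7) case 1
                new_X[i, j] = best[j] * (-eta) * beta - R * rand
            elif t <= 2 * T / 4:             # belly walking, Eq. (7) case 2
                new_X[i, j] = best[j] * X[r1, j] * ES * rand
            elif t <= 3 * T / 4:             # hunting coordination, Eq. (13)
                new_X[i, j] = best[j] * P * rand
            else:                            # hunting cooperation, Eq. (13)
                new_X[i, j] = best[j] - eta * eps - R * rand
    return np.clip(new_X, lb, ub)            # keep agents inside the bounds
```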


Quantum Mutation Technique

The QM technique is presented in this section. In [34], quantum-behaved PSO methods were introduced based on a Gaussian probability distribution (GPD) mutation operator, motivated by the conventional PSO approach and the assumptions of quantum mechanics. The QM utilizes the GPD, the best-obtained solution [35], and the Monte Carlo method. The mutation operator is implemented using the following equations.

(14)


where $r_B$ is a control parameter that is reduced as the search progresses. $k$ and $u$ are uniformly distributed random values in [0, 1]. The $p$ parameter value is computed by Equation (15),

(15)


where $r_1$ and $r_2$ are random numbers and $g_1 = |GD(0,1)|$, where $GD(0,1)$ is a random value drawn from a Gaussian distribution with zero mean and unit standard deviation.
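Since the exact forms of Equations (14) and (15) are not fully recoverable from this extraction, the following Python sketch shows one common form of a quantum-behaved Gaussian mutation consistent with the description above; the attractor construction and the roles assigned to the parameters are our assumptions, not the paper's definitive operator.

```python
import numpy as np

def quantum_mutation(x, best, r_b):
    """Assumed form of Eqs. (14)-(15): move around an attractor p built from
    the best solution, with a log-uniform jump length and a Gaussian factor
    g1 = |N(0, 1)| perturbing the attractor (our assumption)."""
    k, u = np.random.rand(), np.random.rand()
    r1, r2 = np.random.rand(), np.random.rand()
    g1 = abs(np.random.normal(0.0, 1.0))
    p = (r1 * best + r2 * x * g1) / (r1 + r2)       # assumed attractor (15)
    step = r_b * np.abs(best - x) * np.log(1.0 / u) # log-uniform jump length
    return p + step if k >= 0.5 else p - step       # assumed update (14)
```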


The Proposed Quantum Mutation Reptile Search Algorithm

In this section, we discuss the developed QMRSA.

Initialization Phase

In the proposed QMRSA, the optimization process begins with a collection of candidate solutions (X), generated randomly as given in Equation (16).

$X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,n} \\ \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,n} \end{bmatrix} \qquad (16)$
where N is the total number of agents and n is the problem dimension. $x_{i,j}$ is the $j$-th position of the $i$-th solution, generated using Equation (17).

$x_{i,j} = LB_j + rand \times (UB_j - LB_j) \qquad (17)$
The Detailed Process of the QMRSA

The details of the proposed method are presented in this section. Our primary motivation is to obtain better clustering results and to avoid the weaknesses of the original processes, such as local search problems, convergence limitations, and the balancing problems during the search. In the proposed QMRSA, the optimization starts by creating a random collection of solutions. Then, throughout the course of the updates, the search rules of QMRSA investigate the potential agents around the current-optimal agent, and each agent updates its position. The search methods of the QMRSA are classified into two main approaches, RSA and QM, where the RSA's search approach is classified into global and local search methods. Each method has two search approaches: (1) global (high walking and belly walking strategies) and (2) local (hunting coordination and cooperation). The agents investigate the search space when t $\le$ $\frac{T}{2}$ and seek to refine the near-optimal agent when t > $\frac{T}{2}$. In each iteration, if rand < 0.5, the search process of the RSA is executed; otherwise, the search process of the QM is performed. In the exploration stage of the original RSA, the first global search approach is executed when t $\le$ $\frac{T}{4}$, and the second global search approach is executed when t $\le$ 2$\frac{T}{4}$ and t > $\frac{T}{4}$. In the exploitation stage of the original RSA, the first local search approach is executed when t $\le$ 3$\frac{T}{4}$ and t > 2$\frac{T}{4}$; otherwise, the second local search approach is executed, when t $\le$ T and t > 3$\frac{T}{4}$. Eventually, the QMRSA is terminated when it meets the stop criterion. The steps of the suggested QMRSA are shown in Fig. 2, and a code sketch of this loop is given after the figure.

Fig. 2. Flowchart of the QMRSA.
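To make the phase-switching logic concrete, the hedged sketch below wires the pieces together as we read the description above; it reuses the rsa_update and quantum_mutation sketches given earlier, and the sphere objective is only a stand-in, not one of the paper's experiments.

```python
import numpy as np

def qmrsa(obj, lb, ub, N=30, T=500):
    """Sketch of the QMRSA loop: random init (Eq. (17)), then in each
    generation either an RSA step or a quantum mutation step (rand < 0.5)."""
    n = len(lb)
    X = lb + np.random.rand(N, n) * (ub - lb)          # Equation (17)
    best = X[np.argmin([obj(x) for x in X])].copy()
    for t in range(1, T + 1):
        if np.random.rand() < 0.5:                     # RSA search step
            X = rsa_update(X, best, t, T, lb, ub)
        else:                                          # quantum mutation step
            X = np.clip(np.array([quantum_mutation(x, best, r_b=0.1)
                                  for x in X]), lb, ub)
        fit = np.array([obj(x) for x in X])
        if fit.min() < obj(best):                      # keep the best-so-far
            best = X[np.argmin(fit)].copy()
    return best, obj(best)

# Usage with the sphere function as a stand-in objective
sphere = lambda x: np.sum(x ** 2)
lb, ub = -100 * np.ones(10), 100 * np.ones(10)
# best, score = qmrsa(sphere, lb, ub)
```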



Experimental Results

We assess the performance of the developed QMRSA method using a series of experiments. In addition, we compare the results of QMRSA with a set of MH techniques, including AO, GWO, SCA, WOA, DA, FDA, AOA, and the traditional RSA. The parameters of each algorithm are set according to their original implementations, as in Table 1.
Experiment 1: Classical Optimization Functions

In this part, a set of different classical optimization problems is used to assess the quality of QMRSA. This set includes seven unimodal (F1–F7), six multimodal (F8–F13), and 10 fixed-dimension multimodal functions (F14–F23). The unimodal functions are used to test the proposed method's search ability; because there are no local optima in this class, the accuracy and convergence acceleration of the QMRSA are the primary criteria (unimodal functions F1–F7). In contrast, the QMRSA's exploration capacity is tested using the other two classes, the multimodal (F8–F13) and fixed-dimension multimodal functions (F14–F23), which have several locally optimal solutions. For all of the classic benchmark functions used here, see [36].
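For reference, the snippet below shows two representative members of this suite as they are commonly defined (see [36]); in the standard suite, F1 is usually the sphere function (unimodal) and F9 the Rastrigin function (multimodal with many local optima).

```python
import numpy as np

def f1(x):
    # Sphere function: smooth, unimodal, global minimum 0 at the origin
    return np.sum(x ** 2)

def f9(x):
    # Rastrigin function: highly multimodal, global minimum 0 at the origin
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)
```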

Table 1. Parameter values of the tested algorithms
No. Algorithm Parameter Value
1 Aquila optimizer (AO) α 0.1
δ 0.1
2 Grey wolf optimizer (GWO) Convergence parameter (α) Linear reduction from 2 to 0
3 Sine cosine algorithm (SCA) α 0.05
4 Whale optimization algorithm (WOA) α Decreased from 2 to 0
5 Dragonfly algorithm (DA) w 0.2–0.9
s, a, and c 0.1
f and e 1
6 Flow direction algorithm (FDA) β 3
7 Arithmetic optimization algorithm (AOA) α 5
μ 0.5
8 Reptile search algorithm (RSA) α 0.1
β 0.1
9 Quantum mutation RSA (QMRSA) α 0.1
β 0.1
β2 0.2


Fig. 3. Qualitative analysis of F1, F2, and F13.


4.1.1 Qualitative analysis of the classical functions
This section analyzes the set of functions (F1, F3, and F13) qualitatively. The first column in Fig. 3 shows the true search areas of the supplied benchmark functions, which have different numbers of local optima of diverse forms. Any optimization strategy should typically achieve equilibrium between the search strategies (i.e., exploitation and exploration) to identify the global optima; as a result, this set of functions can be used to evaluate both exploration and exploitation. The second column shows the changes in the first dimension of the best solution during the search process. This indicator can be used to determine whether the best solution changes significantly in the initial iterations and only gradually in the subsequent ones. According to [37], this behavior of the proposed QMRSA can demonstrate that a population-based strategy converges to an area and then searches locally within the specified search zone. Furthermore, the movements decrease over the iterations, a characteristic that promotes the transition from exploration to exploitation. Finally, the solution evolves in stages, leading to the exploitation search. The second qualitative statistic, shown in the third column, is the average fitness value of all employed agents over all iterations. If an optimization strategy enhances the quality of its agents, the average fitness value should clearly decrease over the iterations. The proposed QMRSA approach delivers the best fitness values for practically all investigated problems, as indicated by its average fitness value in Fig. 3 (third column). Another point worth noticing is the steep decline in average fitness values, which shows that the current solutions improve quickly and consistently across iterations. The fourth column contains the third qualitative metric, which depicts the convergence speed of the optimal solution across all iterations. The convergence trajectory in Fig. 3 (fourth column) represents the fitness value of the best solution in each iteration. The convergence of the applied strategies, including the conventional RSA, can be seen in the reduction of fitness values across the iterations. It is also clear from the displayed convergence curves and the evidence mentioned above that the proposed QMRSA's fitness declines faster than the others'. Furthermore, the proposed method (QMRSA) produced the best results for various functions in a short amount of time. This figure shows the algorithm behavior and the convergence analysis during the optimization process.

4.1.2 Comparison with other MH techniques
The quality of the developed QMRSA approach is evaluated using the traditional 23 benchmark functions in this section of the results. The comparative techniques' results are provided in Tables 2–5, and the proposed QMRSA approach demonstrates promising outcomes compared to the other MH techniques. After analyzing the acquired results, the average of the QMRSA is the lowest, which indicates that combining the quantum operators improves the traditional RSA's ability. The proposed approach (QMRSA) obtained the best average and STD values. In addition, it can be seen that the presented QMRSA outperforms all the other well-known methods (i.e., AO, GWO, SCA, WOA, DA, FDA, AOA, and RSA) on various problems, including F1, F2, F3, F4, F6, F13, F14, and F22. Moreover, it was confirmed that the presented QMRSA's exploration and exploitation search abilities were significantly improved by quantum mutation compared to the other approaches, including the standard RSA. The proposed QMRSA approach is nearly the best over the tested unimodal, multimodal, and fixed-dimensional multimodal functions. In the other cases, such as F7, F8, F16, F19, F20, and F23, the presented QMRSA's agents are competitive with the other approaches. In addition, the p-value is a statistical value obtained using the Wilcoxon rank test, which compares the proposed QMRSA method and each of the other tested methods, as in Tables 2–5. A p-value less than 0.05 indicates that the difference between the QMRSA and the compared method is statistically significant, whereas a p-value ≥ 0.05 indicates that the difference is not significant. For example, the difference between the QMRSA and DA is significant in solving F2, and the differences with SCA and FDA are significant in solving F14. For further analysis, the Friedman test is applied to compute the mean rank of each method. From the results in the last row of Tables 2–5, it can be noticed that the QMRSA is ranked first in obtaining the best value among the tested problems, followed by AO, AOA, RSA, GWO, WOA, FDA, SCA, and DA, respectively, on functions F1–F13. For F14–F23, the order is GWO (second), WOA (third), AO (fourth), DA (fifth), FDA (sixth), SCA (seventh), RSA (eighth), and AOA (ninth).
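As an illustration of how such tests can be run, the following sketch uses SciPy on hypothetical per-run fitness samples; the numbers are invented for demonstration only and are not the paper's data.

```python
import numpy as np
from scipy import stats

# Hypothetical best-fitness samples from repeated runs of two algorithms
qmrsa_runs = np.array([1e-30, 3e-30, 2e-30, 5e-31, 4e-30])
rsa_runs   = np.array([2e-28, 8e-28, 3e-28, 6e-28, 1e-28])

# Wilcoxon signed-rank test: p < 0.05 -> the paired difference is significant
print(stats.wilcoxon(qmrsa_runs, rsa_runs).pvalue)

# Friedman test ranks k algorithms over repeated measurements
a = np.array([1.0, 1.2, 0.9, 1.1])
b = np.array([2.0, 2.1, 1.8, 2.2])
c = np.array([3.0, 2.9, 3.2, 3.1])
print(stats.friedmanchisquare(a, b, c).pvalue)
```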

Table 2. Comparative results of QMRSA and other methods for F1–F6
Fun Measure Comparative algorithms
AO GWO SCA WOA DA FDA AOA RSA QMRSA
F1  Best  5.89E-120 5.19E-47 1.41E-09 3.31E-69 2.15E+01 3.23E-11 0.00E+00 0.00E+00 0.00E+00
Average  1.47E-120 1.66E-47 3.98E-10 8.28E-70 7.61E+00 1.53E-11 0.00E+00 0.00E+00 0.00E+00
 Worst  7.14E-150 2.72E-49 2.70E-14 6.43E-78 1.44E-01 6.20E-13 0.00E+00 0.00E+00 0.00E+00
 STD  2.95E-120 2.43E-47 6.81E-10 1.66E-69 9.96E+00 1.59E-11 0.00E+00 0.00E+00 0.00E+00
 p-value  3.56E-01 2.20E-01 2.86E-01 3.56E-01 1.77E-01 1.03E-01 NaN NaN NaN
 Rank  4 6 8 5 9 7 1 1 1
F2  Best  4.22E-58 7.88E-27 5.31E-09 7.34E-49 1.95E+00 1.27E-08 0.00E+00 0.00E+00 0.00E+00
Average  1.11E-58 2.65E-27 2.24E-09 1.84E-49 1.70E+00 5.55E-09 0.00E+00 0.00E+00 0.00E+00
 Worst  3.07E-72 1.59E-28 6.09E-11 1.03E-53 1.52E+00 1.37E-09 0.00E+00 0.00E+00 0.00E+00
 STD  2.07E-58 3.55E-27 2.54E-09 3.67E-49 1.81E-01 4.95E-09 0.00E+00 0.00E+00 0.00E+00
 p-value  3.26E-01 1.87E-01 1.28E-01 3.55E-01 1.47E-06 6.62E-02 NaN NaN NaN
 Rank  4 6 7 5 9 8 1 1 1
F3  Best  1.31E-110 1.05E-18 1.86E+00 9.42E+02 5.15E+02 2.38E-02 0.00E+00 0.00E+00 0.00E+00
Average  3.29E-111 2.85E-19 4.72E-01 4.41E+02 2.12E+02 8.57E-03 0.00E+00 0.00E+00 0.00E+00
 Worst  2.69E-153 1.50E-24 1.48E-04 8.99E+01 2.77E+00 3.31E-03 0.00E+00 0.00E+00 0.00E+00
 STD  6.57E-111 5.12E-19 9.28E-01 4.14E+02 2.16E+02 1.02E-02 0.00E+00 0.00E+00 0.00E+00
 p-value  3.56E-01 3.09E-01 3.48E-01 7.70E-02 9.74E-02 1.44E-01 NaN NaN NaN
 Rank  4 5 7 9 8 6 1 1 1
F4  Best  5.73E-71 1.30E-14 2.13E-02 4.86E+01 4.37E+00 1.69E+00 2.63E-119 0.00E+00 0.00E+00
Average  1.46E-71 6.37E-15 5.81E-03 2.32E+01 2.83E+00 1.18E+00 6.57E-120 0.00E+00 0.00E+00
 Worst  2.28E-78 1.95E-15 5.75E-04 4.40E-01 1.51E+00 5.94E-01 0.00E+00 0.00E+00 0.00E+00
 STD  2.85E-71 5.15E-15 1.03E-02 2.11E+01 1.19E+00 4.62E-01 1.31E-119 0.00E+00 0.00E+00
 p-value  3.45E-01 4.82E-02 3.04E-01 7.05E-02 3.09E-03 2.25E-03 3.56E-01 NaN NaN
 Rank  4 5 6 9 8 7 3 1 1
F5  Best  1.61E-03 8.05E+00 8.08E+00 8.93E+00 1.88E+03 6.45E+00 7.42E+00 8.61E+00 3.36E-29
Average  8.82E-04 6.83E+00 7.54E+00 7.74E+00 5.69E+02 5.64E+00 7.23E+00 8.17E+00 2.80E-29
 Worst  3.13E-04 5.86E+00 7.28E+00 7.20E+00 2.20E+01 5.13E+00 6.95E+00 7.38E+00 1.87E-29
 STD  5.38E-04 9.88E-01 3.71E-01 8.10E-01 8.83E+02 6.35E-01 2.02E-01 5.43E-01 6.76E-30
 p-value  8.90E-08 5.42E-02 1.00E-01 4.04E-01 2.51E-01 9.13E-04 1.75E-02 8.89E-08 NaN
 Rank  2 4 6 7 9 3 5 8 1
F6  Best  1.41E-04 5.77E-06 5.67E-01 1.23E-01 3.01E+00 1.58E+00 8.71E-02 2.25E+00 7.64E-11
Average  8.03E-05 4.63E-06 4.19E-01 3.84E-02 2.09E+00 1.42E+00 4.51E-02 2.13E+00 2.29E-11
 Worst  2.44E-05 2.59E-06 2.58E-01 6.35E-03 1.03E-02 1.33E+00 7.21E-03 1.93E+00 6.09E-13
 STD  6.07E-05 1.40E-06 1.36E-01 5.63E-02 1.42E+00 1.11E-01 3.54E-02 1.56E-01 3.60E-11
 p-value  2.39E-07 2.39E-07 2.80E-05 5.56E-07 3.82E-01 2.39E-07 3.86E-07 3.11E-04 NaN
 Rank  3 2 6 4 8 7 5 9 1


Table 3. Comparative results of QMRSA and other methods for F7–F13
Fun Measure Comparative algorithms
AO GWO SCA WOA DA FDA AOA RSA QMRSA
F7   Best  1.43E-03 1.14E-03 3.08E-03 1.91E-03 3.84E-02 2.18E-02 1.99E-04 2.86E-04 2.26E-04
Average  5.36E-04 7.39E-04 2.20E-03 1.09E-03 2.29E-02 8.78E-03 1.46E-04 2.00E-04 1.28E-04
 Worst  9.87E-05 4.25E-04 1.33E-03 2.38E-04 9.03E-03 4.07E-03 2.97E-05 7.95E-05 1.43E-05
 STD  6.04E-04 3.16E-04 9.99E-04 6.99E-04 1.31E-02 8.68E-03 7.94E-05 9.37E-05 8.77E-05
 p-value  3.13E-01 1.70E-02 7.26E-03 4.42E-02 1.34E-02 9.52E-02 4.19E-01 3.04E-01 NaN
 Rank  4 5 7 6 9 8 2 3 1
F8   Best  -1.92E+03 -2.14E+03 -1.75E+03 -2.81E+03 -2.38E+03 -2.76E+03 -2.41E+03 -1.91E+03 -1.81E+03
Average  -2.47E+03 -2.69E+03 -2.02E+03 -3.55E+03 -2.76E+03 -3.08E+03 -2.71E+03 -1.94E+03 -2.04E+03
 Worst  -2.95E+03 -3.00E+03 -2.20E+03 -4.19E+03 -3.11E+03 -3.36E+03 -3.12E+03 -1.99E+03 -2.23E+03
 STD  4.23E+02 3.80E+02 1.95E+02 7.43E+02 2.98E+02 2.51E+02 3.08E+02 3.55E+01 1.95E+02
 p-value  1.13E-01 2.31E-02 9.02E-01 7.76E-03 6.75E-03 5.96E-04 1.07E-02 3.28E-01 1.00E+00
 Rank  6 5 8 1 3 2 4 9 7
F9   Best  0.00E+00 9.95E+00 4.15E+00 2.58E+01 1.32E+01 1.59E+01 0.00E+00 0.00E+00 0.00E+00
Average  0.00E+00 2.49E+00 1.04E+00 6.45E+00 1.06E+01 9.46E+00 0.00E+00 0.00E+00 0.00E+00
 Worst  0.00E+00 0.00E+00 5.97E-13 0.00E+00 8.80E+00 3.98E+00 0.00E+00 0.00E+00 0.00E+00
 STD  0.00E+00 4.98E+00 2.07E+00 1.29E+01 1.96E+00 5.18E+00 0.00E+00 0.00E+00 0.00E+00
 p-value  NaN 3.56E-01 3.54E-01 3.56E-01 3.74E-05 1.07E-02 NaN NaN NaN
 Rank  1 6 5 7 9 8 1 1 1
F10   Best  8.88E-16 1.51E-14 5.11E-06 4.44E-15 1.99E+01 2.01E+00 8.88E-16 8.88E-16 8.88E-16
Average  8.88E-16 9.77E-15 1.87E-06 1.78E-15 8.15E+00 1.33E+00 8.88E-16 8.88E-16 8.88E-16
 Worst  8.88E-16 7.99E-15 2.36E-07 8.88E-16 2.10E+00 2.60E-06 8.88E-16 8.88E-16 8.88E-16
 STD  0.00E+00 3.55E-15 2.22E-06 1.78E-15 7.99E+00 9.01E-01 0.00E+00 0.00E+00 0.00E+00
 p-value  NaN 2.45E-03 1.44E-01 3.56E-01 8.77E-02 2.58E-02 NaN NaN NaN
 Rank  1 6 7 5 9 8 1 1 1
F11   Best  0.00E+00 1.69E-02 6.86E-01 3.99E-01 1.58E+00 4.92E-01 0.00E+00 0.00E+00 0.00E+00
Average  0.00E+00 7.55E-03 4.39E-01 1.40E-01 8.93E-01 3.17E-01 0.00E+00 0.00E+00 0.00E+00
 Worst  0.00E+00 0.00E+00 1.42E-04 0.00E+00 5.17E-01 1.87E-01 0.00E+00 0.00E+00 0.00E+00
 STD  0.00E+00 8.84E-03 3.15E-01 1.89E-01 4.74E-01 1.40E-01 0.00E+00 0.00E+00 0.00E+00
 p-value  NaN 1.38E-01 3.20E-02 1.90E-01 9.33E-03 4.06E-03 NaN NaN NaN
 Rank  1 5 8 6 9 7 1 1 1
F12   Best  2.01E-05 3.97E-02 1.55E-01 1.44E-01 2.30E+00 1.24E+00 3.89E-01 7.23E-01 6.09E-02
Average  5.33E-06 1.49E-02 1.28E-01 5.09E-02 1.15E+00 6.22E-01 2.78E-01 6.07E-01 5.63E-02
 Worst  7.60E-08 5.43E-06 9.17E-02 7.58E-03 2.07E-01 9.48E-09 1.97E-01 5.16E-01 5.00E-02
 STD  9.82E-06 1.90E-02 2.71E-02 6.39E-02 8.79E-01 5.68E-01 8.70E-02 9.08E-02 5.49E-03
 p-value  6.88E-04 1.04E-03 1.66E-02 5.60E-03 9.55E-02 2.77E-01 2.24E-03 1.96E-03 NaN
 Rank  1 2 5 3 9 8 6 7 4
F13   Best  1.79E-05 1.02E-01 4.49E-01 1.62E-01 4.00E+00 9.92E-10 8.84E-01 8.27E-28 4.85E-31
Average  9.39E-06 5.02E-02 3.46E-01 8.22E-02 1.95E+00 2.87E-10 7.41E-01 2.93E-28 2.55E-31
 Worst  9.75E-10 2.98E-06 2.18E-01 1.52E-02 7.74E-01 3.02E-11 4.83E-01 4.60E-31 2.46E-32
 STD  7.64E-06 5.80E-02 1.07E-01 6.12E-02 1.41E+00 4.70E-10 1.79E-01 3.89E-28 2.61E-31
 p-value  4.93E-02 1.34E-01 6.59E-04 3.64E-02 3.24E-02 2.68E-01 1.67E-04 1.83E-01 NaN
 Rank  4 5 7 6 9 3 8 2 1
 Mean ranking  3 4.769 6.692 5.615 8.308 6.308 3 3.462 1.692
 Final ranking  2 5 8 6 9 7 2 4 1


Table 4. Comparative results of QMRSA and other methods for F14–F20
Fun Measure Comparative algorithms
AO GWO SCA WOA DA FDA AOA RSA QMRSA
F14  Best  2.98E+00 1.08E+01 1.01E+00 1.08E+01 1.99E+00 6.83E+00 1.27E+01 2.98E+00 9.98E-01
 Average  1.99E+00 4.18E+00 1.00E+00 3.94E+00 1.25E+00 3.75E+00 9.51E+00 2.79E+00 9.98E-01
 Worst  9.98E-01 9.98E-01 9.99E-01 9.98E-01 9.98E-01 2.21E+00 9.98E-01 2.20E+00 9.98E-01
 STD  1.15E+00 4.46E+00 6.64E-03 4.65E+00 4.97E-01 2.08E+00 5.70E+00 3.90E-01 1.34E-13
 p-value  1.89E-01 8.66E-01 3.87E-02 9.44E-01 5.80E-02 3.84E-02 1.06E-01 3.99E-01 NaN
 Rank  4 8 2 7 3 6 9 5 1
F15  Best  5.95E-04 1.57E-03 1.53E-03 7.83E-04 2.26E-02 7.68E-04 3.48E-03 1.83E-03 4.58E-04
 Average  4.68E-04 1.09E-03 1.25E-03 6.84E-04 6.27E-03 5.36E-04 1.63E-03 1.65E-03 3.86E-04
 Worst  3.49E-04 5.56E-04 5.55E-04 4.37E-04 7.63E-04 3.07E-04 3.73E-04 1.42E-03 3.10E-04
 STD  1.34E-04 4.64E-04 4.67E-04 1.65E-04 1.09E-02 2.64E-04 1.52E-03 1.79E-04 6.07E-05
 p-value  4.12E-02 2.33E-02 6.51E-01 1.47E-01 3.77E-01 8.18E-02 5.21E-01 6.72E-02 NaN
 Rank  2 5 6 4 9 3 7 8 1
F16  Best  -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00
 Average  -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00
 Worst  -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00
 STD  1.67E-03 7.80E-08 8.11E-05 3.20E-09 2.06E-04 9.50E-05 9.19E-08 2.54E-04 0.00E+00
 p-value  1.82E-01 1.66E-02 1.49E-01 1.65E-02 6.57E-01 1.65E-02 1.66E-02 7.85E-03 NaN
 Rank  9 3 5 2 6 7 4 8 1
F17  Best  4.00E-01 4.00E-01 4.06E-01 3.98E-01 3.98E-01 4.12E-01 4.30E-01 6.23E-01 3.98E-01
 Average  3.99E-01 3.98E-01 4.01E-01 3.98E-01 3.98E-01 4.03E-01 4.12E-01 4.64E-01 3.98E-01
 Worst  3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 4.05E-01 4.01E-01 3.98E-01
 STD  8.97E-04 9.19E-04 3.67E-03 4.40E-06 0.00E+00 6.27E-03 1.19E-02 1.07E-01 0.00E+00
 p-value  2.28E-01 1.95E-01 6.59E-01 1.56E-01 1.56E-01 1.56E-01 2.38E-01 2.96E-01 NaN
 Rank  5 4 6 3 1 7 8 9 1
F18  Best  3.10E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+01 3.01E+01 3.00E+00
 Average  3.04E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 9.75E+00 9.79E+00 3.00E+00
 Worst  3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00
 STD  4.52E-02 8.26E-05 4.54E-05 1.49E-03 2.28E-04 2.56E-16 1.35E+01 1.36E+01 3.79E-08
 p-value  1.59E-01 3.25E-01 2.04E-01 2.97E-01 1.04E-01 1.04E-01 3.56E-01 3.56E-01 NaN
 Rank  7 4 3 6 5 1 8 9 2
F19  Best  -3.85E+00 -3.86E+00 -3.85E+00 -3.86E+00 -3.86E+00 -3.83E+00 -3.85E+00 -3.75E+00 -3.86E+00
 Average  -3.85E+00 -3.86E+00 -3.85E+00 -3.86E+00 -3.86E+00 -3.85E+00 -3.85E+00 -3.80E+00 -3.86E+00
 Worst  -3.86E+00 -3.86E+00 -3.85E+00 -3.86E+00 -3.86E+00 -3.85E+00 -3.85E+00 -3.85E+00 -3.86E+00
 STD  5.73E-03 3.75E-03 1.81E-03 3.05E-03 5.19E-05 1.08E-02 7.33E-04 4.50E-02 2.56E-16
 p-value  2.23E-01 3.77E-02 2.28E-01 4.31E-02 1.89E-02 1.85E-02 2.34E-01 1.03E-01 NaN
 Rank  5 3 6 4 2 8 7 9 1
F20  Best  -3.05E+00 -3.16E+00 -2.04E+00 -3.06E+00 -2.95E+00 -2.81E+00 -3.00E+00 -1.90E+00 -3.20E+00
 Average  -3.13E+00 -3.25E+00 -2.50E+00 -3.18E+00 -3.17E+00 -2.95E+00 -3.03E+00 -2.28E+00 -3.29E+00
 Worst  -3.26E+00 -3.32E+00 -2.97E+00 -3.32E+00 -3.32E+00 -3.07E+00 -3.05E+00 -2.71E+00 -3.32E+00
 STD  9.45E-02 8.30E-02 4.22E-01 1.11E-01 1.55E-01 1.08E-01 2.67E-02 3.64E-01 5.94E-02
 p-value  5.04E-02 4.46E-03 8.13E-02 2.60E-02 6.04E-02 1.44E-03 2.01E-01 1.26E-02 NaN
 Rank  5 2 8 3 4 7 6 9 1


Table 5. Comparative results of QMRSA and other methods for F21–F23
Fun Measure Comparative algorithms
AO GWO SCA WOA DA FDA AOA RSA QMRSA
F21  Best  -1.01E+01 -2.63E+00 -8.79E-01 -5.06E+00 -2.62E+00 -4.59E+00 -1.61E+00 -5.06E+00 -5.05E+00
 Average  -1.02E+01 -5.73E+00 -3.08E+00 -5.06E+00 -5.13E+00 -7.50E+00 -2.27E+00 -5.06E+00 -8.86E+00
 Worst  -1.02E+01 -1.02E+01 -4.91E+00 -5.06E+00 -1.02E+01 -1.02E+01 -3.03E+00 -5.06E+00 -1.01E+01
 STD  1.56E-03 3.16E+00 1.99E+00 1.33E-07 3.55E+00 3.07E+00 7.39E-01 1.69E-07 2.53E+00
 p-value  8.72E-22 6.83E-01 9.44E-02 2.40E-02 9.69E-01 1.63E-01 2.85E-04 8.15E-01 NaN
 Rank  1 4 8 7 5 3 9 6 2
F22  Best  -1.04E+01 -5.09E+00 -9.06E-01 -3.72E+00 -2.70E+00 -2.77E+00 -2.44E+00 -5.09E+00 -1.04E+01
 Average  -1.04E+01 -5.09E+00 -3.45E+00 -6.06E+00 -5.80E+00 -5.51E+00 -3.31E+00 -5.09E+00 -1.04E+01
 Worst  -1.04E+01 -5.09E+00 -4.95E+00 -1.04E+01 -1.03E+01 -1.04E+01 -4.29E+00 -5.09E+00 -1.04E+01
 STD  1.21E-02 1.14E-06 1.90E+00 2.95E+00 3.18E+00 3.41E+00 8.00E-01 8.35E-07 1.41E-03
 p-value  1.46E-16 3.63E-22 1.35E-01 5.32E-01 6.72E-01 8.14E-01 4.31E-03 8.81E-01 NaN
 Rank  2 6 8 3 4 5 9 7 1
F23  Best  -5.13E+00 -1.05E+01 -9.41E-01 -2.80E+00 -2.81E+00 -2.87E+00 -1.23E+00 -5.13E+00 -5.13E+00
 Average  -5.13E+00 -1.05E+01 -3.16E+00 -5.83E+00 -4.01E+00 -6.94E+00 -3.83E+00 -5.13E+00 -5.13E+00
 Worst  -5.13E+00 -1.05E+01 -4.94E+00 -1.03E+01 -5.18E+00 -1.05E+01 -5.62E+00 -5.13E+00 -5.13E+00
 STD  2.04E-07 1.37E-03 1.72E+00 3.16E+00 1.35E+00 4.17E+00 1.86E+00 1.95E-06 2.04E-07
 p-value  1.15E-21 2.82E-22 6.21E-02 6.71E-01 1.48E-01 4.17E-01 2.14E-01 4.01E-01 NaN
 Rank  5 1 9 3 7 2 8 4 5
 Mean ranking  4.5 4 6.1 4.2 4.6 4.9 7.5 7.4 1.6
 Final ranking  4 2 7 3 5 6 9 8 1


Fig. 4. Convergence curves of QMRSA and other methods for (a) F1, (b) F3, (c) F5, and (d) F23.


Fig. 5. Qualitative results for the studied problems: (a) F1, (b) F6, (c) F10, and (d) F11.


Fig. 4 shows examples of the convergence curves of the proposed QMRSA on the conventional benchmark functions (F1–F23). Due to the problem of local optima, the integration between RSA and quantum mutation has been proposed to boost the RSA's exploitation search ability. As shown in Fig. 4, the convergence rate of the developed method is better than that of the other methods on most of the comparative problems, which indicates the high ability of the developed QMRSA to balance the exploration and exploitation phases. As demonstrated in Fig. 4, QMRSA outperformed the traditional RSA in all investigated scenarios. Furthermore, QMRSA outperformed all other comparative approaches in solving this type of problem with the fastest convergence acceleration. As previously established, the QMRSA is not only robust and successful in delivering the best results, but it also has a faster convergence rate than the other comparable approaches.
The diversity of QMRSA and RSA has been evaluated, as in Fig. 5, for F1, F2, F6, F7, F10, and F11. It can be seen from these figures that the diversity of the developed QMRSA is better than that of the traditional RSA, especially in the later iterations. In addition, it can be noticed that the diversity of the QMRSA solutions is preserved during the searching process, even as the number of iterations grows. In contrast, the diversity of the traditional RSA decreases, especially in the final third of the iterations.
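Diversity can be measured in several ways; the sketch below shows one common choice, the mean distance of the agents from the population centroid, which we assume here since the paper does not spell out its exact measure.

```python
import numpy as np

def population_diversity(X):
    # Mean Euclidean distance of the agents (rows of X) from the population
    # centroid; a common diversity measure (assumed, the paper may differ)
    centroid = X.mean(axis=0)
    return np.mean(np.linalg.norm(X - centroid, axis=1))
```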

Data Clustering Problem

To demonstrate how the new method can be utilized in practice, QMRSA is applied to a data clustering problem. In addition, a performance comparison with other well-known data clustering methods is presented in this section.
4.2.1 Background and problem
This experiment uses eight well-known datasets: Glass, Cancer, Iris, CMC, Seeds, Vowels, Heart, and Wine [38, 39]. The attributes of these datasets are listed in Table 6.
Table 6. UCI benchmark datasets
Dataset Features no. Instances no. Classes no.
Cancer  9 683 2
CMC  10 1473 3
Glass  9 214 7
Iris  4 150 3
Seeds  7 210 3
Heart  13 270 2
Vowels  6 871 3
Wine  13 178 3
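In this setting, each candidate solution can encode the K cluster centroids as a single flat vector of length K × d, which the optimizer then evolves. The sketch below illustrates that encoding and the resulting fitness on the Iris dataset, assuming scikit-learn is available; the encoding choice is ours, not necessarily the paper's.

```python
import numpy as np
from sklearn.datasets import load_iris

X_data = load_iris().data            # 150 objects, 4 features, K = 3 classes
K, d = 3, X_data.shape[1]

def fitness(solution):
    # A flat solution vector of length K * d is reshaped into K centroids,
    # and the fitness is the total distance to the nearest centroid (Eq. (1))
    centroids = solution.reshape(K, d)
    dists = np.linalg.norm(X_data[:, None, :] - centroids[None, :, :], axis=2)
    return dists.min(axis=1).sum()

rng = np.random.default_rng(0)
lb, ub = X_data.min(axis=0), X_data.max(axis=0)
candidate = (lb + rng.random((K, d)) * (ub - lb)).ravel()
print(fitness(candidate))            # objective value for a random candidate
```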


4.2.2 Results and discussion
The results of QMRSA were compared to those of the AO, GWO, SCA, WOA, DA, FDA, AOA, and RSA, as listed in Table 7. In addition, for each dataset, the Wilcoxon rank-sum test was used to check for a significant difference between the QMRSA and the other methods. The proposed QMRSA technique produced the smallest average fitness value and was ranked first according to Table 7. The FDA algorithm was ranked second, followed by RSA, GWO, AO, WOA, AOA, DA, and SCA. In terms of the best measure, the QMRSA performed similarly to both FDA and RSA, which achieved comparable values and placed second and third. The QMRSA algorithm achieved the best results for the Cancer, Glass, CMC, and Seeds datasets. The RSA obtained the second-best results for the Cancer, CMC, Iris, Vowels, and Wine datasets, while FDA achieved the best value for the Iris, Vowels, and Wine datasets. The most stable algorithm on Cancer is AO, followed by QMRSA and RSA in the second and third ranks; on CMC, the most stable are SCA, WOA, and AOA. Moreover, by analyzing the results for the Cancer dataset, it can be observed that SCA and WOA are the worst algorithms, providing the highest average fitness values, whereas the QMRSA has the smallest value, followed by RSA and FDA in the second and third ranks, respectively. These results are also more stable than the other methods' results, except for AO's. In the case of CMC, the RSA provided better average results than the other compared methods; however, the developed QMRSA is still ranked first, and the STD of QMRSA is better than the STD of AOA and RSA. In the case of Glass, FDA determines the cluster centers more precisely than all the other methods except the QMRSA, whereas SCA is the worst among the competitive methods. On the Iris, Statlog (Heart), Vowels, and Wine datasets, the FDA is ranked first, providing better results than the other methods, followed by the QMRSA on Iris, Statlog (Heart), and Wine, whose average is better than the remaining competitive methods'. On the Seeds dataset, the QMRSA determines the best centroid of each cluster, as observed from its smallest average fitness value.
Furthermore, the Friedman test determines the significant difference between the proposed method and the other methods. It can be noticed that the proposed QMRSA obtains the best rank among all the competitive methods, followed by FDA and the traditional RSA in the second and third ranks, respectively. On the other hand, according to the results obtained in this study, the SCA is considered the worst algorithm. In addition, the convergence curves of the developed method and the other methods are given in Fig. 6. By analyzing these results, it can be seen that the developed QMRSA reaches the smallest fitness value faster than the other methods over all the tested datasets.
From the previous results and their discussion, the high ability of the developed QMRSA method to solve different global optimization problems and the data clustering problems can be noticed. Despite its high performance on the tested problems, the method still suffers from some limitations, such as its computation time, which could be reduced using the concepts of parallel computing.
Fig. 6. Convergence behavior of the comparative algorithms using the tested data clustering problems: (a) Cancer, (b) CMC, (c) Glass, and (d) Iris datasets.


Table 7. Comparative results of QMRSA and other clustering methods using eight datasets
Metric Comparative algorithms
AO GWO SCA WOA DA FDA AOA RSA QMRSA
Cancer Worst 2.93E+03 2.90E+03 3.44E+03 3.34E+03 3.48E+03 7.47E+02 3.30E+03 5.24E+02 3.46E+02
Average 2.80E+03 2.68E+03 3.09E+03 3.15E+03 3.44E+03 5.48E+02 3.25E+03 3.56E+02 1.92E+02
Best 2.67E+03 2.40E+03 2.88E+03 2.80E+03 3.39E+03 2.50E+02 3.18E+03 2.44E+02 5.84E+01
STD 1.04E+02 2.50E+02 2.07E+02 2.40E+02 3.44E+01 1.99E+02 4.97E+01 1.13E+02 1.12E+02
p-value 4.02E-10 6.17E-08 5.30E-09 1.14E-08 7.96E-12 9.71E-02 1.87E-11 4.98E-02 NaN
Rank 5 4 6 7 9 3 8 2 1
CMC Worst 3.30E+02 3.10E+02 3.35E+02 3.34E+02 3.35E+02 9.70E+01 3.34E+02 7.61E+01 8.35E+01
Average 3.28E+02 3.07E+02 3.33E+02 3.33E+02 3.33E+02 9.24E+01 3.33E+02 7.15E+01 7.04E+01
Best 3.25E+02 3.02E+02 3.32E+02 3.31E+02 3.30E+02 8.44E+01 3.31E+02 6.80E+01 4.96E+01
STD 2.45E+00 2.81E+00 1.34E+00 1.37E+00 1.95E+00 1.28E+01 1.09E+00 3.59E+00 4.87E+00
p-value 1.49E-13 3.99E-13 6.67E-14 6.89E-14 9.21E-14 7.15E-03 6.20E-14 5.75E-05 NaN
Rank 5 4 9 8 6 3 7 2 1
Glass Worst 3.11E+01 3.32E+01 3.49E+01 3.46E+01 3.51E+01 4.49E+00 3.43E+01 4.22E+00 1.65E+00
Average 3.02E+01 3.00E+01 3.47E+01 3.40E+01 3.48E+01 1.52E+00 3.37E+01 3.57E+00 1.05E+00
Best 2.84E+01 2.63E+01 3.46E+01 3.37E+01 3.44E+01 0.00E+00 3.24E+01 3.15E+00 7.14E-01
STD 1.10E+00 2.47E+00 1.34E-01 3.76E-01 2.80E-01 1.83E+00 7.62E-01 3.85E-01 4.20E-01
p-value  2.57E-11 1.12E-08 2.91E-15 2.51E-14 8.44E-15 4.04E-02 8.54E-13 9.26E-06 NaN
Rank 5 4 8 7 9 2 6 3 1
Iris Worst 2.08E+01 1.63E+01 2.46E+01 2.43E+01 2.44E+01 6.27E+00 2.39E+01 5.18E+00 5.18E+00
Average 1.91E+01 1.42E+01 2.37E+01 2.32E+01 2.37E+01 3.48E+00 2.37E+01 3.82E+00 3.82E+00
Best 1.79E+01 1.27E+01 2.21E+01 2.23E+01 2.28E+01 2.02E+00 2.35E+01 2.53E+00 2.53E+00
STD 1.12E+00 1.72E+00 1.04E+00 7.72E-01 7.98E-01 1.71E+00 1.55E-01 8.19E-01 9.75E-01
p-value 1.30E-08 2.53E-06 1.22E-09 5.12E-10 4.52E-10 7.12E-01 6.48E-11 1.01E-02 NaN
Rank 5 4 9 6 8 1 7 2 2
Seeds Worst 4.01E+01 3.80E+01 5.00E+01 4.99E+01 5.03E+01 1.33E+01 4.89E+01 1.16E+01 4.95E+00
Average 3.87E+01 3.62E+01 4.92E+01 4.81E+01 4.98E+01 7.71E+00 4.85E+01 9.41E+00 4.09E+00
Best 3.77E+01 3.41E+01 4.76E+01 4.66E+01 4.93E+01 4.80E+00 4.81E+01 8.27E+00 2.58E+00
STD 9.81E-01 1.79E+00 1.05E+00 1.52E+00 3.44E-01 3.74E+00 2.97E-01 9.45E-01 1.33E+00
p-value 1.82E-10 3.94E-09 1.92E-11 9.69E-11 3.20E-12 3.66E-01 3.90E-12 8.52E-05 NaN
Rank 5 4 8 6 9 2 7 3 1
Statlog (Heart) Worst 1.37E+03 1.17E+03 1.68E+03 1.61E+03 1.63E+03 5.78E+01 1.63E+03 7.24E+01 3.97E+01
Average 1.29E+03 9.24E+02 1.62E+03 1.58E+03 1.54E+03 2.05E+01 1.55E+03 3.68E+01 2.40E+01
Best 1.16E+03 7.55E+02 1.49E+03 1.52E+03 1.28E+03 1.25E+01 1.46E+01 1.16E+01 1.15E+01
STD 8.74E+01 1.68E+02 7.74E+01 3.54E+01 1.48E+02 2.84E+01 6.14E+01 1.57E+01 3.15E+01
p-value 1.55E-09 2.71E-06 1.07E-10 1.44E-12 1.84E-08 4.14E-01 3.25E-11 4.41E-01 NaN
Rank 5 4 9 8 6 1 7 3 2
Vowels Worst 1.42E+02 1.36E+02 1.53E+02 1.52E+02 1.53E+02 1.76E+01 1.53E+02 2.30E+01 3.58E+01
Average 1.40E+02 1.31E+02 1.53E+02 1.51E+02 1.53E+02 1.27E+01 1.52E+02 2.09E+01 3.00E+01
Best 1.36E+02 1.25E+02 1.53E+02 1.50E+02 1.52E+02 9.94E+00 1.52E+02 1.90E+01 2.07E+01
STD 2.60E+00 4.88E+00 1.40E-01 6.06E-01 3.93E-01 3.13E+00 3.08E-01 1.78E+00 6.16E+00
p-value 3.40E-10 2.25E-09 6.97E-11 8.12E-11 7.24E-11 5.11E-04 7.35E-11 1.34E-02 NaN
Rank 5 4 9 6 8 1 7 2 3
Wine Worst 3.33E+03 2.84E+03 3.93E+03 3.85E+03 3.98E+03 9.09E+02 3.84E+03 6.76E+02 6.76E+02
Average 3.02E+03 2.46E+03 3.83E+03 3.74E+03 3.86E+03 4.64E+02 3.79E+03 4.84E+02 4.84E+02
Best 2.73E+03 2.15E+03 3.60E+03 3.59E+03 3.65E+03 2.20E+02 3.70E+03 3.51E+02 3.51E+02
STD 2.15E+02 3.45E+02 1.33E+02 1.11E+02 1.48E+02 2.84E+02 6.02E+01 1.14E+02 1.35E+02
p-value 1.69E-08 2.27E-06 1.83E-10 1.22E-10 2.69E-10 8.90E-01 2.84E-11 4.23E-02 NaN
Rank 5 4 8 6 9 1 7 2 2
Mean ranking 5.00E+00 4.00E+00 8.25E+00 6.75E+00 8.00E+00 1.75E+00 7.00E+00 2.38E+00 1.63E+00
Final ranking 5 4 9 6 8 2 7 3 1



Conclusion and Future Work

This paper has presented a modified version of a new metaheuristic technique named RSA. This modification uses the strength of QM to enhance the ability of RSA to balance between exploration and exploitation, which improves the diversity of the solutions and increases the convergence rate, and this is reflected in the quality of the final solution. To assess the performance of the developed method, a set of experiments has been conducted using standard benchmark functions with different characteristics. In addition, the results of the proposed QMRSA have been compared with AO, GWO, SCA, WOA, DA, FDA, AOA, and the traditional RSA. The results show the high ability of the proposed QMRSA to find the best solution over the tested functions compared with the other methods. In addition, to evaluate the applicability of QMRSA, it has been used as a clustering technique. Since the clustering problem is an NP-hard problem, it has several real-world applications in IoT, cloud computing, data mining, and other fields. The proposed QMRSA has been applied to eight datasets, and the results have been compared with the same algorithms used in the first experiment. The clustering results and the statistical Friedman test show the high ability of QMRSA to determine the cluster centroids. Moreover, the main advantage of the proposed method is that it can find new best solutions with a high accuracy rate. According to the obtained results, the improved QMRSA can be extended to photovoltaic modeling, task scheduling, engineering design problems, and feature selection.


Author’s Contributions

Conceptualization, LA. Methodology, LA. Investigation and methodology, RA, MM, SC, MS, LA, MAE. Supervision, LA. Writing of the original draft, RA, MM, SC, MS, LA, MAE. Software, LA. Validation, LA. Visualization, RA, MM, SC, MS, MAE.


Funding

This research project was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R239), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.


Competing Interests

The authors declare that they have no competing interests.


Author Biography

Author
Name : Almodfer Rolla
Affiliation : School of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China
Biography : Almodfer Rolla is an assistant professor at School of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China. Her main research interests are optimization and machine learning.

Author
Name : Mohammed Mudhsh
Affiliation : School of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China
Biography : Mohammed Mudhsh is an assistant professor at the School of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China. His main research interests are artificial intelligence, optimization, and machine learning.

Author
Name : Samia Chelloug
Affiliation : Information Technology Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University
Biography : Samia Chelloug is an assistant professor at Information Technology Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University. Her main research interests are optimization and machine learning, ad hoc networks, concave programming, and network coding.

Author
Name : Mohammad Shehab
Affiliation : Information Technology, The World Islamic Sciences and Education University, Amman, Jordan
Biography : Mohammad Shehab received his B.Sc. from Al-Zaytoonah University of Jordan, Software Engineering, Jordan, in 2009. He received the master degree from the Universiti Sains Malaysia, Computer Science/ Artificial Intelligence, in 2012. Also, He received a Ph.D. degree from Universiti Sains Malaysia, Computer Science/ Artificial Intelligence and Software Engineering, in 2018. Dr. Shehab's research focuses on Metaheuristic algorithms, particularly in the areas of population modeling and parameter control.

Author
Name : Laith Abualigah
Affiliation : Faculty of Computer Sciences and Informatics, Amman Arab University, 11953 Amman, Jordan.
Biography : Laith Abualigah is an Assistant Professor at the Computer Science Department, Amman Arab University, Jordan. He is also a distinguished researcher at the School of Computer Science, Universiti Sains Malaysia, Malaysia. He received his first degree from Al-Albayt University, Computer Information System, Jordan, in 2011. He earned a Master’s degree from Al-Albayt University, Computer Science, Jordan, in 2014. He received a Ph.D. degree from the School of Computer Science in Universiti Sains Malaysia (USM), Malaysia, in 2018. According to the report published by Stanford University in 2020, Abualigah is one of the 2% influential scholars, which depicts the 100,000 top scientists in the world. Abualigah has published more than 100 journal papers and books, which collectively have been cited more than 4300 times (H-index = 32). His main research interests focus on Arithmetic Optimization Algorithm (AOA), Bio-inspired Computing, Nature-inspired Computing, Swarm Intelligence, Artificial Intelligence, Meta-heuristic Modeling, and Optimization Algorithms, Evolutionary Computations, Information Retrieval, Text clustering, Feature Selection, Combinatorial Problems, Optimization, Advanced Machine Learning, Big data, and Natural Language Processing. Abualigah currently serves as an associate editor of the Journal of Cluster Computing (Springer), the Journal of Soft Computing (Springer), and Journal of King Saud University - Computer and Information Sciences (Elsevier).

Author
Name : Mohamed Abd Elaziz
Affiliation : Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
Biography : Mohamed Abd Elaziz received the B.S. and M.S. degrees in computer science and the Ph.D. degree in mathematics and computer science from Zagazig University, Egypt, in 2008, 2011, and 2014, respectively. From 2008 to 2011, he was an Assistant Lecturer with the Department of Computer Science. He is currently an Associate Professor with Zagazig University. He has authored or coauthored more than 100 articles. His research interests include metaheuristic techniques, cloud computing, machine learning, signal processing, image processing, and evolutionary algorithms.


References

[1] M. H. Nadimi-Shahraki, A. Fatahi, H. Zamani, S. Mirjalili, L. Abualigah, and M. Abd Elaziz, “Migration-based moth-flame optimization algorithm,” Processes, vol. 9, no. 12, article no. 2276, 2021. https://doi.org/10.3390/pr9122276
[2] M. H. Nadimi-Shahraki, S. Taghian, S. Mirjalili, A. A. Ewees, L. Abualigah, and M. Abd Elaziz, “MTV-MFO: multi-trial vector-based moth-flame optimization algorithm,” Symmetry, vol. 13, no. 12, article no. 2388, 2021. https://doi.org/10.3390/sym13122388
[3] M. Shehab, A. T. Khader, and M. A. Al-Betar, “A survey on applications and variants of the cuckoo search algorithm,” Applied Soft Computing, vol. 61, pp. 1041-1059, 2017.
[4] R. Eberhart and J. Kennedy, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942-1948.
[5] L. Abualigah, A. Diabat, S. Mirjalili, M. Abd Elaziz, and A. H. Gandomi, “The arithmetic optimization algorithm,” Computer Methods in Applied Mechanics and Engineering, vol. 376, article no. 113609, 2021. https://doi.org/10.1016/j.cma.2020.113609
[6] A. Eid, S. Kamel, and L. Abualigah, “Marine predators algorithm for optimal allocation of active and reactive power resources in distribution networks,” Neural Computing and Applications, vol. 33, no. 21, pp. 14327-14355, 2021.
[7] R. A. Zitar, L. Abualigah, and N. A. Al-Dmour, “Review and analysis for the red deer algorithm,” Journal of Ambient Intelligence and Humanized Computing, 2021. https://doi.org/10.1007/s12652-021-03602-1
[8] L. Abualigah, D. Yousri, M. Abd Elaziz, A. A. Ewees, M. A. Al-Qaness, and A. H. Gandomi, “Aquila optimizer: a novel meta-heuristic optimization algorithm,” Computers & Industrial Engineering, vol. 157, article no. 107250, 2021. https://doi.org/10.1016/j.cie.2021.107250
[9] A. M. AlRassas, M. A. Al-qaness, A. A. Ewees, S. Ren, M. Abd Elaziz, R. Damasevicius, and T. Krilavicius, “Optimized ANFIS model using Aquila Optimizer for oil production forecasting,” Processes, vol. 9, no. 7, article no. 1194, 2021. https://doi.org/10.3390/pr9071194
[10] S. Mirjalili, S. M. Mirjalili, and A. Lewis, “Grey wolf optimizer,” Advances in Engineering Software, vol. 69, pp. 46-61, 2014.
[11] E. Emary, H. M. Zawbaa, and A. E. Hassanien, “Binary ant lion approaches for feature selection,” Neurocomputing, vol. 213, pp. 54-65, 2016.
[12] S. Mirjalili and A. Lewis, “The whale optimization algorithm,” Advances in Engineering Software, vol. 95, pp. 51-67, 2016.
[13] A. A. Ewees, M. A. Al-qaness, and M. Abd Elaziz, “Enhanced salp swarm algorithm based on firefly algorithm for unrelated parallel machine scheduling with setup times,” Applied Mathematical Modelling, vol. 94, pp. 285-305, 2021.
[14] M. Abd El Aziz, A. A. Ewees, and A. E. Hassanien, “Whale optimization algorithm and moth-flame optimization for multilevel thresholding image segmentation,” Expert Systems with Applications, vol. 83, pp. 242-256, 2017.
[15] L. Abualigah, M. Abd Elaziz, P. Sumari, Z. W. Geem, and A. H. Gandomi, “Reptile search algorithm (RSA): a nature-inspired meta-heuristic optimizer,” Expert Systems with Applications, vol. 191, article no. 116158, 2022. https://doi.org/10.1016/j.eswa.2021.116158
[16] L. Abualigah, M. Shehab, M. Alshinwan, S. Mirjalili, and M. A. Elaziz, “Ant lion optimizer: a comprehensive survey of its variants and applications,” Archives of Computational Methods in Engineering, vol. 28, no. 3, pp. 1397-1416, 2021.
[17] M. Alshinwan, L. Abualigah, M. Shehab, M. A. Elaziz, A. M. Khasawneh, H. Alabool, and H. A. Hamad, “Dragonfly algorithm: a comprehensive survey of its results, variants, and applications,” Multimedia Tools and Applications, vol. 80, no. 10, pp. 14979-15016, 2021.
[18] K. B. Tanghe, “On ‘The Origin of Species’: the story of Darwin's title,” Notes and Records: the Royal Society Journal of the History of Science, vol. 73, no. 1, pp. 83-100, 2019.
[19] J. McFadden and J. Al-Khalili, “A quantum mechanical model of adaptive mutation,” Biosystems, vol. 50, no. 3, pp. 203-211, 1999.
[20] K. H. Han and J. H. Kim, “Quantum-inspired evolutionary algorithm for a class of combinatorial optimization,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 6, pp. 580-593, 2002.
[21] A. E. Azzaoui, P. K. Sharma, and J. H. Park, “Blockchain-based delegated Quantum Cloud architecture for medical big data security,” Journal of Network and Computer Applications, vol. 198, article no. 103304, 2022. https://doi.org/10.1016/j.jnca.2021.103304
[22] A. E. Azzaoui, T. W. Kim, Y. Pan, and J. H. Park, “A quantum approximate optimization algorithm based on blockchain heuristic approach for scalable and secure smart logistics systems,” Human-centric Computing and Information Sciences, vol. 11, article no. 46, 2021. https://doi.org/10.22967/HCIS.2021.11.046
[23] A. Kaveh, M. Kamalinejad, and H. Arzani, “Quantum evolutionary algorithm hybridized with enhanced colliding bodies for optimization,” Structures, vol. 28, pp. 1479-1501, 2020.
[24] A. Gepp and P. Stocks, “A review of procedures to evolve quantum algorithms,” Genetic Programming and Evolvable Machines, vol. 10, no. 2, pp. 181-228, 2009.
[25] A. Malossini, E. Blanzieri, and T. Calarco, “Quantum genetic optimization,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 231-241, 2008.
[26] J. Liu, W. Xu, and J. Sun, “Quantum-behaved particle swarm optimization with mutation operator,” in Proceedings of 17th IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Hong Kong, 2005.
[27] S. Nama, S. Sharma, A. K. Saha, and A. H. Gandomi, “A quantum mutation-based backtracking search algorithm,” Artificial Intelligence Review, vol. 55, no. 4, pp. 3019-3073, 2022.
[28] L. Jiao, Y. Li, M. Gong, and X. Zhang, “Quantum-inspired immune clonal algorithm for global optimization,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 38, no. 5, pp. 1234-1253, 2008.
[29] H. Wang, J. Liu, J. Zhi, and C. Fu, “The improvement of quantum genetic algorithm and its application on function optimization,” Mathematical Problems in Engineering, vol. 2013, article no. 730749, 2013. https://doi.org/10.1155/2013/730749
[30] M. Xu, L. Zhang, B. Du, L. Zhang, Y. Fan, and D. Song, “A mutation operator accelerated quantum-behaved particle swarm optimization algorithm for hyperspectral endmember extraction,” Remote Sensing, vol. 9, no. 3, article no. 197, 2017. https://doi.org/10.3390/rs9030197
[31] A. Dey, S. Dey, S. Bhattacharyya, J. Platos, and V. Snasel, “Quantum inspired meta‐heuristic approaches for automatic clustering of colour images,” International Journal of Intelligent Systems, vol. 36, no. 9, pp. 4852-4901, 2021.
[32] M. Abd Elaziz, A. H. Elsheikh, D. Oliva, L. Abualigah, S. Lu, and A. A. Ewees, “Advanced metaheuristic techniques for mechanical design problems,” Archives of Computational Methods in Engineering, vol. 29, no. 1, pp. 695-716, 2022.
[33] Y. Wang and W. Wang, “Quantum-inspired differential evolution with grey wolf optimizer for 0-1 knapsack problem,” Mathematics, vol. 9, no. 11, article no. 1233, 2021. https://doi.org/10.3390/math9111233.
[34] M. Clerc and J. Kennedy, “The particle swarm-explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58-73, 2002.
[35] L. Wang, C. Wang, X. Zhang, T. Lan, and J. Li, “S-AT GCN: spatial-attention graph convolution network based feature enhancement for 3D object detection,” 2021 [Online]. Available: https://arxiv.org/abs/2103.08439
[36] F. Van den Bergh and A. P. Engelbrecht, “A study of particle swarm optimization particle trajectories,” Information Sciences, vol. 176, no. 8, pp. 937-971, 2006.
[37] L. Abualigah, A. H. Gandomi, M. A. Elaziz, A. G. Hussien, A. M. Khasawneh, M. Alshinwan, and E. H. Houssein, “Nature-inspired optimization algorithms for text document clustering: a comprehensive analysis,” Algorithms, vol. 13, no. 12, article no. 345, 2020. https://doi.org/10.3390/a13120345.
[38] L. Abualigah, A. H. Gandomi, M. A. Elaziz, H. A. Hamad, M. Omari, M. Alshinwan, and A. Khasawneh, “Advances in meta-heuristic optimization algorithms in big data text clustering,” Electronics, vol. 10, no. 2, article no. 101, 2021. https://doi.org/10.3390/electronics10020101.

About this article
Received: 20 January 2022
Accepted: 14 February 2022
Published: 30 June 2022