Article

IBGJO: Improved Binary Golden Jackal Optimization with Chaotic Tent Map and Cosine Similarity for Feature Selection

1 College of Computer Science and Technology, Jilin University, Changchun 130012, China
2 Key Laboratory of Symbol Computation and Knowledge Engineering of the Ministry of Education, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(8), 1128; https://doi.org/10.3390/e25081128
Submission received: 16 June 2023 / Revised: 23 July 2023 / Accepted: 26 July 2023 / Published: 27 July 2023
(This article belongs to the Special Issue Entropy in Machine Learning Applications)

Abstract

Feature selection is a crucial process in machine learning and data mining that identifies the most pertinent and valuable features in a dataset. It enhances the efficacy and precision of predictive models by efficiently reducing the number of features, which improves classification accuracy, lessens the computational burden, and enhances overall performance. This study proposes the improved binary golden jackal optimization (IBGJO) algorithm, an extension of the conventional golden jackal optimization (GJO) algorithm. IBGJO serves as a search strategy for wrapper-based feature selection. It comprises three key factors: a population initialization process with a chaotic tent map (CTM) mechanism that enhances exploitation ability and guarantees population diversity, an adaptive position update mechanism based on cosine similarity to prevent premature convergence, and a binary mechanism well-suited for binary feature selection problems. We evaluated IBGJO on 28 classical datasets from the UC Irvine Machine Learning Repository. The results show that the CTM mechanism and the cosine-similarity-based position update strategy proposed in IBGJO significantly improve the rate of convergence of the conventional GJO algorithm, and that the accuracy of IBGJO is also significantly better than that of the other algorithms. Additionally, we evaluate the effectiveness and performance of the enhanced factors. Our empirical results show that the proposed CTM mechanism and the cosine-similarity-based position update strategy help the conventional GJO algorithm converge faster.

1. Introduction

Machine learning and data mining have expanded into many fields, including active matter, molecular and materials science, natural language processing (NLP) and biomedicine [1,2,3]. To support increasingly complex machine learning models, many datasets with high-dimensional feature spaces have been created [4,5]. However, as the dimensionality of the data increases, redundant features become more numerous, and it becomes harder to train models with high generalization ability. Feature selection is therefore necessary to address these problems. Feature selection is a critical step in data mining and machine learning that involves identifying the most relevant and useful features within a dataset or set of characteristics. Predictive models can be made more effective and precise by eliminating redundant or unnecessary features. This improves classification accuracy and helps algorithms generalize better to new data, prevent overfitting, and produce more accurate predictions. In addition to these benefits, feature selection can help uncover hidden relationships within the data and provide more insightful explanations for predictive models.
Unsupervised feature selection methods can be classified into three main approaches, similar to supervised and semi-supervised feature selection [6,7]. These approaches are determined by the feature selection strategy employed, namely filter, wrapper, and embedded methods [8,9]. In particular, filter methods rank the features according to scores computed with a statistical metric that assigns each feature a meaningful score. In the wrapper method, the feature subset obtained by the selection algorithm is evaluated using a classifier, and feature selection is guided by the classifier's feedback [10]; although wrapper methods may consume more computational resources, their accuracy is greater than that of filter methods, as indicated by [11]. Furthermore, in embedded methods, feature selection is regarded as a component of the machine learning training phase, which makes them a particular case of wrapper methods [12,13].
Meanwhile, feature selection can be viewed as a global combinatorial optimization problem. In particular, a subset of the original feature set constitutes a solution to this optimization problem, which can be solved using exhaustive or heuristic search techniques [14]. In contrast to heuristic search methods, exhaustive search methods typically have higher computational costs, especially for high-dimensional datasets [4]. Using meta-heuristic search techniques may therefore be a more practical way to solve the feature selection problem [14]. Consequently, it is essential to select features using efficient methods.
Evolutionary algorithms (EAs) have recently been applied to feature selection challenges because of their global search capability. Numerous researchers have used various swarm intelligence techniques to address feature selection issues, including the cuckoo search (CS) [15], genetic algorithm (GA) [11], particle swarm optimization (PSO) [12], whale optimization algorithm (WOA) [16], sparrow search algorithm (SSA) [17], Harris hawks optimization (HHO) [18,19] and variants of these algorithms. For instance, Hegazy et al. [20] attempt to enhance the basic SSA structure to increase solution accuracy, reliability, and convergence speed. Additionally, Behrouz et al. propose an unsupervised probabilistic feature selection algorithm using ant colony optimization [21].
The golden jackal optimization (GJO) algorithm is one such EA, and research has demonstrated that it is both efficient and simple to apply [22]. Nevertheless, despite its extensive use, the conventional GJO algorithm has certain drawbacks, such as insufficient exploitation of the problem space. In addition, the no free lunch (NFL) theorem contends that no single algorithm is capable of solving every optimization problem [23]. Moreover, the conventional GJO algorithm was created to solve continuous optimization problems, so it may not be the best choice for feature selection tasks involving binary solution spaces. These circumstances drive us to improve the conventional GJO to make it better suited for feature selection tasks.
The main contributions of this paper are summarized as follows:
  • We aim to simultaneously reduce the number of selected features and improve the classification accuracy. Specifically, we design a fitness function to achieve these optimization objectives jointly.
  • We propose an improved binary golden jackal optimization algorithm (IBGJO) to solve the designed fitness function. First, IBGJO introduces a chaotic tent map to improve the exploitation capability of conventional GJO. Second, a new position-updating mechanism by cosine similarity is proposed to balance the exploitation and exploration capabilities of the algorithm. Finally, a binarization strategy is introduced to transfer the continuous solution space to the binary ones, making it suitable for dealing with feature selection solutions.
  • We conduct various experiments to assess the performance of the proposed IBGJO with the comparative algorithms on 28 classical UC Irvine (UCI) Machine Learning Repository datasets in terms of average fitness value, average classification accuracy, average CPU running time and average number of selected features.
The rest of this paper is organized as follows. Section 2 gives a brief overview of the related work. Section 3 formulates the fitness function for feature selection. Section 4 gives the details of IBGJO. Section 5 presents the experimental results. Finally, Section 6 concludes this paper and suggests future work.

2. Related Work

The significance of wrapper-based selection techniques in feature selection optimizations cannot be overlooked [24,25]. These methods operate on the premise of treating feature selection as a black box, and employ meta-heuristic algorithms and classifiers to obtain the optimal subset [26]. Numerous classical meta-heuristic algorithms have undergone modifications to tackle the feature selection problem, such as binary bat algorithm (BBA) [27], bare bones particle swarm optimization algorithm (BPSO) [12], binary gray wolf optimization algorithm (BGWO) [28], binary gravitational search algorithm (BGSA) [29], and so on.
In recent times, an increasing number of novel algorithms have been proposed to enhance wrapper-based feature selection, owing to its vital significance. For instance, Al-Tashi et al. [30] examine binary optimization utilizing hybrid grey wolf optimization for feature selection; to resolve feature selection issues, a binary version of the hybrid grey wolf optimization (GWO) and PSO is suggested. In 2019 [31], binary variants of the butterfly optimization algorithm (BOA) were proposed and utilized to choose the best feature subset for classification purposes. A self-adaptive particle swarm optimization (SaPSO) approach is suggested by Xue et al., especially for large-scale feature selection [32]. The two-archive multi-objective artificial bee colony algorithm (TMABC-FS) is a multi-objective feature selection approach that Zhang et al. investigate to satisfy diverse decision-makers' criteria [33]. To increase the predictability of a hospitalization expense model, a novel method based on the GA was proposed in 2019 for feature selection and parameter optimization of the support vector machine (SVM) [34]. Aiming to find distinguishing characteristics across several class labels, Zhang et al. [35] offer an embedded multi-label feature selection approach with manifold regularization. To develop a more affordable computational model for voice analysis-based emotion categorization, Dey et al. [36] offer a meta-heuristic feature selection (FS) method employing a hybrid of equilibrium optimization (EO) and golden ratio optimization (GRO) algorithms. In 2020, Wang and Chen [37] proposed an improved whale optimization algorithm (CMWOA) that integrates chaotic and multi-swarm techniques to accomplish parameter optimization and feature selection simultaneously for SVM. Also in 2020, a hybrid crow search optimization method integrated with chaos theory and the fuzzy c-means algorithm was proposed for feature selection in medical diagnosis [38].
Several leading-edge researchers have focused on GJO algorithms for optimizing feature selection. Initially designed to address continuous problems, GJO requires transfer functions to convert it into a binary form (BGJO) [39] that can effectively handle feature selection optimizations. While some studies have made strides in addressing feature selection challenges across a variety of contexts, it is important to note that the NFL [23] theorem holds that no method can solve every optimization problem. Furthermore, none of the aforementioned research has identified optimal subsets of variables across all datasets tested. Nonetheless, given the strong potential of conventional GJO in this area, our aim in this study is to incorporate several enhanced factors into conventional GJO with the objective of improving the efficiency of feature selection optimizations.

3. Problem Formulation

In this study, feature selection aims to minimize the number of chosen features while improving the classification accuracy, which can be defined as a multi-objective optimization problem [40]. To jointly consider these two objectives, we construct the following fitness function:
$$f_{Fitness} = \alpha \cdot E_r + \beta \cdot \frac{F_s}{F_a} \tag{1}$$
where F_s and F_a stand for the number of chosen features and the total number of features, respectively, and E_r is the classification error rate of a given classifier. The weights α and β are used to balance these two objectives.
The formulated feature selection problem has a nonlinear discrete search space with numerous potential local minimum points. As a result, we propose the binary IBGJO algorithm to address the feature selection problem.
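As a concrete illustration, the following Python sketch shows how the fitness of a binary feature mask could be evaluated under this formulation; the KNN classifier, the 80/20 split, and the weights α = 0.99 and β = 0.01 follow the experimental setup in Section 5, while the function and variable names are our own.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.99, beta=0.01):
    """Fitness of a binary feature mask: alpha * error_rate + beta * (selected / total)."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():                       # an empty subset gets the worst possible value
        return 1.0
    X_tr, X_te, y_tr, y_te = train_test_split(X[:, mask], y, test_size=0.2, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    error = 1.0 - clf.score(X_te, y_te)      # classification error rate E_r
    return alpha * error + beta * mask.sum() / mask.size
```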

4. Proposed Improved Golden Jackal Optimization Algorithm for Feature Selection

The following section provides a succinct overview of the conventional GJO algorithm and its key principles, before delving into a comprehensive discussion of the formulation of the proposed IBGJO algorithm.

4.1. Conventional Golden Jackal Optimization

The conventional GJO algorithm is inspired by the hunting behavior of golden jackal pairs and adopts a swarm-based approach [22]. Figure 1 shows the entire foraging process of the golden jackal pair, which includes searching for prey, tracking and surrounding the prey, attacking the prey, and capturing it. This section presents the mathematical model of the conventional GJO algorithm.

4.1.1. Search Space Formulation

Similar to other metaheuristic methods, the initial solutions of the golden jackal optimization algorithm are uniformly distributed over the search space:
$$Y_0 = Y_{min} + rand \times (ub - lb) \tag{2}$$
where Y_0 represents the initial randomized population, and ub and lb denote the upper and lower boundaries of the decision variables. Moreover, rand is a random number that falls within the range [0, 1]. The initialization procedure involves generating a foundational Prey matrix, with the male and female jackals occupying the first and second positions, respectively. The composition of the Prey matrix is illustrated as follows:
$$Prey = \begin{bmatrix} Y_{1,1} & Y_{1,2} & \cdots & Y_{1,d} \\ Y_{2,1} & Y_{2,2} & \cdots & Y_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ Y_{n,1} & Y_{n,2} & \cdots & Y_{n,d} \end{bmatrix} \tag{3}$$
where Y_{i,j} stands for the j-th dimension of the i-th prey. There are n preys and d variables in total. Each prey position can be regarded as a candidate solution. During optimization, an objective function is used to assess the fitness of each prey, with the resulting fitness values compiled into a matrix:
$$FOA = \begin{bmatrix} f(Y_{1,1}; Y_{1,2}; \ldots; Y_{1,d}) \\ f(Y_{2,1}; Y_{2,2}; \ldots; Y_{2,d}) \\ \vdots \\ f(Y_{n,1}; Y_{n,2}; \ldots; Y_{n,d}) \end{bmatrix} \tag{4}$$
where f is the objective function, Y_{i,j} is the value of the j-th dimension of the i-th prey, and FOA is the matrix storing each prey's fitness. The male jackal (MJ) is the fittest individual, and the female jackal (FMJ) is the second fittest. The jackal pair locates the appropriate prey position.

4.1.2. Searching for Prey (Exploration Stage)

With their remarkable capability to detect and pursue prey, jackals can usually track down food successfully. Nevertheless, their attempts sometimes fail and the potential prey escapes, prompting the jackals to give up and search for alternative sources of sustenance. During hunts, the MJ takes the lead while the FMJ follows closely behind; the hunting behaviour of the jackal pair is modelled mathematically as follows:
$$Y_1(t) = Y_M(t) - E \cdot |Y_M(t) - rl \cdot Prey(t)| \tag{5a}$$
$$Y_2(t) = Y_{FM}(t) - E \cdot |Y_{FM}(t) - rl \cdot Prey(t)| \tag{5b}$$
where t represents the current iteration and Prey(t) is the vector indicating the prey's position, while Y_M(t) and Y_FM(t) are the positions of the MJ and FMJ, respectively. The updated positions of the MJ and FMJ in relation to the prey are represented by Y_1(t) and Y_2(t). The prey's evasive energy E is computed as:
$$E = E_1 \cdot E_0 \tag{6a}$$
$$E_0 = 2r - 1 \tag{6b}$$
$$E_1 = c_1 \cdot (1 - t/T) \tag{6c}$$
E_0 denotes the initial state of the prey's energy, while E_1 represents the prey's declining energy; r is a random value between 0 and 1, T stands for the maximum iteration number, and c_1 is a constant set to 1.5. E_1 decreases linearly across the iterations from 1.5 to 0. In Equations (5a) and (5b), the distance between the jackal and the prey is calculated by |Y(t) − rl · Prey(t)|. Depending on how well the prey manages to evade the jackal, this distance is either added to or subtracted from the jackal's current position. The vector of random numbers rl in Equations (5a) and (5b) represents Lévy movement and is based on the Lévy flight. Prey is multiplied by rl to imitate Lévy-style prey movement, which is comparable to MPA [41], and is computed as follows:
$$rl = 0.05 \cdot LF(y) \tag{7}$$
LF is the Lévy flight function, which is calculated as follows:
$$LF(y) = 0.01 \times \frac{\mu \times \sigma}{|v|^{1/\beta}}, \qquad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi\beta/2)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta-1}{2}}} \right)^{1/\beta} \tag{8}$$
where β is a constant set to 1.5, and μ and v are random values in (0, 1). The jackal positions are updated by averaging Equations (5a) and (5b), which results in the following:
$$Y(t+1) = \frac{Y_1(t) + Y_2(t)}{2} \tag{9}$$
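For illustration, a minimal Python sketch of the exploration update in Equations (5a), (5b) and (7)–(9) is given below; it assumes NumPy vectors for the jackal and prey positions, and the random draws for μ and v follow the description above.

```python
import numpy as np
from math import gamma, pi, sin

def levy_flight(dim, beta=1.5):
    """Levy-flight step LF(y) of Equation (8), with mu and v drawn from (0, 1)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu, v = np.random.rand(dim), np.random.rand(dim)
    return 0.01 * mu * sigma / np.abs(v) ** (1 / beta)

def exploration_step(y_m, y_fm, prey, E):
    """Exploration update of Equations (5a), (5b), (7) and (9)."""
    rl = 0.05 * levy_flight(prey.size)             # Equation (7)
    y1 = y_m - E * np.abs(y_m - rl * prey)         # Equation (5a)
    y2 = y_fm - E * np.abs(y_fm - rl * prey)       # Equation (5b)
    return (y1 + y2) / 2                           # Equation (9)
```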

4.1.3. Tracking and Pouncing the Prey (Exploitation Stage)

As the prey is pursued by the jackals, its evasive energy declines, leading to its eventual encirclement by the jackal pair identified in the earlier phase. Once encircled, the prey is attacked and consumed by the jackals. The following mathematical model represents the hunting behaviour of the male and female jackals hunting in pairs:
$$Y_1(t) = Y_M(t) - E \cdot |rl \cdot Y_M(t) - Prey(t)| \tag{10a}$$
$$Y_2(t) = Y_{FM}(t) - E \cdot |rl \cdot Y_{FM}(t) - Prey(t)| \tag{10b}$$
where Prey(t) is the position vector of the prey at the current iteration t, and Y_M(t) and Y_FM(t) indicate the positions of the MJ and FMJ. The updated MJ and FMJ positions in relation to the prey are represented by Y_1(t) and Y_2(t). Equation (6a) determines the prey's evading energy E. The jackal positions are updated according to Equation (9).
The purpose of rl in Equations (10a) and (10b) is to introduce random behavior in the exploitation stage, favoring exploration and avoiding local optima; rl is determined by Equation (7). In the final iterations, this component helps to avoid stagnation in local optima. This factor can also be interpreted in terms of the jackals moving closer to the prey: natural obstacles typically hinder the jackals' proper and swift movement toward their prey, and this is the role of rl during the exploitation stage.

4.1.4. Switching from Exploration to Exploitation

The escape energy of the prey is utilized in the conventional GJO algorithm to transition from exploration to exploitation. During evasive behaviour, the prey's energy decreases significantly; accordingly, Equation (6a) is used to model the evasive energy. In every iteration, the initial energy E_0 varies randomly within the range of −1 to 1. When E_0 decreases from 0 to −1, the prey is physically weakening, whereas when E_0 increases from 0 to 1, the prey is growing stronger.
As shown in Figure 2, the evasive energy E decreases over the iterations. When |E| > 1, the jackal pair searches for prey by exploring different areas, and when |E| < 1, the jackals attack the prey and engage in predation, as depicted in Figure 1.
To sum up, the conventional GJO search procedure starts with the random generation of a population of prey (possible solutions). During the iterations, the MJ and FMJ hunting pair estimates the location of the prey, and each prospective member of the population updates its distance from the jackal pair. The E_1 parameter is decreased from 1.5 to 0 to emphasize exploration first and exploitation later. When |E| > 1, the golden jackal pair diverges from the prey, and when |E| < 1, it converges on the prey. The conventional GJO algorithm terminates when an end criterion is satisfied. Algorithm 1 presents the conventional GJO algorithm's pseudo-code.
Algorithm 1 Conventional Golden Jackal Optimization
Require: the population size N_pop, solution dimension N_dim, the maximum number of iterations T_max, lower and upper bounds Lb, Ub, the fitness function, the golden jackal GJ, prey, etc.
Ensure: the best solution found in the search process
 1: Initialize the population through the random mechanism
 2: while (t < T_max)
 3:    Calculate the fitness values of the preys
 4:    Y_1 = best prey (male jackal position)
 5:    Y_2 = second-best prey (female jackal position)
 6:    for (each prey)
 7:       Update the evading energy E using Equations (6a), (6b) and (6c)
 8:       Update rl using Equations (7) and (8)
 9:       if (|E| >= 1)  // Exploration phase
10:          Update the prey position using Equations (5a), (5b) and (9)
11:       if (|E| < 1)  // Exploitation phase
12:          Update the prey position using Equations (10a), (10b) and (9)
13:    end for
14:    t = t + 1
15: end while
16: return Y_1
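As a complement to the pseudo-code, the following Python sketch (illustrative only) outlines how one iteration of the conventional GJO loop could be organized; it reuses the hypothetical levy_flight helper from the previous listing and assumes a minimization problem.

```python
import numpy as np

def gjo_iteration(population, fitness_values, t, T_max, c1=1.5):
    """One iteration of conventional GJO over a continuous population (Equations (5a)-(10b))."""
    order = np.argsort(fitness_values)                       # minimization: smaller fitness is better
    y_m, y_fm = population[order[0]], population[order[1]]   # male and female jackal positions
    E1 = c1 * (1 - t / T_max)                                # Equation (6c): decays from 1.5 to 0
    new_population = population.copy()
    for i, prey in enumerate(population):
        E = E1 * (2 * np.random.rand() - 1)                  # Equations (6a) and (6b)
        rl = 0.05 * levy_flight(prey.size)                   # Equation (7), helper from the sketch above
        if abs(E) >= 1:                                      # exploration, Equations (5a) and (5b)
            y1 = y_m - E * np.abs(y_m - rl * prey)
            y2 = y_fm - E * np.abs(y_fm - rl * prey)
        else:                                                # exploitation, Equations (10a) and (10b)
            y1 = y_m - E * np.abs(rl * y_m - prey)
            y2 = y_fm - E * np.abs(rl * y_fm - prey)
        new_population[i] = (y1 + y2) / 2                    # Equation (9)
    return new_population
```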

4.2. The Proposed IBGJO

This section introduces the enhancement factors in the proposed IBGJO algorithm, including the population initialization strategy based on the chaotic tent map, the optimal location update mechanism based on cosine similarity, and the sigmoid function used to discretize the continuous solution space. Finally, the complexity of the IBGJO algorithm is analyzed.

4.2.1. Chaotic Tent Map for Population Initialization

In the conventional GJO algorithm, the initial population is generated randomly, which can make it difficult to retain population diversity and hinder the algorithm's ability to reach the optimal solution. In contrast, the chaotic tent map (CTM) mechanism is characterized by randomness, ergodicity, and regularity. It can be used either to generate the initial population or as a perturbation during the optimization process [37,42]. This approach mitigates the tendency of the algorithm to become trapped in a suboptimal local solution, thereby improving its search efficiency compared with the original algorithm. The CTM mechanism is described in Algorithm 2.
Algorithm 2 Chaotic Tent Map (CTM) Mechanism
Define and initialize the related parameters: the population size N_pop, solution dimension N_dim, chaotic tent map threshold a, and the lower and upper boundaries lb and ub.
 1: for i = 1 to N_pop
 2:    for j = 1 to N_dim
 3:       if rand < a
 4:          x_{i,j} = rand / a
 5:       else
 6:          x_{i,j} = a · (1 − rand)
 7:    x_i = lb + x_i · (ub − lb)
 8: Compute the mean of x
 9: for i = 1 to N_pop
10:    for j = 1 to N_dim
11:       if x_{i,j} < mean
12:          x_{i,j} = 0
13:       else
14:          x_{i,j} = 1
15: return x
where a is the tent map threshold, generally set to 0.5. In IBGJO, we use the CTM as the initialization mechanism. Considering the different dimensions of the datasets, we take Hillvalley (one of the 28 datasets) as an example of population initialization. As shown in Figure 3, the number of golden jackals in the population is 20, and the dimension is 100. In Figure 3, the points produced by random population initialization are denoted in red, while the points produced by CTM population initialization are shown in blue. Compared with the random mechanism, the CTM mechanism exhibits better distribution and randomness. Therefore, the initialized population is more evenly spread over the search space, which is more conducive to the algorithm's optimization efficiency and solution accuracy.
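A minimal Python sketch of the CTM initialization, following the reconstruction of Algorithm 2 above (with a = 0.5 and binarization against the population mean), is given below; the function name and array layout are our own.

```python
import numpy as np

def ctm_init(n_pop, n_dim, lb, ub, a=0.5):
    """Chaotic-tent-map population initialization, a sketch of Algorithm 2."""
    r = np.random.rand(n_pop, n_dim)
    x = np.where(r < a, r / a, a * (1.0 - r))   # tent-map branches as reconstructed above
    x = lb + x * (ub - lb)                      # scale into the search space
    return (x >= x.mean()).astype(int)          # binarize against the population mean
```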

4.2.2. Cosine Similarity for Position Update

The conventional GJO algorithm (Algorithm 1) updates the positions of the jackals by Equation (9) during the iteration process, which is equivalent to taking the mean of the two best solutions. Although this method ensures the smoothness of the jackal position updates, it has some drawbacks. The most obvious flaw is that it does not consider the correlation between different features. When features are correlated, the mean update mechanism may cause some features to be overemphasized or ignored, thereby affecting the model's performance. In addition, when the data distribution is uneven, the mean update mechanism may lead to poor prediction performance on specific data. Therefore, we propose cosine similarity for updating the positions of the MJ and FMJ. Compared with the mean update mechanism, using cosine similarity as the update mechanism has the advantage of taking the correlation between different features into account, thus updating the positions more accurately. In addition, cosine similarity is not affected by vector length or data distribution and is suitable for high-dimensional data [43]. The mathematical model of cosine similarity is defined as follows:
$$Cos_{sim}(Y_1(t), Y_2(t)) = \frac{Y_1(t) \cdot Y_2(t)}{\|Y_1(t)\| \, \|Y_2(t)\|} \tag{11}$$
where Y_1(t) and Y_2(t) represent the positions of the MJ and FMJ, respectively, · denotes the dot product, and ||Y_1(t)|| and ||Y_2(t)|| are the Euclidean norms of the two position vectors. The value of Cos_sim(Y_1(t), Y_2(t)) lies in [−1, 1]. In this paper, we use the absolute value of the cosine similarity between the golden jackal pair as the weight of the position update, which is defined as follows:
$$Y(t+1) = Y_1(t) \times |Cos_{sim}(Y_1(t), Y_2(t))| + Y_2(t) \times \left(1 - |Cos_{sim}(Y_1(t), Y_2(t))|\right) \tag{12}$$
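A minimal Python sketch of this update rule, covering Equations (11) and (12) for a single pair of position vectors, is given below; the zero-norm guard is our own addition for numerical safety.

```python
import numpy as np

def cosine_update(y1, y2):
    """Position update of Equations (11) and (12): weight the male and female jackal
    positions by the absolute cosine similarity between them."""
    denom = np.linalg.norm(y1) * np.linalg.norm(y2)
    cos_sim = float(np.dot(y1, y2) / denom) if denom > 0 else 0.0   # Equation (11)
    w = abs(cos_sim)
    return y1 * w + y2 * (1.0 - w)                                  # Equation (12)
```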
Algorithm 3 Improved Binary Golden Jackal Optimization
Require: the population size N_pop, solution dimension N_dim, the maximum number of iterations T_max, lower and upper bounds Lb, Ub, the fitness function, the golden jackal GJ, prey, etc.
Ensure: the best solution found in the search process
 1: Initialize the population through the chaotic tent map mechanism (Algorithm 2)
 2: while (t < T_max)
 3:    Calculate the fitness values of the preys
 4:    Y_1 = best prey (male jackal position)
 5:    Y_2 = second-best prey (female jackal position)
 6:    for (each prey)
 7:       Update the evading energy E using Equations (6a), (6b) and (6c)
 8:       Update rl using Equations (7) and (8)
 9:       if (|E| >= 1)  // Exploration phase
10:          Update the prey position using Equations (5a), (5b) and (12)
11:       if (|E| < 1)  // Exploitation phase
12:          Update the prey position using Equations (10a), (10b) and (12)
13:    end for
14:    t = t + 1
15: end while
16: return Y_1

4.2.3. Sigmoid Binary Mechanism

The solutions in conventional GJO are continuous and can be updated directly using Equations (5a), (5b), (10a) and (10b). However, the solution space of the formulated feature selection problem is discrete and cannot be handled by conventional GJO. Therefore, a binary mechanism is introduced to map the solutions from continuous to discrete space, making the algorithm suitable for feature selection problems. For the solution mappings in this work, the commonly used S-shaped transfer function [44], i.e., the sigmoid function, is applied to both conventional GJO and IBGJO. The transfer function and the binary mechanism are given in Equations (13) and (14):
$$x_{sig} = \frac{1}{1 + e^{-x}} \tag{13}$$
$$x_{binary} = \begin{cases} 1, & N_{random} \le x_{sig} \\ 0, & N_{random} > x_{sig} \end{cases} \tag{14}$$
where x_binary is the converted binary solution of the feature selection problem, and N_random is a random number used as the threshold. Figure 4 illustrates the binary mechanism used in this paper.
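A short Python sketch of the binarization in Equations (13) and (14), assuming the reconstructed sigmoid form above and a fresh random threshold per dimension, is given below.

```python
import numpy as np

def binarize(x):
    """S-shaped transfer of Equations (13) and (14): map continuous positions to {0, 1}."""
    x = np.asarray(x, dtype=float)
    x_sig = 1.0 / (1.0 + np.exp(-x))        # sigmoid transfer value, Equation (13)
    n_random = np.random.rand(*x.shape)     # random threshold N_random
    return (n_random <= x_sig).astype(int)  # Equation (14)
```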

4.3. Feature Selection Based on IBGJO

When employing the proposed IBGJO for the formulated feature selection problem, a solution can be viewed as a golden jackal. Consequently, a solution can be expressed as follows:
$$g = (G_1, G_2, G_3, \ldots, G_{N_{dim}}) \tag{15}$$
where N_dim represents the number of features and N_pop is the number of individuals; thus, the IBGJO population is expressed as follows:
$$pop = \begin{bmatrix} g_1 \\ g_2 \\ \vdots \\ g_{N_{pop}} \end{bmatrix} = \begin{bmatrix} G_1^1 & G_2^1 & G_3^1 & \cdots & G_{N_{dim}}^1 \\ G_1^2 & G_2^2 & G_3^2 & \cdots & G_{N_{dim}}^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ G_1^{N_{pop}} & G_2^{N_{pop}} & G_3^{N_{pop}} & \cdots & G_{N_{dim}}^{N_{pop}} \end{bmatrix} \tag{16}$$
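For illustration, the binary population of Equation (16) can be stored as a simple NumPy array; the sketch below uses the population size from Section 5 and an example dimension, and is not part of the original formulation.

```python
import numpy as np

# Binary population of Equation (16): row i is golden jackal g_i,
# and entry G_j^i = 1 means that feature j is selected by individual i.
n_pop, n_dim = 20, 100                      # 20 jackals; 100 features is an example dimension
pop = np.random.randint(0, 2, size=(n_pop, n_dim))
```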

4.4. Computational Complexity

The complexity of the conventional GJO algorithm depends on several factors, including the population size N_pop, the solution dimension N_dim, and the number of iterations T_max. Either the exploration phase or the exploitation phase is performed in each iteration. Therefore, the overall time complexity of conventional GJO consists of the exploration and exploitation phases and is given as follows:
$$O(GJO) = O\big(T_{max} \cdot (O(exploration) + O(exploitation))\big) = O\big(T_{max} \cdot (N_{pop} \cdot N_{dim} + N_{pop} \cdot N_{dim})\big) = O(T_{max} \cdot N_{pop} \cdot N_{dim}) \tag{17}$$
Since the structure of the proposed IBGJO is similar to that of conventional GJO, the computational complexity of IBGJO is also O(T_max · N_pop · N_dim). As a result, for a given feature selection problem, IBGJO does not require noticeably more computation time than conventional GJO, as both algorithms possess equivalent computational complexity. Notably, the average execution time of IBGJO in the experimental results is better than that of conventional GJO; this may be attributed to the enhanced factors employed in IBGJO, which improve its search ability and promote faster convergence.

5. Experiments and Analysis

In this section, we conduct tests to evaluate the performance of the proposed IBGJO algorithm on feature selection problems. First, the datasets and setups used in the experiments are introduced. Then, the results obtained by IBGJO and several comparison algorithms are presented and analyzed.

5.1. Datasets and Setup

This subsection describes the datasets used in this article and the parameter settings of the experiments.

5.1.1. Benchmark Datasets

This section introduces the benchmark datasets used in the evaluations of the different algorithms. Because the UCI repository covers multiple fields, such as Life, Social and Physical, many research works use UCI datasets as benchmark data. For example, 10, 14, 16 and 20 UCI datasets were respectively selected as experimental data in [21,45,46,47]. Therefore, the datasets used in our experiments draw on those works and have been expanded to 28 datasets. By using these well-known datasets, we intend to facilitate comparisons with existing algorithms and provide a basis for future research. The primary information of these datasets is shown in Table 1.

5.1.2. Experiment Setup

In the feature selection experiments, we evaluate several algorithms, including BCS, BGWO, BHBA, BMPA, BGJO, and IBGJO. It should be noted that all algorithms use the same binary mechanism, and that BGJO is the binary version of the conventional GJO algorithm. The IBGJO parameters are based on those of the conventional GJO algorithm, which has only one adaptive coefficient vector; unlike the other algorithms, conventional GJO and IBGJO require no additional tuning. The critical parameter choices for these algorithms are presented in Table 2, with specific values based on prior evidence of consistently strong performance in the literature for each algorithm, enabling an effective comparison.
Moreover, because both the proposed IBGJO and the comparison algorithms are meta-heuristics, the population size and the number of iterations directly impact them. To guarantee a fair comparison, the population size and the number of iterations must be consistent across algorithms; in this study, they are set to 20 and 200, respectively. Additionally, to prevent random bias, each algorithm is independently run 30 times on the chosen datasets, as suggested by the central limit theorem. The experiments were run on an Intel(R) Core(TM) i9-12900KF CPU with 64 GB of RAM. The trials were implemented in Python 3.9.12 using a KNN classifier [48] (k = 5) based on the Euclidean distance. It is worth noting that, following a common approach in several previous works, 80% of the instances are used for training, while the remaining instances are reserved for testing. Moreover, in the fitness function, α and β are set to 0.99 and 0.01, respectively.
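The following Python sketch (hypothetical driver code; run_ibgjo stands in for an implementation of Algorithm 3 returning the best fitness of a run) illustrates how the 30 independent runs per dataset could be aggregated into the mean ± std values reported in the result tables.

```python
import numpy as np

# Hypothetical driver for the protocol of Section 5.1.2: 20 jackals, 200 iterations,
# and 30 independent runs per dataset.
N_POP, T_MAX, N_RUNS = 20, 200, 30

def evaluate_dataset(X, y, run_ibgjo):
    fitness = np.array([run_ibgjo(X, y, n_pop=N_POP, t_max=T_MAX, seed=s)
                        for s in range(N_RUNS)])
    return fitness.mean(), fitness.std()    # reported as "mean ± std" in the result tables
```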

5.2. Feature Selection Results

This section presents the feature selection results of the various algorithms in terms of average fitness function value, convergence speed, average accuracy, and average CPU time. The best results are shown in bold.

5.2.1. Performance Evaluation

To explicitly demonstrate the performance of the various algorithms, the fitness function values they achieved are shown in Table 3, which lists the average fitness function value and standard deviation (std) of each algorithm on each dataset. In terms of average fitness values on the 28 datasets, BCS, BGWO, BHBA, BMPA, BGJO, and IBGJO achieve the best performance on 3, 3, 5, 7, 9, and 14 datasets, respectively. This supports our conjecture that BGJO has good exploration ability but lacks exploitation performance, and shows that the improvement factors introduced into conventional BGJO are effective. Compared with conventional BGJO, IBGJO has an advantage in average fitness value on 21 datasets. Moreover, IBGJO obtains the best stds of the fitness values on 11 datasets, which means that IBGJO is more stable than the others for feature selection.
Due to space restrictions, the convergence figures are divided into three parts, and each curve is taken from the 15th run. The convergence rates of the various algorithms during the optimization processes are shown in Figure 5, Figure 6 and Figure 7. These figures demonstrate that the proposed IBGJO exhibits the best curves on 20 datasets and has the best convergence capability compared with all the other comparison algorithms. Overall, the proposed IBGJO performs better than the other comparison algorithms in solving the formulated feature selection problem. Note that the effectiveness of the different improvement factors is further verified and discussed in Section 5.3.

5.2.2. Features Selection Accuracy of Algorithms

The feature selection accuracy obtained by various algorithms is shown in Table 4. The IBGJO algorithm achieves the best average accuracies of feature selection results on 14 datasets. Moreover, IBGJO obtains better accuracy than conventional BGJO in 21 datasets. Thus, compared with other algorithms, the IBGJO algorithm has the best performance in terms of feature selection accuracies on these selected datasets. The reason could be that the improved factors can balance the exploration and exploitation abilities, improving the algorithm’s performance. However, it is crucial to recognize that achieving optimal results for accuracy and the number of selected features is a challenging tradeoff that varies across datasets.
Therefore, it can be concluded that the proposed IBGJO algorithm displays superior overall performance in feature selection across the selected datasets as compared to the other algorithms according to Table 4 and Table 5.

5.2.3. Number of Selected Features

The numbers of features selected by the various techniques are displayed in Table 5, which, like the accuracy results, presents numerical statistics. BMPA obtains the best average number of selected features on the majority of the datasets (20 of 28), which may be regarded as the best result in this test compared with the other algorithms. Meanwhile, the number of features selected by IBGJO is smaller than that of BGJO on 15 datasets. It is important to note that there exists a tradeoff between accuracy and the number of selected features, making it challenging to achieve optimal results for both objectives on every dataset.

5.2.4. Algorithm Execution Time

The average running time of all the algorithms is shown in Table 6. Based on these data, it is evident that IBGJO has an advantage in execution time: among the 28 datasets, it converged in the least average time on 19 datasets. This means that our algorithm has higher efficiency and faster convergence, and can select the best subset of features in less time, thereby improving the performance of the model. This result shows that our algorithm has high practicability and feasibility in real applications.

5.3. Effectiveness of the Improved Factors

In this section, we conduct experiments to evaluate the effectiveness of the factors introduced in IBGJO. To observe whether these factors can improve the performance of BGJO, we use BGJO, BGJO with the CTM mechanism (T-BGJO), BGJO with CS (C-BGJO), and BGJO with both the CTM mechanism and CS (IBGJO) to solve the formulated feature selection problem. The tests are conducted on nine selected datasets: Arrhythmia, Diabets, Heart-StatLog, Ionosphere, Krvskp, Lung, Parkinsons, Thyroid and WDBC. The numerical findings produced by these algorithms are listed in Table 7. Overall, all algorithms obtain the same results on the Diabets dataset, which may be because this dataset has the lowest solution dimension, making it easy to solve. Furthermore, the convergence rates of the different improvement factors are shown in Figure 8. The remaining outcomes are discussed in detail as follows.

5.3.1. Effectiveness of the Chaotic Tent Map (CTM) Mechanism

It can be seen from Table 7 that, compared with the conventional BGJO algorithm, the T-BGJO algorithm does not have a large advantage in fitness function value or classification accuracy. However, T-BGJO can efficiently select fewer features than BGJO. Therefore, the CTM mechanism has an advantage in the number of selected features and, combined with the cosine similarity factor, helps IBGJO obtain better performance.

5.3.2. Effectiveness of Cosine Similarity Position Update

On most datasets, especially the medium-dimensional ones, C-BGJO outperforms BGJO and T-BGJO in terms of the accuracy and fitness function values obtained, as shown in Table 7. This is due to the ability of the proposed CS position-updating mechanism to adaptively modify the search scope and thereby enhance BGJO's exploration capability. Note that, compared with the location update mechanism of conventional BGJO, CS requires additional calculation in each iteration; however, the search ability of IBGJO is more robust with CS. Therefore, it may increase the convergence time.

5.3.3. Effectiveness of CTM and CS

It can be seen from Table 7 that the CTM mechanism effectively improves the representativeness and diversity of the initial population through a chaotic tent map and prevents the algorithm from falling into a local optimum. Using CS as the jackal position update strategy in the golden jackal optimization algorithm accelerates convergence. The combination of the two enhancement factors effectively improves the performance of BGJO in the field of feature selection.
To summarize, incorporating the two enhancement factors and the binary mechanism effectively elevates the performance of the conventional BGJO algorithm and makes it well suited for feature selection. Furthermore, these components exhibit a complementary relationship. For instance, utilizing the CTM mechanism alone on small datasets may cause the algorithm to encounter local optima frequently; therefore, incorporating CS is essential to address this problem.

5.4. Limitation of IBGJO

Although the experimental results show that the proposed IBGJO algorithm outperforms the comparison algorithms, it still has some limitations. One limitation of the IBGJO algorithm is its sensitivity to parameter settings, which requires careful tuning for optimal performance. Additionally, the scalability of IBGJO to large-scale or high-dimensional datasets is a concern, as its computational cost may become prohibitive. The generalization of IBGJO to different domains and problem types needs further exploration, as specific data characteristics may influence its performance. Furthermore, the interpretability of the selected feature subsets is not guaranteed, as the algorithm prioritizes classification performance over intuitive feature combinations. The effectiveness and applicability of IBGJO could be improved by first reducing the dimensionality of the original feature set before applying IBGJO for feature selection, and by optimizing the algorithm parameters.

6. Conclusions

The focus of this research work is to improve the classification performance of machine learning by addressing the issue of feature selection. An improved version of the BGJO algorithm, referred to as the IBGJO algorithm, is proposed to solve the feature selection problem. The IBGJO algorithm incorporates the CTM mechanism, CS location updating mechanism, and S-shape binary mechanism, designed to improve the performance of conventional BGJO and make it suitable for feature selection problems. Utilizing these improved factors allows the algorithm to balance its exploitation and exploration abilities while maintaining population diversity.
By using the improved factors, the algorithm balances its exploitation and exploration abilities while maintaining diversity within the population. We conducted experiments on 28 well-known datasets and found that IBGJO outperforms other state-of-the-art algorithms, such as BCS, BGWO, BHBA, BMPA and BGJO, in terms of feature selection. We also evaluated the effectiveness of the improvement factors and found that they help to enhance the performance of the conventional golden jackal optimization algorithm. In the future, we plan to propose additional ways of updating the population positions and to combine them with other evolutionary algorithms to tackle a broader range of optimization problems.

Author Contributions

Methodology, K.Z.; software, F.M. and J.J.; Validation, G.S. and K.Z.; writing—original draft preparation, K.Z. and G.S.; writing review and editing, G.S., Y.L. and K.Z.; visualization, K.Z. and J.J.; supervision, Y.L. and S.G.; funding acquisition, Y.L. and G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This study is supported in part by the National Natural Science Foundation of China (62172186, 62002133, 61872158, 61806083), in part by the Science and Technology Development Plan Project of Jilin Province (20190701019GH, 20190701002GH, 20210101183JC, 20210201072GX, 20220101101JC), and in part by the Young Science and Technology Talent Lift Project of Jilin Province (QT202013).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Acc   Average accuracy
CS    Cosine similarity
EA    Evolutionary algorithm
FS    Feature selection
LD    Linear dichroism
MJ    Male golden jackal
FMJ   Female golden jackal
Std   Standard deviation

References

  1. Cichos, F.; Gustavsson, K.; Mehlig, B.; Volpe, G. Machine learning for active matter. Nat. Mach. Intell. 2020, 2, 94–103. [Google Scholar]
  2. Alber, M.; Buganza Tepole, A.; Cannon, W.R.; De, S.; Dura-Bernal, S.; Garikipati, K.; Karniadakis, G.; Lytton, W.W.; Perdikaris, P.; Petzold, L.; et al. Integrating machine learning and multiscale modeling—Perspectives, challenges, and opportunities in the biological, biomedical, and behavioral sciences. NPJ Digit. Med. 2019, 2, 115. [Google Scholar]
  3. Proto, S.; Di Corso, E.; Ventura, F.; Cerquitelli, T. Useful ToPIC: Self-tuning strategies to enhance latent Dirichlet allocation. In Proceedings of the 2018 IEEE International Congress on Big Data (BigData Congress), San Francisco, CA, USA, 2–7 July 2018; pp. 33–40. [Google Scholar]
  4. Faris, H.; Heidari, A.A.; Ala’M, A.Z.; Mafarja, M.; Aljarah, I.; Eshtay, M.; Mirjalili, S. Time-varying hierarchical chains of salps with random weight networks for feature selection. Expert Syst. Appl. 2020, 140, 112898. [Google Scholar]
  5. Yu, L.; Liu, H. Efficient feature selection via analysis of relevance and redundancy. J. Mach. Learn. Res. 2004, 5, 1205–1224. [Google Scholar]
  6. Daraio, E.; Di Corso, E.; Cerquitelli, T.; Chiusano, S. Characterizing air-quality data through unsupervised analytics methods. In Proceedings of the European Conference on Advances in Databases and Information Systems, Barcelona, Spain, 4–7 August 2018; pp. 205–217. [Google Scholar]
  7. Solorio-Fernández, S.; Carrasco-Ochoa, J.A.; Martínez-Trinidad, J.F. A review of unsupervised feature selection methods. Artif. Intell. Rev. 2020, 53, 907–948. [Google Scholar]
  8. Li, J.; Kang, H.; Sun, G.; Feng, T.; Li, W.; Zhang, W.; Ji, B. IBDA: Improved binary dragonfly algorithm with evolutionary population dynamics and adaptive crossover for feature selection. IEEE Access 2020, 8, 108032–108051. [Google Scholar]
  9. Jović, A.; Brkić, K.; Bogunović, N. A review of feature selection methods with applications. In Proceedings of the 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 25–29 May 2015; pp. 1200–1205. [Google Scholar]
  10. Chandrashekar, G.; Sahin, F. A survey on feature selection methods. Comput. Electr. Eng. 2014, 40, 16–28. [Google Scholar]
  11. Aličković, E.; Subasi, A. Breast cancer diagnosis using GA feature selection and Rotation Forest. Neural Comput. Appl. 2017, 28, 753–763. [Google Scholar]
  12. Huda, R.K.; Banka, H. Efficient feature selection and classification algorithm based on PSO and rough sets. Neural Comput. Appl. 2019, 31, 4287–4303. [Google Scholar]
  13. Osanaiye, O.; Cai, H.; Choo, K.K.R.; Dehghantanha, A.; Xu, Z.; Dlodlo, M. Ensemble-based multi-filter feature selection method for DDoS detection in cloud computing. EURASIP J. Wirel. Commun. Netw. 2016, 2016, 130. [Google Scholar]
  14. Agrawal, P.; Abutarboush, H.F.; Ganesh, T.; Mohamed, A.W. Metaheuristic algorithms on feature selection: A survey of one decade of research (2009–2019). IEEE Access 2021, 9, 26766–26791. [Google Scholar]
  15. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  16. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar]
  17. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar]
  18. Too, J.; Liang, G.; Chen, H. Memory-based Harris hawk optimization with learning agents: A feature selection approach. Eng. Comput. 2021, 38, 4457–4478. [Google Scholar]
  19. Hussain, K.; Neggaz, N.; Zhu, W.; Houssein, E.H. An efficient hybrid sine-cosine Harris hawks optimization for low and high-dimensional feature selection. Expert Syst. Appl. 2021, 176, 114778. [Google Scholar]
  20. Hegazy, A.E.; Makhlouf, M.; El-Tawel, G.S. Improved salp swarm algorithm for feature selection. J. King Saud-Univ.-Comput. Inf. Sci. 2020, 32, 335–344. [Google Scholar]
  21. Dadaneh, B.Z.; Markid, H.Y.; Zakerolhosseini, A. Unsupervised probabilistic feature selection using ant colony optimization. Expert Syst. Appl. 2016, 53, 27–42. [Google Scholar]
  22. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar]
  23. Hou, Y.; Li, J.; Yu, H.; Li, Z. BIFFOA: A novel binary improved fruit fly algorithm for feature selection. IEEE Access 2019, 7, 81177–81194. [Google Scholar]
  24. Sayed, S.A.F.; Nabil, E.; Badr, A. A binary clonal flower pollination algorithm for feature selection. Pattern Recognit. Lett. 2016, 77, 21–27. [Google Scholar]
  25. Baldo, A.; Boffa, M.; Cascioli, L.; Fadda, E.; Lanza, C.; Ravera, A. The polynomial robust knapsack problem. Eur. J. Oper. Res. 2023, 305, 1424–1434. [Google Scholar]
  26. Aghdam, M.H.; Ghasem-Aghaee, N.; Basiri, M.E. Text feature selection using ant colony optimization. Expert Syst. Appl. 2009, 36, 6843–6853. [Google Scholar]
  27. Nakamura, R.Y.M.; Pereira, L.A.M.; Costa, K.A.; Rodrigues, D.; Papa, J.P.; Yang, X.-S. BBA: A binary bat algorithm for feature selection. In Proceedings of the Graphics, Patterns and Images: 25th SIBGRAPI Conference, SIBGRAPI 2012, Ouro Preto, Brazil, 22–25 August 2012; pp. 291–297. [Google Scholar]
  28. Li, W.; Kang, H.; Feng, T.; Li, J.; Yue, Z.; Sun, G. Swarm Intelligence-Based Feature Selection: An Improved Binary Grey Wolf Optimization Method. In Proceedings of the Knowledge Science, Engineering and Management: 14th International Conference, KSEM 2021, Tokyo, Japan, 14–16 August 2021; pp. 98–110. [Google Scholar]
  29. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. BGSA: Binary gravitational search algorithm. Nat. Comput. 2010, 9, 727–745. [Google Scholar]
  30. Al-Tashi, Q.; Kadir, S.J.A.; Rais, H.M.; Mirjalili, S.; Alhussian, H. Binary optimization using hybrid grey wolf optimization for feature selection. IEEE Access 2019, 7, 39496–39508. [Google Scholar]
  31. Arora, S.; Anand, P. Binary butterfly optimization approaches for feature selection. Expert Syst. Appl. 2019, 116, 147–160. [Google Scholar]
  32. Xue, Y.; Xue, B.; Zhang, M. Self-adaptive particle swarm optimization for large-scale feature selection in classification. In ACM Transactions on Knowledge Discovery from Data (TKDD); ACM Press: New York, NY, USA, 2019; Volume 13, pp. 1–27. [Google Scholar]
  33. Zhang, Y.; Cheng, S.; Shi, Y.; Gong, D.w.; Zhao, X. Cost-sensitive feature selection using two-archive multi-objective artificial bee colony algorithm. Expert Syst. Appl. 2019, 137, 46–58. [Google Scholar]
  34. Tao, Z.; Huiling, L.; Wenwen, W.; Xia, Y. GA-SVM based feature selection and parameter optimization in hospitalization expense modeling. Appl. Soft Comput. 2019, 75, 323–332. [Google Scholar]
  35. Zhang, J.; Luo, Z.; Li, C.; Zhou, C.; Li, S. Manifold regularized discriminative feature selection for multi-label learning. Pattern Recognit. 2019, 95, 136–150. [Google Scholar]
  36. Dey, A.; Chattopadhyay, S.; Singh, P.K.; Ahmadian, A.; Ferrara, M.; Sarkar, R. A hybrid meta-heuristic feature selection method using golden ratio and equilibrium optimization algorithms for speech emotion recognition. IEEE Access 2020, 8, 200953–200970. [Google Scholar]
  37. Wang, M.; Chen, H. Chaotic multi-swarm whale optimizer boosted support vector machine for medical diagnosis. Appl. Soft Comput. 2020, 88, 105946. [Google Scholar]
  38. Anter, A.M.; Ali, M. Feature selection strategy based on hybrid crow search optimization algorithm integrated with chaos theory and fuzzy c-means algorithm for medical diagnosis problems. Soft Comput. 2020, 24, 1565–1584. [Google Scholar]
  39. Devi, R.M.; Premkumar, M.; Kiruthiga, G.; Sowmya, R. IGJO: An Improved Golden Jackel Optimization Algorithm Using Local Escaping Operator for Feature Selection Problems. Neural Process. Lett. 2023, 14, 1–89. [Google Scholar]
  40. Abdollahzadeh, B.; Gharehchopogh, F.S. A multi-objective optimization algorithm for feature selection problems. Eng. Comput. 2022, 38, 1845–1863. [Google Scholar]
  41. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar]
  42. Sayed, G.I.; Tharwat, A.; Hassanien, A.E. Chaotic dragonfly algorithm: An improved metaheuristic algorithm for feature selection. Appl. Intell. 2019, 49, 188–205. [Google Scholar]
  43. Zhang, K.; Liu, Y.; Mei, F.; Jin, J.; Wang, Y. Boost Correlation Features with 3D-MiIoU-Based Camera-LiDAR Fusion for MODT in Autonomous Driving. Remote Sens. 2023, 15, 874. [Google Scholar]
  44. Ghosh, K.K.; Guha, R.; Bera, S.K.; Kumar, N.; Sarkar, R. S-shaped versus V-shaped transfer functions for binary Manta ray foraging optimization in feature selection problem. Neural Comput. Appl. 2021, 33, 11027–11041. [Google Scholar]
  45. Wang, P.; Xue, B.; Liang, J.; Zhang, M. Multiobjective differential evolution for feature selection in classification. IEEE Trans. Cybern. 2021, 53, 4579–4593. [Google Scholar]
  46. Wang, P.; Xue, B.; Liang, J.; Zhang, M. Feature clustering-Assisted feature selection with differential evolution. Pattern Recognit. 2023, 140, 109523. [Google Scholar]
  47. Xu, H.; Xue, B.; Zhang, M. A duplication analysis-based evolutionary algorithm for biobjective feature selection. IEEE Trans. Evol. Comput. 2020, 25, 205–218. [Google Scholar]
  48. Deng, Z.; Zhu, X.; Cheng, D.; Zong, M.; Zhang, S. Efficient kNN classification algorithm for big data. Neurocomputing 2016, 195, 143–148. [Google Scholar]
Figure 1. The stages of golden jackal pair hunting.
Figure 2. Attacking and searching for prey.
Figure 3. Random and CTM mechanisms for initializing the population, where red dots represent the random mechanism and blue dots represent the CTM mechanism.
Figure 4. Sigmoid binary transfer function.
Figure 5. Convergence rates obtained by different algorithms (Part 1).
Figure 6. Convergence rates obtained by different algorithms (Part 2).
Figure 7. Convergence rates obtained by different algorithms (Part 3).
Figure 8. Convergence rate comparisons between conventional BGJO, T-BGJO, C-BGJO and IBGJO on different datasets.
Table 1. Benchmark datasets.
No. | Dataset | Instances | Features | Classes | Attribute Type | Area
D1 | Arrhythmia | 452 | 278 | 16 | Categorical, Integer, Real | Life
D2 | Breastcancer | 699 | 10 | 5 | Integer | Life
D3 | BreastEW | 569 | 30 | 2 | Real | Life
D4 | Congress | 435 | 16 | 2 | Categorical | Social
D5 | Connectionist | 208 | 60 | 2 | Real | Physical
D6 | Dermatology | 366 | 34 | 6 | Categorical, Integer | Life
D7 | Diabets | 768 | 8 | 7 | Categorical, Integer | Computer
D8 | German | 1000 | 24 | 2 | Categorical, Integer | Business
D9 | HeartEW | 270 | 13 | 2 | Categorical, Real | Life
D10 | Heart-StatLog | 270 | 13 | 2 | Categorical, Real | Life
D11 | Hillvalley | 606 | 100 | 2 | Real | N/A
D12 | Ionosphere | 351 | 34 | 2 | Integer, Real | Life
D13 | Krvskp | 3196 | 36 | 2 | Categorical | Game
D14 | Low-res-spect | 531 | 102 | 9 | Integer, Real | Physical
D15 | Lung | 72 | 326 | 7 | Integer | Life
D16 | Lung-Cancer | 32 | 56 | 3 | Integer | Life
D17 | Lymphography | 148 | 18 | 8 | Categorical | Physical
D18 | Parkinsons | 1040 | 26 | 2 | Integer, Real | Life
D19 | Planning | 182 | 13 | 2 | Real | Computer
D20 | Sonar | 208 | 60 | 2 | Real | Physical
D21 | Spect | 267 | 22 | 2 | Categorical | Life
D22 | Steel-plates | 1941 | 27 | 7 | Integer, Real | Physical
D23 | Thyroid | 7200 | 21 | 3 | Categorical, Real | Life
D24 | Tic-tac-toe | 958 | 9 | 2 | Categorical | Game
D25 | WDBC | 569 | 31 | 2 | Real | Life
D26 | Wine | 178 | 13 | 3 | Integer, Real | Physical
D27 | Zoo | 101 | 16 | 7 | Categorical, Integer | Life
D28 | RNA-Seq | 802 | 16384 | 4 | Real | Life
Table 2. Key parameters of different algorithms.
No. | Algorithm | Parameters
1 | BCS | Discovery probability = 0.25, α = 1
2 | BGWO | α = [2, 0]
3 | HBA | β = 6, C = 2, vec_flag = [1, −1]
4 | MPA | P = 0.5, FADS = 0.2
5 | BGJO | e0 = [−1, 1]
6 | IBGJO | e0 = [−1, 1]
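For BGJO and IBGJO, e0 ∈ [−1, 1] in Table 2 is the random component of the prey's evading energy, which switches the jackals between exploration and exploitation. The sketch below follows the formulation commonly given for the original GJO (E = E1·E0 with E1 decaying linearly and c1 = 1.5); these specifics are assumptions from the base algorithm, not values quoted from this paper.

```python
import numpy as np

def evading_energy(t, T, c1=1.5, rng=None):
    """Prey evading energy E = E1 * E0 (a sketch of the base-GJO control parameter).

    E0 is drawn uniformly from [-1, 1] (the e0 row in Table 2) and
    E1 = c1 * (1 - t/T) decays linearly over the iterations.
    """
    rng = rng or np.random.default_rng()
    e0 = rng.uniform(-1.0, 1.0)
    e1 = c1 * (1.0 - t / T)
    return e1 * e0

# |E| > 1 tends to trigger searching (exploration); |E| <= 1 attacking (exploitation).
rng = np.random.default_rng(0)
print([round(evading_energy(t, 100, rng=rng), 3) for t in (0, 50, 99)])
```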
Table 3. Average fitness function values (mean ± standard deviation) obtained by different algorithms.
Dataset | BCS | BGWO | BHBA | BMPA | BGJO | IBGJO
D10.3526 ± 0.00420.3440 ± 0.00800.3489 ± 0.00520.3491 ± 0.00500.3477 ± 0.00410.3482 ± 0.0038
D20.0268 ± 0.00000.0297 ± 0.00290.0268 ± 0.00000.0268 ± 0.00000.0268 ± 0.00000.0272 ± 0.0000
D30.0452 ± 0.00150.0505 ± 0.00540.0440 ± 0.00060.0433 ± 0.00090.0437 ± 0.00060.0433 ± 0.0009
D40.0274 ± 0.00250.0309 ± 0.00450.0256 ± 0.00200.0259 ± 0.00280.0253 ± 0.00200.0269 ± 0.0018
D50.1427 ± 0.00550.1391 ± 0.01310.1382 ± 0.00720.1383 ± 0.00950.1378 ± 0.00700.1362 ± 0.0095
D60.0165 ± 0.00170.0175 ± 0.00320.0149 ± 0.00130.0169 ± 0.00170.0144 ± 0.00170.0139 ± 0.0017
D70.2557 ± 0.00000.2588 ± 0.00420.2557 ± 0.00000.2557 ± 0.00000.2557 ± 0.00000.2557 ± 0.0000
D80.2523 ± 0.00370.2572 ± 0.00760.2517 ± 0.00270.2535 ± 0.00320.2503 ± 0.00300.2505 ± 0.0035
D90.1629 ± 0.00660.1733 ± 0.01040.1590 ± 0.00520.1587 ± 0.00440.1543 ± 0.00400.1537 ± 0.0046
D100.1463 ± 0.00540.1595 ± 0.02380.1392 ± 0.00300.1414 ± 0.00490.1450 ± 0.00250.1440 ± 0.0024
D110.4113 ± 0.00310.3995 ± 0.00640.4089 ± 0.00390.4076 ± 0.00410.4088 ± 0.00420.4079 ± 0.0042
D120.1374 ± 0.00480.1326 ± 0.01120.1325 ± 0.00740.1267 ± 0.00560.1307 ± 0.00580.1293 ± 0.0070
D130.0361 ± 0.00390.0373 ± 0.00530.0336 ± 0.00240.0379 ± 0.00290.0325 ± 0.00250.0316 ± 0.0026
D140.1170 ± 0.00160.1148 ± 0.00350.1156 ± 0.00210.1154 ± 0.00190.1157 ± 0.00230.1148 ± 0.0018
D150.1316 ± 0.00690.1265 ± 0.01710.1230 ± 0.00930.1224 ± 0.00880.1216 ± 0.00980.1197 ± 0.0107
D160.0386 ± 0.01130.0379 ± 0.01160.0327 ± 0.00720.0312 ± 0.00610.0302 ± 0.00060.0301 ± 0.0005
D170.5696 ± 0.00880.5837 ± 0.01670.5618 ± 0.00690.5639 ± 0.01040.5651 ± 0.00550.5645 ± 0.0053
D180.0988 ± 0.01160.1129 ± 0.00710.0962 ± 0.01080.0879 ± 0.00770.0903 ± 0.01040.0882 ± 0.0085
D190.2752 ± 0.00540.2992 ± 0.02330.2719 ± 0.00110.2749 ± 0.00520.2716 ± 0.00000.2716 ± 0.0000
D200.1147 ± 0.00570.1094 ± 0.01370.1095 ± 0.00590.1092 ± 0.00800.1070 ± 0.00810.1035 ± 0.0078
D210.2766 ± 0.00670.2792 ± 0.01210.2707 ± 0.00760.2736 ± 0.00750.2678 ± 0.00710.2708 ± 0.0055
D220.3193 ± 0.03240.4235 ± 0.10620.3286 ± 0.03440.2953 ± 0.02720.2687 ± 0.01260.2731 ± 0.0140
D230.0276 ± 0.00190.0314 ± 0.00490.0248 ± 0.00140.0252 ± 0.00160.0239 ± 0.00140.0238 ± 0.0014
D240.1523 ± 0.00000.1541 ± 0.00980.1523 ± 0.00000.1523 ± 0.00000.1564 ± 0.00000.1564 ± 0.0000
D250.0462 ± 0.00180.0537 ± 0.00820.0443 ± 0.00070.0437 ± 0.00070.0440 ± 0.00060.0437 ± 0.0008
D260.0517 ± 0.00330.0621 ± 0.01280.0491 ± 0.00210.0494 ± 0.00210.0483 ± 0.00050.0483 ± 0.0004
D270.0534 ± 0.00900.0794 ± 0.01800.0463 ± 0.00610.0512 ± 0.00700.0442 ± 0.00470.0634 ± 0.0051
D280.6865 ± 0.00240.6634 ± 0.00450.6844 ± 0.00240.6826 ± 0.00290.6828 ± 0.00300.6817 ± 0.0027
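The fitness reported in Table 3 combines classification error with the fraction of selected features. The weighting below (α = 0.99) is inferred from the reported numbers rather than quoted from the text; for example, D7 with accuracy 0.7468 and 4 of 8 selected features gives 0.99 × 0.2532 + 0.01 × 0.5 ≈ 0.2557, matching the table.

```python
def fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Wrapper fitness = alpha * error + (1 - alpha) * (selected / total features)."""
    return alpha * error_rate + (1.0 - alpha) * (n_selected / n_total)

# Consistency check against Tables 3-5 for D7 (BGJO): accuracy 0.7468, 4 of 8 features
print(round(fitness(1 - 0.7468, 4, 8), 4))  # 0.2557
```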
Table 4. Classification accuracies (mean ± standard deviation) achieved by different algorithms.
Dataset | BCS | BGWO | BHBA | BMPA | BGJO | IBGJO
D10.6501 ± 0.00430.6598 ± 0.00810.6539 ± 0.00530.6529 ± 0.00500.6551 ± 0.00420.6546 ± 0.0038
D20.9800 ± 0.00000.9771 ± 0.00280.9800 ± 0.00000.9800 ± 0.00000.9800 ± 0.00000.9786 ± 0.0000
D30.9594 ± 0.00100.9540 ± 0.00530.9598 ± 0.00040.9599 ± 0.00070.9598 ± 0.00050.9603 ± 0.0009
D40.9777 ± 0.00230.9748 ± 0.00410.9795 ± 0.00150.9790 ± 0.00250.9797 ± 0.00160.9777 ± 0.0015
D50.8617 ± 0.00540.8662 ± 0.01300.8663 ± 0.00730.8654 ± 0.00950.8668 ± 0.00690.8683 ± 0.0098
D60.9897 ± 0.00200.9900 ± 0.00310.9915 ± 0.00150.9892 ± 0.00180.9921 ± 0.00170.9923 ± 0.0018
D70.7468 ± 0.00000.7445 ± 0.00430.7468 ± 0.00000.7468 ± 0.00000.7468 ± 0.00000.7468 ± 0.0000
D80.7518 ± 0.00390.7476 ± 0.00770.7522 ± 0.00280.7497 ± 0.00340.7538 ± 0.00320.7535 ± 0.0037
D90.8394 ± 0.00590.8299 ± 0.00980.8430 ± 0.00480.8427 ± 0.00430.8473 ± 0.00380.8496 ± 0.0037
D100.8567 ± 0.00570.8444 ± 0.02350.8646 ± 0.00350.8617 ± 0.00550.8569 ± 0.00180.8575 ± 0.0019
D110.5907 ± 0.00310.6037 ± 0.00620.5931 ± 0.00390.5937 ± 0.00410.5932 ± 0.00420.5942 ± 0.0042
D120.8661 ± 0.00470.8715 ± 0.01060.8710 ± 0.00720.8758 ± 0.00540.8727 ± 0.00550.8738 ± 0.0066
D130.9699 ± 0.00400.9698 ± 0.00520.9725 ± 0.00240.9678 ± 0.00290.9736 ± 0.00240.9746 ± 0.0026
D140.8881 ± 0.00150.8910 ± 0.00350.8894 ± 0.00210.8887 ± 0.00220.8893 ± 0.00220.8902 ± 0.0018
D150.8733 ± 0.00700.8788 ± 0.01740.8821 ± 0.00950.8817 ± 0.00900.8833 ± 0.01000.8854 ± 0.0109
D160.9667 ± 0.01180.9675 ± 0.01150.9725 ± 0.00750.9733 ± 0.00620.9750 ± 0.00000.9750 ± 0.0000
D170.4302 ± 0.00870.4164 ± 0.01650.4380 ± 0.00690.4353 ± 0.01030.4344 ± 0.00560.4356 ± 0.0056
D180.9050 ± 0.01180.8905 ± 0.00720.9073 ± 0.01120.9155 ± 0.00800.9132 ± 0.01090.9153 ± 0.0090
D190.7275 ± 0.00570.7039 ± 0.02330.7312 ± 0.00130.7279 ± 0.00560.7316 ± 0.00000.7316 ± 0.0000
D200.8903 ± 0.00560.8963 ± 0.01360.8956 ± 0.00610.8949 ± 0.00810.8978 ± 0.00810.9016 ± 0.0079
D210.7265 ± 0.00660.7243 ± 0.01180.7322 ± 0.00770.7291 ± 0.00770.7353 ± 0.00710.7323 ± 0.0054
D220.6826 ± 0.03250.5773 ± 0.10760.6732 ± 0.03470.7060 ± 0.02730.7332 ± 0.01250.7289 ± 0.0140
D230.9765 ± 0.00180.9733 ± 0.00470.9791 ± 0.00140.9787 ± 0.00160.9800 ± 0.00120.9800 ± 0.0012
D240.8563 ± 0.00000.8543 ± 0.01030.8563 ± 0.00000.8563 ± 0.00000.8521 ± 0.00000.8521 ± 0.0000
D250.9585 ± 0.00150.9508 ± 0.00820.9596 ± 0.00000.9596 ± 0.00050.9598 ± 0.00040.9599 ± 0.0006
D260.9528 ± 0.00310.9428 ± 0.01250.9548 ± 0.00190.9546 ± 0.00210.9556 ± 0.00000.9556 ± 0.0000
D270.9524 ± 0.00900.9270 ± 0.01830.9597 ± 0.00650.9539 ± 0.00740.9618 ± 0.00500.9412 ± 0.0046
D280.3130 ± 0.00240.3379 ± 0.00460.3152 ± 0.00250.3160 ± 0.00290.3168 ± 0.00300.3179 ± 0.0028
Table 5. Number of selected features (mean ± standard deviation) obtained by different algorithms.
Dataset | BCS | BGWO | BHBA | BMPA | BGJO | IBGJO
D1174.07 ± 3.05200.67 ± 10.34174.30 ± 8.14151.73 ± 11.51175.03 ± 8.78175.13 ± 7.79
D27.00 ± 0.007.07 ± 0.737.00 ± 0.007.00 ± 0.007.00 ± 0.006.00 ± 0.00
D314.93 ± 2.4314.67 ± 2.6112.57 ± 1.8211.03 ± 1.8711.80 ± 1.6911.87 ± 1.31
D48.60 ± 1.629.40 ± 1.438.47 ± 1.658.13 ± 1.508.37 ± 1.507.80 ± 1.06
D535.20 ± 4.3739.67 ± 4.1135.43 ± 3.1430.37 ± 3.1135.80 ± 3.7834.63 ± 3.81
D621.63 ± 2.6325.73 ± 2.1422.23 ± 1.8421.07 ± 2.1022.37 ± 2.0421.63 ± 2.24
D74.00 ± 0.004.73 ± 0.734.00 ± 0.004.00 ± 0.004.00 ± 0.004.00 ± 0.00
D815.83 ± 2.1917.53 ± 2.0015.30 ± 1.9913.73 ± 1.8815.87 ± 2.1315.53 ± 1.55
D95.10 ± 1.196.33 ± 1.304.60 ± 0.883.80 ± 0.794.07 ± 0.586.23 ± 1.61
D105.70 ± 0.907.17 ± 1.046.60 ± 0.805.90 ± 1.334.30 ± 1.213.90 ± 0.96
D1160.43 ± 6.3271.83 ± 4.3160.30 ± 4.4552.93 ± 5.3060.60 ± 4.5960.83 ± 4.18
D1216.53 ± 3.1918.23 ± 2.7816.20 ± 2.9112.70 ± 2.2515.77 ± 2.4514.73 ± 3.02
D1322.63 ± 2.1226.57 ± 2.4922.87 ± 2.0321.57 ± 2.7322.87 ± 1.8023.13 ± 2.50
D1461.77 ± 4.4768.37 ± 4.3161.03 ± 4.5652.53 ± 4.7960.67 ± 4.9561.03 ± 5.63
D15200.27 ± 11.05208.70 ± 14.09203.43 ± 8.02170.80 ± 11.77197.43 ± 11.71202.13 ± 8.95
D1631.53 ± 3.7432.03 ± 3.8230.57 ± 2.5627.13 ± 3.2030.70 ± 3.1430.03 ± 2.64
D179.97 ± 1.7010.77 ± 1.619.70 ± 1.328.77 ± 1.829.37 ± 1.3810.20 ± 1.42
D1810.87 ± 1.7710.37 ± 2.0110.27 ± 1.599.87 ± 1.8610.00 ± 1.8910.17 ± 1.86
D196.57 ± 0.507.20 ± 1.426.93 ± 0.256.60 ± 0.497.00 ± 0.007.00 ± 0.00
D2036.80 ± 2.7940.57 ± 3.8836.53 ± 3.9431.17 ± 2.8834.83 ± 3.2736.30 ± 3.59
D2112.83 ± 1.2713.87 ± 1.6112.40 ± 1.4512.00 ± 1.8312.63 ± 1.5012.77 ± 1.55
D2213.67 ± 2.0113.37 ± 1.9913.80 ± 2.6611.33 ± 1.7812.40 ± 1.7112.63 ± 1.97
D238.97 ± 1.2810.50 ± 1.208.63 ± 1.388.60 ± 0.958.67 ± 1.328.27 ± 1.20
D249.00 ± 0.008.93 ± 0.369.00 ± 0.009.00 ± 0.009.00 ± 0.009.00 ± 0.00
D2515.67 ± 2.3715.57 ± 2.4213.50 ± 2.0611.57 ± 1.7112.80 ± 1.6112.23 ± 1.50
D266.47 ± 0.767.03 ± 1.025.67 ± 0.605.83 ± 0.905.63 ± 0.675.53 ± 0.57
D2710.00 ± 1.5311.30 ± 1.4910.17 ± 1.249.00 ± 1.2110.27 ± 0.748.33 ± 1.58
D2810,531.2 ± 442.1012,951.73 ± 384.5510,605.6 ± 75.168971.07 ± 365.8710,548.87 ± 109.2410,511.37 ± 164.68
Table 6. Average CPU time in seconds (mean ± standard deviation) occupied by different algorithms.
Dataset | BCS | BGWO | BHBA | BMPA | BGJO | IBGJO
D194.33 ± 0.89588.37 ± 5.6892.33 ± 1.2294.50 ± 1.10449.13 ± 7.94115.36 ± 12.48
D297.03 ± 1.37121.97 ± 1.2597.24 ± 0.0896.84 ± 0.08111.89 ± 2.3479.24 ± 5.11
D397.64 ± 2.10157.52 ± 2.7698.31 ± 0.7399.59 ± 1.05139.61 ± 1.83124.48 ± 10.02
D484.62 ± 0.30116.75 ± 1.3085.76 ± 0.5483.37 ± 0.09108.24 ± 0.2272.70 ± 8.21
D5389.48 ± 70.34234.38 ± 46.57423.26 ± 68.75315.16 ± 75.90471.42 ± 70.2985.94 ± 8.25
D669.31 ± 0.20135.53 ± 1.9371.70 ± 0.8470.96 ± 0.09115.97  ± 1.5489.77 ± 3.83
D7100.13 ± 0.11122.23 ± 1.44102.85 ± 0.77100.28 ± 0.11112.59 ± 0.1097.86 ± 14.02
D8137.66 ± 0.98192.54 ± 2.72140.54 ± 1.04135.96 ± 0.53174.12 ± 0.63125.01 ± 16.93
D965.53 ± 1.3290.74 ± 0.5165.83 ± 0.2265.31 ± 0.0482.29 ± 0.3865.51 ± 8.20
D1066.23 ± 0.0392.81 ± 0.7867.01 ± 0.0566.53 ± 0.0861.54 ± 6.9458.60 ± 3.28
D1196.49 ± 0.83278.54 ± 3.7097.79 ± 0.8998.97 ± 0.49228.26 ± 4.1594.67 ± 19.25
D1275.61 ± 1.18142.31  ± 2.8275.95 ± 0.5477.32  ± 0.26122.87 ± 1.38104.94 ± 6.15
D13664.07 ± 6.49635.02 ± 6.44578.17 ± 4.53707.68 ± 25.26630.96  ± 15.30507.11 ± 89.94
D14417.71 ± 32.64192.66 ± 6.09242.64 ± 49.37371.89 ± 77.69123.42 ± 7.40120.51 ± 7.98
D1552.69 ± 0.44637.31 ± 8.2951.72 ± 0.5455.87 ± 0.67473.91 ± 8.0064.38 ± 6.19
D1641.03 ± 0.42144.48 ± 0.4541.60 ± 0.0540.72 ± 0.06111.79 ± 0.6828.65 ± 0.09
D1753.97 ± 0.8488.93 ± 0.4954.70 ± 0.3653.14 ± 0.0654.37 ± 11.0658.93 ± 10.84
D18122.17 ± 21.75126.41 ± 39.18117.47 ± 9.85122.53 ± 29.08115.12 ± 7.1275.68 ± 7.61
D1986.25 ± 20.06120.99 ± 0.71107.40 ± 20.81105.63 ± 21.03100.11 ± 23.9256.36 ± 8.58
D2060.99 ± 0.57173.97 ± 2.5360.23 ± 0.0661.72 ± 0.83138.64 ± 0.1471.42 ± 12.01
D2165.07 ± 0.09107.92 ± 1.9865.81 ± 0.1066.57  ± 0.3897.23 ± 1.2350.55 ± 0.75
D22232.57 ± 22.23478.93 ± 141.93249.02 ± 76.10304.31 ± 39.79281.79 ± 38.3078.64 ± 2.29
D231181.20 ± 68.711204.65 ± 108.301204.61 ± 86.491178.79  ± 69.781186.05 ± 72.15576.16 ± 77.74
D24123.54 ± 1.21155.56 ± 2.17128.09 ± 0.16121.07 ± 0.15119.57  ± 10.5288.02 ± 0.27
D2596.00 ± 0.86159.74 ± 2.2897.21 ± 0.8098.47 ± 0.63138.97 ± 1.37126.37 ± 5.85
D2655.53 ± 0.1081.56 ± 0.6257.16  ± 0.0455.83 ± 0.0750.07 ± 4.2442.41 ± 0.05
D2751.53 ± 1.0379.69 ± 0.1850.41 ± 0.0549.34 ± 0.0470.65 ± 0.1445.02 ± 2.34
D289275.34 ± 1507.7710,200.77 ± 2736.289238.49 ± 1519.098351.13 ± 1349.943107.74 ± 482.366432.33  ± 801.44
Table 7. Average results obtained by different improved factors of IBGJO.
Algorithm | Metric | D1 | D7 | D10 | D12 | D13 | D15 | D18 | D23 | D25
BGJO | Accuracy | 0.6551 | 0.7468 | 0.8569 | 0.8727 | 0.9736 | 0.8833 | 0.9132 | 0.9800 | 0.9598
BGJO | Feature.N | 175.03 | 4.00 | 4.30 | 15.77 | 22.87 | 197.43 | 10.00 | 8.67 | 12.80
BGJO | Fitness | 0.3477 | 0.2557 | 0.1450 | 0.1307 | 0.0325 | 0.1216 | 0.0903 | 0.0239 | 0.0440
BGJO | Time | 449.13 | 112.59 | 61.54 | 122.87 | 630.96 | 473.91 | 115.12 | 1186.05 | 138.97
T-BGJO | Accuracy | 0.6543 | 0.7468 | 0.8519 | 0.8728 | 0.9733 | 0.8829 | 0.9102 | 0.9794 | 0.9598
T-BGJO | Feature.N | 173.53 | 4.00 | 5.27 | 14.60 | 23.07 | 198.47 | 9.60 | 8.47 | 12.80
T-BGJO | Fitness | 0.3485 | 0.2557 | 0.1507 | 0.1302 | 0.0328 | 0.1220 | 0.0931 | 0.0244 | 0.0440
T-BGJO | Time | 152.07 | 93.60 | 56.72 | 80.57 | 607.59 | 41.60 | 61.62 | 539.04 | 104.90
C-BGJO | Accuracy | 0.6553 | 0.7468 | 0.8574 | 0.8727 | 0.9742 | 0.8838 | 0.9122 | 0.9797 | 0.9597
C-BGJO | Feature.N | 174.90 | 4.00 | 4.37 | 15.53 | 22.60 | 197.07 | 10.33 | 8.57 | 12.90
C-BGJO | Fitness | 0.3476 | 0.2557 | 0.1445 | 0.1306 | 0.0318 | 0.1212 | 0.0914 | 0.0241 | 0.0441
C-BGJO | Time | 119.85 | 100.28 | 63.25 | 107.20 | 503.21 | 59.36 | 73.73 | 550.47 | 127.88
IBGJO | Accuracy | 0.6546 | 0.7468 | 0.8575 | 0.8738 | 0.9746 | 0.8854 | 0.9153 | 0.9800 | 0.9599
IBGJO | Feature.N | 175.13 | 4.00 | 3.90 | 14.73 | 23.13 | 202.13 | 10.17 | 8.27 | 12.23
IBGJO | Fitness | 0.3482 | 0.2557 | 0.1440 | 0.1293 | 0.0316 | 0.1197 | 0.0882 | 0.0238 | 0.0437
IBGJO | Time | 115.36 | 97.86 | 58.60 | 104.94 | 507.11 | 64.38 | 75.68 | 576.16 | 126.37
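Table 7 isolates the two improvement factors: CTM initialization (T-BGJO) and the cosine-similarity-based adaptive position update (C-BGJO). The sketch below illustrates the cosine-similarity measure only; the adaptive rule shown (perturb a candidate when it is already nearly collinear with the leader) is a simplified stand-in for the paper's update equations, not a reproduction of them.

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """cos(theta) = (a . b) / (||a|| * ||b||) between two position vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def adaptive_step(candidate, leader, rng, threshold=0.95):
    """Toy adaptive update: if a candidate already points almost the same way as the
    leader (high cosine similarity), add a small perturbation to preserve diversity;
    otherwise move it halfway towards the leader. Illustrative only."""
    cand = np.asarray(candidate, dtype=float)
    lead = np.asarray(leader, dtype=float)
    if cosine_similarity(cand, lead) > threshold:
        return cand + rng.normal(scale=0.1, size=cand.shape)
    return cand + 0.5 * (lead - cand)

rng = np.random.default_rng(0)
print(adaptive_step([0.2, 0.4, 0.9], [0.6, 0.1, 0.7], rng))
```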
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
