Abstract
In this paper, two parameter self-adaptation schemes are proposed for the MOEA/D-DE algorithm. These schemes use the fitness improvement ratio to change four parameter values for every individual separately, since in the MOEA/D framework every individual solves its own scalar optimization problem. The first proposed scheme samples new values and replaces the old ones if there is an improvement, while the second keeps a set of memory cells and updates the parameter values using a weighted sum. The proposed methods are tested on two sets of benchmark problems, namely the MOEA/D-DE functions and the WFG functions, and the IGD and HV metrics are calculated. The comparison of results is performed with statistical tests. The comparison shows that the proposed parameter adaptation schemes are capable of delivering significant improvements to the performance of the MOEA/D-DE algorithm. It is also shown that parameter tuning is better than random sampling of parameter values. The proposed parameter self-adaptation techniques could be used in other multi-objective algorithms based on the MOEA/D framework.
1 Introduction
The multi-objective and many-objective optimization techniques based on evolutionary algorithms (EA) and swarm intelligence algorithms (SI) have proved to be efficient for solving complex problems due to their population-based nature. The success of SPEA2 [1], NSGA-II [2], MOEA/D [3] and other methods developed later has shown that it is possible to find representative sets of points of the true Pareto set even for many objectives. However, comparatively little attention has been paid to the underlying optimization techniques used in multi-objective optimization algorithms. In most cases, a multi-objective evolutionary algorithm relies on problem-specific operators, for example, simulated binary crossover (SBX) and polynomial mutation [4, 5] from genetic algorithms (GA), or differential evolution (DE) mutation and crossover operators [6] in the case of numerical optimization.
The search operators taken from single-objective optimization algorithms have a set of parameter values that influence the algorithm's efficiency, and for single-objective optimization various self-adaptation schemes have been proposed [7]. However, applying these techniques to multi-objective optimization is problematic due to the difficulty of estimating the parameters' influence on the algorithm's performance. Moreover, an MOEA aims to find a set of points, each in its own region of the search space, while single-objective algorithms need to find only a single solution. To address this problem, some research has been conducted on adaptive operator selection (AOS) in MOEAs, for example JADE2 [8], MOSaDE [9], MODE/SN [10] and MOEA/D-FRRMAB [11].
In this study, the scheme of operator efficiency estimation proposed in [11] is used to adaptively tune the search parameters of the DE and GA operators, with a set of parameters kept separately for every point. The MOEA/D-DE algorithm is taken as the baseline, and two approaches are proposed: the self-adaptation (SA) scheme similar to the one used in [12], and the success-history adaptation scheme proposed in [13]. The experiments are performed on the MOEA/D-DE and WFG sets of problems, and the results are compared with statistical tests.
The rest of the paper is organized as follows: Sect. 2 describes the MOEA/D framework and search operators, Sect. 3 proposes the new parameter adaptation schemes, Sect. 4 contains experimental setup and results, and Sect. 5 concludes the paper.
2 MOEA/D Algorithm and Self-tuning
The multi-objective evolutionary algorithm based on decomposition (MOEA/D) was originally proposed in [3]. The main idea of this approach is to decompose the initial problem into a set of scalar problems. The multi-objective optimization problem (MOP) is formulated as follows:

minimize F(x) = (f1(x), …, fm(x))T, subject to x ∈ Ω,    (1)

where Ω ⊂ RD is the variable space, x is a solution, and F: Ω → Rm is the vector function to be optimized.
The MOEA/D framework proposes several methods of problem decomposition, including the weighted sum, Tchebycheff and penalized boundary intersection approaches [3]. The Tchebycheff approach is one of the most commonly used, with scalar optimization problems defined as follows:

gte(x | λ, z*) = max1≤j≤m λj |fj(x) − zj*|,    (2)

where λ is a weight vector and z* is a reference point. The MOEA/D algorithm defines a set of weight vectors λj, j = 1, …, N, where N is the population size. In this manner, the algorithm optimizes N scalar problems in one run, which allows finding a representative distribution of points on the Pareto front, provided that the corresponding weight vectors are evenly distributed.
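The Tchebycheff scalarization can be sketched in a few lines of Python (a minimal illustration; the function and variable names are ours, not from the paper):

```python
import numpy as np

def tchebycheff(f, lam, z_star):
    """Tchebycheff scalarization: g_te(x | lambda, z*) = max_j lambda_j * |f_j(x) - z*_j|."""
    return np.max(lam * np.abs(f - z_star))

# Example: two objectives, weight vector (0.5, 0.5), reference point at the origin.
f = np.array([1.0, 3.0])
lam = np.array([0.5, 0.5])
z = np.array([0.0, 0.0])
g = tchebycheff(f, lam, z)  # max(0.5 * 1.0, 0.5 * 3.0) = 1.5
```

Each individual minimizes this scalar value for its own weight vector λ.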
The variation operators used in MOEA/D, be it the SBX crossover or the DE mutation operator, use a neighborhood of size T, which is defined based on the distances between the weight vectors λ. That is, for every individual i = 1, …, N a set of indices B(i) = {i1, …, iT} is defined, such that (λi1, …, λiT) are the T closest vectors to λi. In the case of SBX crossover, one parent is chosen from B(i), and in the case of DE, all three vectors for mutation are chosen from B(i).
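The neighborhood construction described above can be sketched as follows (a minimal illustration under the usual Euclidean-distance assumption; names are ours):

```python
import numpy as np

def build_neighborhoods(weights, T):
    """For each weight vector, return the indices of its T closest weight
    vectors (Euclidean distance), defining B(i) as used in MOEA/D."""
    # Pairwise distance matrix between all weight vectors.
    d = np.linalg.norm(weights[:, None, :] - weights[None, :, :], axis=2)
    # The first column of argsort is the vector itself (distance 0).
    return np.argsort(d, axis=1)[:, :T]

# Example: 5 evenly spread 2-objective weight vectors, neighborhood size T = 3.
w1 = np.linspace(0.0, 1.0, 5)
weights = np.stack([w1, 1.0 - w1], axis=1)
B = build_neighborhoods(weights, T=3)
# Each row of B contains the index of the vector itself plus its two nearest neighbours.
```

For evenly spread weights, B(i) simply gathers each vector's immediate neighbours along the simplex.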
The implementation of self-tuning in a multi-objective framework requires the definition of improvement rates, i.e. feedback values, which could be used to drive parameters towards optimal values at every stage of the search. In [14] the fitness improvement rates (FIR) were proposed to solve this problem, defined as follows:

FIRi,t = (pfi,t − cfi,t) / pfi,t,    (3)

where pfi,t is the fitness of the parent and cfi,t is the fitness of the child for individual i at step t. Since MOEA/D solves a set of N scalar optimization problems, it is possible to calculate improvement rates in a manner similar to single-objective optimization algorithms.
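The FIR computation is straightforward (a sketch assuming minimization and a positive parent fitness, as in the definition above):

```python
def fitness_improvement_rate(parent_fitness, child_fitness):
    """FIR: relative improvement of the child's scalar fitness over the
    parent's; positive when the child is better (minimization assumed)."""
    return (parent_fitness - child_fitness) / parent_fitness

fir = fitness_improvement_rate(2.0, 1.5)  # (2.0 - 1.5) / 2.0 = 0.25
```

A negative FIR signals that the offspring did not improve on the parent, which the adaptation schemes below treat as a zero-weight event.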
Further in this study, DE will be used as the main optimization engine. The main idea of DE is to use scaled difference vectors between members of the population to produce new solutions. Classical DE uses the rand/1 mutation strategy:

vi = xr1 + F·(xr2 − xr3),    (4)

where r1, r2 and r3 are mutually different indices chosen from B(i), F is the scaling factor, and vi is the mutant vector. The crossover is performed with probability Cr:

ui,j = vi,j, if rand(0, 1) < Cr or j = jrand; otherwise ui,j = xi,j,    (5)

where jrand is a randomly selected index from [1, D], required to make sure that at least one coordinate is inherited from the mutant vector, and ui is the trial vector.
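The rand/1 mutation and binomial crossover with neighborhood-restricted parent selection can be sketched as follows (bound handling omitted; names are ours):

```python
import numpy as np

def de_rand_1_bin(pop, i, B_i, F, Cr, rng):
    """DE rand/1 mutation with binomial crossover; the three parent indices
    are drawn from the neighborhood B(i), as in MOEA/D-DE (a sketch)."""
    D = pop.shape[1]
    # Three mutually different indices from the neighborhood.
    r1, r2, r3 = rng.choice(B_i, size=3, replace=False)
    v = pop[r1] + F * (pop[r2] - pop[r3])   # mutant vector
    jrand = rng.integers(D)                 # guaranteed-inherited coordinate
    mask = rng.random(D) < Cr
    mask[jrand] = True
    u = np.where(mask, v, pop[i])           # trial vector
    return u

rng = np.random.default_rng(0)
pop = rng.random((10, 5))                   # 10 individuals, D = 5
u = de_rand_1_bin(pop, i=0, B_i=np.array([0, 1, 2, 3]), F=0.5, Cr=0.9, rng=rng)
```

The trial vector u is then evaluated and compared against the parent via the scalarized fitness.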
The polynomial mutation is performed with probability pm and scale parameter η. This operator is applied after the crossover step to produce an offspring. In total, there are four numeric parameters to be tuned: the scaling factor F, crossover rate Cr, mutation probability pm and mutation parameter η.
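A sketch of the standard Deb-style polynomial mutation, which the paper's operator is assumed to follow (the exact variant is not specified in the text):

```python
import numpy as np

def polynomial_mutation(x, lower, upper, pm, eta, rng):
    """Polynomial mutation: each coordinate is perturbed with probability pm,
    with spread controlled by the distribution index eta (larger eta means
    smaller perturbations)."""
    y = x.copy()
    for j in range(len(x)):
        if rng.random() < pm:
            u = rng.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            y[j] = np.clip(y[j] + delta * (upper[j] - lower[j]), lower[j], upper[j])
    return y

rng = np.random.default_rng(1)
x = np.array([0.5, 0.5, 0.5])
y = polynomial_mutation(x, np.zeros(3), np.ones(3), pm=1.0 / 3, eta=20.0, rng=rng)
```

With η = 20 the perturbations stay small, which matches the default [0.5, 1, 1/D, 20] initialization used below.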
The next section contains the description of the proposed self-adaptation schemes for the MOEA/D-DE algorithm.
3 Proposed Parameter Self-adaptation Schemes
Among single-objective EAs, it is well known that adapting parameter values to the problem at hand is an important part of every algorithm. However, for multi-objective algorithms the typical scenario is to use fixed parameters, or possibly several operators, as described in [14], to adapt the algorithm to the problem being solved. One more important difference of MOPs is that each point in the population is a part of the final solution, and its optimal position lies in its own area of the search space, different from other areas, so that the properties of the fitness function landscape can vary significantly for every individual, especially for decomposition-based approaches.
Taking these considerations into account, one may come up with the idea of parametric adaptation in which every individual has its own set of parameter values, considered suboptimal for the problem at hand. Similar ideas are already known from the literature: in the jDE algorithm [12], pairs of F and Cr parameters were stored for every individual separately. As the search proceeded, these F and Cr values were updated in pursuit of the best possible values. Another well-known approach to parameter adaptation was proposed in the SHADE algorithm [13]. In SHADE, as well as in the algorithms developed on its basis, a set of H memory cells (usually H = 5) is maintained to keep the most suitable parameter values. The memory cells are used to sample new parameter values and are updated with respect to the fitness improvement rates.
Based on these ideas, the first parameter self-adaptation method is proposed. For every individual i = 1, …, N in the population, a set of parameter values [Fi, Cri, pmi, ηi] is maintained. Initially, all these memory cells are set to fixed values, for example [0.5, 1, 1/D, 20]. For every individual, new parameters are sampled using the normal distribution:

pinew = N(pi, σ2), p ∈ {F, Cr, pm, η},    (6)

where N(pi, σ2) denotes a normal distribution centered at the stored value. If some of the parameters were sampled below 0, they were sampled again; however, if F, Cr or pm exceeded 1, the value was set to 1.
After the application of the recombination and variation operators, the fitness value of the child is compared to the parent's fitness, and if there is an improvement, the new parameter values replace the old ones. This approach will be further referred to as MOEA/D-DE-SA.
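The MOEA/D-DE-SA step can be sketched as follows (the sampling spread sigma = 0.1 is an assumption for illustration, as the paper does not state it; names are ours):

```python
import numpy as np

def sample_params(mem, rng, sigma=0.1):
    """Sample candidate [F, Cr, pm, eta] around the stored values.
    Negative samples are redrawn; F, Cr and pm are capped at 1 from above.
    sigma = 0.1 is an assumed spread, not taken from the paper."""
    cand = np.empty(4)
    for k in range(4):
        v = rng.normal(mem[k], sigma)
        while v < 0.0:            # resample values below zero
            v = rng.normal(mem[k], sigma)
        if k < 3:                 # F, Cr, pm are capped at 1; eta is not
            v = min(v, 1.0)
        cand[k] = v
    return cand

def sa_update(mem, cand, parent_fitness, child_fitness):
    """Keep the sampled values only if the offspring improved the parent."""
    return cand if child_fitness < parent_fitness else mem

rng = np.random.default_rng(2)
mem = np.array([0.5, 1.0, 0.1, 20.0])     # [F, Cr, pm, eta]
cand = sample_params(mem, rng)
mem = sa_update(mem, cand, parent_fitness=1.0, child_fitness=0.8)
```

The per-individual memory thus drifts towards values that keep producing improvements for that individual's scalar subproblem.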
The second self-adaptation approach uses the idea of success-history based adaptation, where the parameter values are averaged over several previous steps. For every individual, a set of H = 5 memory cells is maintained, with values MFi,h, MCri,h, Mpmi,h and Mηi,h, i = 1, …, N, h = 1, …, H. Memory cell initialization and sampling are performed in the same way as in MOEA/D-DE-SA; however, the update scheme is changed. The FIR values from Eq. 3 are calculated and used as weights wi,h for the update, performed as follows:

MFi,h = wi,h·Finew + (1 − wi,h)·Fiold,    (7)

where Finew and Fiold are the new and old values of the F parameter. The update scheme for the Cr and pm parameters is the same as for F. Note that if there was no improvement, i.e. FIR < 0, the weight was set to zero, and if all weights were zero, no parameter update took place. The index h, pointing to the memory cell where the temporary parameter values are written, is incremented every generation, and when it exceeds H = 5, it is reset back to 1. This algorithm will be further referred to as MOEA/D-DE-SHA.
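A sketch of one MOEA/D-DE-SHA memory update for a single individual, under the assumption that the update is a FIR-weighted sum of the stored and newly sampled values (our reading of the scheme, not the paper's exact code):

```python
import numpy as np

def sha_update(memory, h, cand, fir):
    """Update memory cell h with the candidate parameters, weighted by the
    fitness improvement rate. A negative FIR (no improvement) gives zero
    weight, i.e. no update. Returns the next cell index, wrapping at H."""
    w = max(fir, 0.0)
    if w > 0.0:
        memory[h] = w * cand + (1.0 - w) * memory[h]
    return (h + 1) % memory.shape[0]

# H = 5 memory cells per individual, each storing [F, Cr, pm, eta].
memory = np.tile(np.array([0.5, 1.0, 0.1, 20.0]), (5, 1))
h = sha_update(memory, h=0, cand=np.array([0.7, 0.9, 0.2, 25.0]), fir=0.25)
```

Because successful values are blended rather than copied, the stored parameters change more smoothly than in MOEA/D-DE-SA.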
In addition to the described algorithm, the approach with random sampling was used, i.e. the parameters were generated as shown in Eq. 6, but no memory cells update schemes were applied. This approach will be further referred to as MOEA/D-DE-RS. The experimental setup and results are presented in the next section.
4 Experimental Setup and Results
The experiments were performed on the set of benchmark problems proposed in [14] for the MOEA/D-DE algorithm, as well as on the set of WFG problems [15]. The population size was set to around 100, with the actual value depending on the number of objectives. The maximum number of function evaluations was set to 10000 for all problems. The algorithms were implemented in the PlatEMO 2.5 system [16].
For every test function 30 independent runs were performed, and the inverted generational distance (IGD) and hypervolume (HV) metrics were calculated. All WFG functions had m = 3 objectives, while all MOEA/D-DE functions had 2 objectives except for F6, which had 3 objectives.
To compare the performance of different algorithms, the Wilcoxon rank sum statistical test with significance level p = 0.05 was used. Table 1 shows the IGD values for all test problems, and Table 2 shows HV values.
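The pairwise comparison procedure can be sketched with SciPy's rank sum test (the IGD samples below are synthetic, for illustration only):

```python
import numpy as np
from scipy.stats import ranksums

# Two algorithms' IGD values over 30 independent runs (synthetic data).
rng = np.random.default_rng(3)
igd_a = rng.normal(0.05, 0.005, size=30)   # e.g. MOEA/D-DE-SHA
igd_b = rng.normal(0.07, 0.005, size=30)   # e.g. fixed-parameter baseline
stat, p = ranksums(igd_a, igd_b)

# "+" if the first algorithm is significantly better (smaller IGD),
# "-" if significantly worse, "=" otherwise, at significance level 0.05.
if p < 0.05:
    mark = "+" if igd_a.mean() < igd_b.mean() else "-"
else:
    mark = "="
```

One such mark is produced per problem and per competitor, yielding the "+/−/=" summaries in the tables.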
The best average values in Tables 1 and 2 are marked. If MOEA/D-DE-SHA was significantly better than the other algorithm, the “+” sign is used; if it was significantly worse, the “−” sign; otherwise, “=”. From Tables 1 and 2 it can be seen that MOEA/D-DE-SA and MOEA/D-DE-SHA outperform both the standard MOEA/D-DE with fixed parameters and MOEA/D-DE-RS with random sampling. However, even random sampling is better than fixed parameter values most of the time.
Comparing MOEA/D-DE-SA and MOEA/D-DE-SHA, in terms of the IGD metric the former has shown better results, being significantly better on 4 problems out of 18, while for the HV metric the two approaches have similar performance, with 3 wins and 3 losses. It is important to mention, however, that MOEA/D-DE-SHA achieved the best IGD and HV values more often than the other methods.
Figure 1 shows the change of parameters of the MOEA/D-DE-SHA algorithm during one of the runs. The Cr and pm parameters start from 1 and gradually decrease to around 0.5–0.7, while the average value of F, shown by the black line, oscillates near 0.5. It can be seen, especially on the η graph, that some of the parameter values gradually increase while others decrease, allowing each point to adapt to its own part of the goal function landscape.
Similar graphs were obtained for other test problems, and the general trend is that the average values of parameters do not change much. For the MOEA/D-DE-SA algorithm, the parameter changes are sharper, as the previous parameter value does not influence the new one directly.
5 Conclusion
The parameter adaptation mechanism is an important part of every single-objective evolutionary and swarm intelligence search algorithm, as it allows a significant improvement in efficiency. Likewise, for multi-objective optimization, a parameter adaptation scheme yields more representative Pareto sets, as demonstrated in this study. The proposed parameter adaptation schemes, MOEA/D-DE-SA and MOEA/D-DE-SHA, use the improvement ratio, which is relatively easy to calculate within the MOEA/D framework; however, the ideas behind these algorithms could be used in other multi-objective optimization approaches.
This study shows that there is potential for improving MOEAs by developing new parameter adaptation schemes, as well as new improvement ratio estimation approaches. Further studies in this direction may include applying the proposed SA and SHA schemes to algorithms using the SBX crossover, or other problem-specific operators with numerical parameters.
References
Zitzler, E., Laumanns, M., Thiele, L.: SPEA2: improving the strength Pareto evolutionary algorithm. Technical report TIK-Report 103, Swiss Federal Institute of Technology, Zurich, Germany (2001)
Deb, K., Pratab, A., Agrawal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002)
Zhang, Q., Li, H.: MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 11(6), 712–731 (2007)
Deb, K., Agrawal, R.B.: Simulated binary crossover for continuous search space. Complex Syst. 9(2), 115–148 (1995)
Deb, K., Deb, D.: Analysing mutation schemes for real-parameter genetic algorithms. Int. J. Artif. Intell. Soft Comput. 4(1), 1–28 (2014)
Das, S., Mullick, S.S., Suganthan, P.N.: Recent advances in differential evolution – an updated survey. Swarm Evol. Comput. 27, 1–30 (2016)
Eiben, A.E., Hinterding, R., Michalewicz, Z.: Parameter control in evolutionary algorithms. IEEE Trans. Evol. Comput. 3(2), 124–141 (1999)
Zhang, J., Sanderson, A.C.: Self-adaptive multiobjective differential evolution with direction information provided by archived inferior solutions. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 2806–2815, July 2008
Huang, V.L., Qin, A.K., Suganthan, P.N., Tasgetiren, M.F.: Multiobjective optimization based on self-adaptive differential evolution algorithm. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 3601–3608, May 2007
Li, K., Kwong, S., Wang, R., Cao, J., Rudas, I.J.: Multiobjective differential evolution with self-navigation. In: Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, pp. 508–513, October 2012
Li, K., Fialho, A., Kwong, S., Zhang, Q.: Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 18(1), 114–130 (2014)
Brest, J., Greiner, S., Bošković, B., Mernik, M., Žumer, V.: Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput. 10(6), 646–657 (2006)
Tanabe, R., Fukunaga, A.: Success-history based parameter adaptation for differential evolution. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 71–78 (2013)
Li, H., Zhang, Q.: Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE Trans. Evol. Comput. 13(2), 284–302 (2009)
Huband, S., Hingston, P., Barone, L., While, L.: A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans. Evol. Comput. 10(5), 477–506 (2006)
Tian, Y., Cheng, R., Zhang, X., Jin, Y.: PlatEMO: a MATLAB platform for evolutionary multi-objective optimization. IEEE Comput. Intell. Mag. 12(4), 73–87 (2017). https://doi.org/10.1109/mci.2017.2742868
Acknowledgments
This work was supported by the internal grant of Reshetnev Siberian State University of Science and Technology for the support of young researchers.
Copyright information
© 2020 Springer Nature Switzerland AG
Akhmedova, S., Stanovov, V. (2020). Success-History Based Parameter Adaptation in MOEA/D Algorithm. In: Tan, Y., Shi, Y., Tuba, M. (eds) Advances in Swarm Intelligence. ICSI 2020. Lecture Notes in Computer Science(), vol 12145. Springer, Cham. https://doi.org/10.1007/978-3-030-53956-6_41