A co-evolutionary algorithm with adaptive penalty function for constrained optimization / de Melo, Vinícius Veloso; Nascimento, Alexandre Moreira; Iacca, Giovanni. - In: SOFT COMPUTING. - ISSN 1432-7643. - 2024, 28:19(2024), pp. 11343-11376. [10.1007/s00500-024-09896-5]
A co-evolutionary algorithm with adaptive penalty function for constrained optimization
Iacca, Giovanni
2024-01-01
Abstract
Several constrained optimization problems have been solved effectively over the years thanks to advances in the area of metaheuristics. Nevertheless, the question of which search logic performs best on constrained optimization often arises. In this paper, we present Dual Search Optimization (DSO), a co-evolutionary algorithm that includes an adaptive penalty function to handle constrained problems. Compared to other self-adaptive metaheuristics, one of the main advantages of DSO is that it is able to auto-construct its own perturbation logics, i.e., the ways solutions are modified to create new ones during the optimization process. This is accomplished by co-evolving the solutions (encoded as vectors of integer/real values) and the perturbation strategies (encoded as Genetic Programming trees), in order to adapt the search to the problem. In addition, the adaptive penalty function allows the algorithm to handle constraints very effectively, with only a minor additional algorithmic overhead. We compare DSO with several state-of-the-art algorithms on two sets of problems, namely: (1) seven well-known constrained engineering design problems and (2) the CEC 2017 benchmark for constrained optimization. Our results show that DSO can achieve state-of-the-art performance, as it is capable of automatically adjusting its behavior to the problem at hand.
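The abstract outlines two ingredients: co-evolution of candidate solutions with GP-encoded perturbation strategies, and an adaptive penalty function for constraint handling. As a rough illustration of the second ingredient only, the following Python sketch shows a generic adaptive penalty wrapper; the class name, update rule, and coefficients are hypothetical and do not reproduce the exact scheme used by DSO in the paper.

```python
import numpy as np

# Illustrative sketch of an adaptive penalty wrapper for a constrained problem.
# The coefficient update rule is a common generic choice, NOT the exact scheme of DSO.

def constraint_violation(x, inequality_constraints):
    """Total violation of g_i(x) <= 0 constraints (0 if x is feasible)."""
    return sum(max(0.0, g(x)) for g in inequality_constraints)

class AdaptivePenalty:
    def __init__(self, initial_coeff=1.0, growth=2.0, shrink=0.5):
        self.coeff = initial_coeff  # current penalty coefficient
        self.growth = growth        # applied when the search keeps producing infeasible bests
        self.shrink = shrink        # applied when the search keeps producing feasible bests

    def penalized_fitness(self, x, objective, inequality_constraints):
        """Objective plus a penalty proportional to the total constraint violation."""
        return objective(x) + self.coeff * constraint_violation(x, inequality_constraints)

    def update(self, best_is_feasible):
        """Adapt the coefficient based on the feasibility of the current best solution."""
        self.coeff *= self.shrink if best_is_feasible else self.growth

# Toy usage: minimize x^2 + y^2 subject to x + y >= 1 (rewritten as 1 - x - y <= 0).
objective = lambda x: float(np.sum(x**2))
constraints = [lambda x: 1.0 - x[0] - x[1]]
penalty = AdaptivePenalty()
x = np.array([0.2, 0.3])
print(penalty.penalized_fitness(x, objective, constraints))
```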
File: Melo_et_al-2024-Soft_Computing (compressed).pdf
Access: open access
Description: online first
Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 9.28 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.