A multi-policy sequence-based selection hyper-heuristic for multi-objective optimization

Urbani, Michele (first author); Pilati, Francesco (last author)
2023-01-01

Abstract

Sequence-based selection hyper-heuristics (SSHH) are search strategies that can be successfully applied to multi-objective optimization (MOO) problems. They have shown robust results across different types of optimization problems, although this may come at the cost of a longer resolution time than other metaheuristics. Learning a single selection rule in MOO might be limiting, because the information carried by the multiple objectives may go unexploited. A novel approach, named the multi-policy approach, is proposed and discussed to further enhance the search ability of sequence-based selection hyper-heuristics, together with an ad-hoc learning rule. The availability of a set of problem-specific low-level heuristics is assumed, and a specific class of combinatorial problems, i.e., vehicle routing problems (VRPs), is considered for a numerical example. The multi-policy approach showed the ability to learn a different selection rule for each objective and to change it during the execution of the algorithm, i.e., as solutions get closer to the Pareto front. Since the methodology is suitable for parallelization, parallel computation can speed up the calculation. The proposed methodology was found to produce production-ready solutions, and the approach is expected to extend successfully to problems other than the VRP.
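The following is a minimal, illustrative sketch of the multi-policy idea described in the abstract: one sequence-based selection policy per objective, each modeled here as a transition matrix over low-level heuristics that is reinforced when a sampled sequence improves its objective. The class name, the learning rule, the acceptance criterion, and all parameters are assumptions made for illustration; the paper's actual operators, reward scheme, and VRP low-level heuristics are not reproduced here.

# Hypothetical sketch of a multi-policy sequence-based selection hyper-heuristic.
# All design details below are assumptions, not the authors' implementation.

import random


class MultiPolicySSHH:
    def __init__(self, low_level_heuristics, n_objectives, seq_len=3, reward=0.1):
        self.llhs = low_level_heuristics          # problem-specific operators (assumed callables)
        self.n_objectives = n_objectives
        self.seq_len = seq_len
        self.reward = reward
        n = len(low_level_heuristics)
        # One selection policy per objective: a row-stochastic transition
        # matrix over low-level heuristics (sequence-based selection).
        self.policies = [
            [[1.0 / n] * n for _ in range(n)] for _ in range(n_objectives)
        ]

    def _sample_sequence(self, policy):
        # Sample a sequence of heuristic indices from one policy.
        seq = [random.randrange(len(self.llhs))]
        for _ in range(self.seq_len - 1):
            probs = policy[seq[-1]]
            seq.append(random.choices(range(len(self.llhs)), weights=probs)[0])
        return seq

    def _reinforce(self, policy, seq):
        # Assumed ad-hoc learning rule: reward the transitions just used,
        # then renormalize the affected rows.
        for prev, nxt in zip(seq, seq[1:]):
            policy[prev][nxt] += self.reward
            total = sum(policy[prev])
            policy[prev] = [p / total for p in policy[prev]]

    def step(self, solution, evaluate):
        # One iteration: improve `solution` with respect to a randomly chosen objective.
        k = random.randrange(self.n_objectives)
        seq = self._sample_sequence(self.policies[k])
        candidate = solution
        for idx in seq:
            candidate = self.llhs[idx](candidate)
        old_f, new_f = evaluate(solution), evaluate(candidate)
        # Accept and reinforce only if the targeted objective improved
        # (assumption; a Pareto-dominance based acceptance could be used instead).
        if new_f[k] < old_f[k]:
            self._reinforce(self.policies[k], seq)
            return candidate
        return solution

A driver loop could repeatedly call step on each member of a population while maintaining a non-dominated archive; because each objective's policy is updated independently, such steps can be distributed across workers, which is consistent with the abstract's remark that the methodology is suitable for parallelization.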
2023
GECCO’23 Companion: Proceedings of the 2023 Genetic and Evolutionary Computation Conference Companion
New York, NY, USA
ACM
9798400701207
Urbani, Michele; Pilati, Francesco
A multi-policy sequence-based selection hyper-heuristic for multi-objective optimization / Urbani, Michele; Pilati, Francesco. - (2023), pp. 415-418. (Paper presented at the GECCO’23 conference held in Lisbon, Portugal, 15th-19th July 2023) [10.1145/3583133.3590663].
Files in this item:
File: 3583133.3590663.pdf (restricted: archive administrators only)
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 599.54 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/399144
Citations
  • Scopus 0