Abstract: Nature has long inspired the development of swarm intelligence (SI), a key branch of artificial intelligence that models collective behaviors observed in biological systems to solve complex optimization problems. Particle swarm optimization (PSO) is one of the most widely adopted SI algorithms due to its simplicity and efficiency. Although numerous learning strategies have been proposed to enhance PSO's convergence speed, robustness, and adaptability, no comprehensive and systematic analysis of these strategies exists. To address this gap, we review and classify these learning strategies and assess their impact on optimization performance. Additionally, a comparative experimental evaluation is conducted to examine how these strategies influence PSO's search dynamics. Finally, we discuss open challenges and future directions, emphasizing the need for self-adaptive, intelligent PSO variants capable of addressing increasingly complex real-world problems.
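For context, the learning strategies surveyed above all build on the canonical PSO velocity and position update, which can be sketched as follows. The inertia weight w and acceleration coefficients c1 and c2 below are standard illustrative defaults, not parameters taken from the paper, and the sketch implements plain global-best PSO rather than any specific variant reviewed.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO sketch (illustrative defaults, not a surveyed variant)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds                                   # per-dimension lower/upper bounds
    dim = lo.shape[0]
    x = rng.uniform(lo, hi, size=(n_particles, dim))  # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest_x = x.copy()                                # personal best positions
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest_x[np.argmin(pbest_f)]               # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # canonical update: inertia + cognitive (pbest) + social (gbest) components
        v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest_x[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest_x[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# example: minimize a 10-dimensional sphere function
best_x, best_f = pso_minimize(lambda p: float(np.sum(p ** 2)),
                              (np.full(10, -5.0), np.full(10, 5.0)))
```

The learning strategies discussed in the survey typically modify how the exemplars in this update (the pbest and gbest terms) are chosen, rather than the overall update structure.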
Abstract: Multi-modal optimization involves identifying multiple global and local optima of a function, offering valuable insights into diverse optimal solutions within the search space. Evolutionary algorithms (EAs) excel at finding multiple solutions in a single run, a distinct advantage over classical optimization techniques, which often require multiple restarts with no guarantee of obtaining diverse solutions. Among these EAs, differential evolution (DE) stands out as a powerful and versatile optimizer for continuous parameter spaces. DE has shown significant success in multi-modal optimization by using its population-based search to promote the formation of multiple stable subpopulations, each targeting a different optimum. Recent advancements in DE for multi-modal optimization have focused on niching methods, parameter adaptation, hybridization with other algorithms (including machine learning), and applications across various domains. Given these developments, it is an opportune moment to critically review the latest literature and identify key future research directions. This paper offers a comprehensive overview of recent DE advancements in multi-modal optimization, including methods for handling multiple optima, hybridization with other EAs and machine learning, and a range of real-world applications. Additionally, it outlines a set of compelling open problems and future research issues from multiple perspectives.
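To make the niching idea concrete, the sketch below shows a crowding-based DE loop in which each trial vector competes with its nearest population member rather than with its own parent, one common way of letting stable subpopulations form around different optima. The control parameters F, CR, and the population size are illustrative assumptions, not values taken from the reviewed literature.

```python
import numpy as np

def crowding_de(f, bounds, pop_size=50, n_iters=300, F=0.5, CR=0.9, seed=0):
    """Crowding-based DE sketch for multi-modal minimization (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.shape[0]
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(n_iters):
        for i in range(pop_size):
            # DE/rand/1 mutation using three distinct individuals other than i
            candidates = np.delete(np.arange(pop_size), i)
            r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])
            # binomial crossover, forcing at least one component from the mutant
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            f_trial = f(trial)
            # crowding replacement: compete with the nearest individual, not with i,
            # so improvements stay local and multiple optima can be retained
            j = int(np.argmin(np.linalg.norm(pop - trial, axis=1)))
            if f_trial < fit[j]:
                pop[j], fit[j] = trial, f_trial
    return pop, fit  # final population approximates several optima simultaneously
```

Other niching schemes differ mainly in how this replacement step restricts competition to similar individuals.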
Abstract: In recent years, multi-operator and multi-method algorithms have proven successful, encouraging their combination within single frameworks. Despite promising results, there remains room for improvement, as no single evolutionary algorithm (EA) consistently excels across all optimization problems. This paper proposes mLSHADE-RL, an enhanced version of LSHADE-cnEpSin, one of the winners of the CEC 2017 competition on real-parameter single-objective optimization. mLSHADE-RL integrates multiple EAs and search operators to further improve performance. Three mutation strategies, namely DE/current-to-pbest-weight/1 with archive, DE/current-to-pbest/1 without archive, and DE/current-to-ordpbest-weight/1, are integrated into the original LSHADE-cnEpSin. A restart mechanism is also proposed to overcome the tendency to become trapped in local optima. Additionally, a local search method is applied in the later phase of the evolutionary procedure to enhance the exploitation capability of mLSHADE-RL. mLSHADE-RL is tested on the 30-dimensional problems of the CEC 2024 competition on single-objective bound-constrained optimization and demonstrates superior performance over other state-of-the-art algorithms in producing high-quality solutions across various optimization scenarios.
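For reference, the three mutation strategies named above are all variants of DE/current-to-pbest/1, whose base form can be sketched as follows. The exact weighting and archive handling used in mLSHADE-RL are not specified in this abstract, so the helper below (including the p and F parameters and the optional archive argument) is an illustrative approximation rather than the paper's implementation.

```python
import numpy as np

def current_to_pbest_1(pop, fit, i, F=0.5, p=0.11, archive=None, rng=None):
    """Base DE/current-to-pbest/1 mutation sketch (minimization).
    Index-distinctness checks (r1 != r2 != i) are omitted for brevity."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(pop)
    # pbest: a random individual drawn from the best p*100% of the population
    n_top = max(1, int(round(p * n)))
    pbest = pop[rng.choice(np.argsort(fit)[:n_top])]
    r1 = rng.integers(n)
    # with an archive, the second difference vector may also draw from archived solutions
    union = pop if archive is None or len(archive) == 0 else np.vstack([pop, archive])
    r2 = rng.integers(len(union))
    return pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - union[r2])
```

The weighted and ordered-pbest variants named in the abstract alter how the pbest exemplar is selected or scaled; those details follow the paper rather than this sketch, and the archive-free strategy corresponds to calling the helper with archive=None.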