A Complexity Calculation Method for Large Scale Optimization With Evolutionary Algorithms and Metaheuristics


Sarı Z., Yildirim M.

CONCURRENCY AND COMPUTATION: PRACTICE AND EXPERIENCE, vol.38, no.8, pp.1-11, 2026 (Scopus)

Abstract

This study proposes a general-purpose computational complexity method for evaluating the success of metaheuristics and evolutionary algorithms used to solve very large-scale global optimization (LSGO) problems. The contribution of this study is not a new asymptotic complexity theory; rather, it defines how objective function evaluations (FE) in metaheuristic and evolutionary optimization algorithms can be counted in a standard, explicit, and reproducible manner. This reveals the impact of design variations such as decomposition, weighting, special operators, and additional trials on total FE consumption, allowing a fairer budget comparison between different methods. The widely used big-O notation is not directly applicable to comparing metaheuristics or evolutionary algorithms. Researchers have therefore compared their results against other studies in the literature or against benchmark functions to test the performance of their methods, strategies, and improvements. They typically compare solution times or the fitness values obtained for a given number of iterations, decision-variable dimension, and population size. However, the internal complexity of the algorithms, the speed of the computer used, the skill of the programmer, and similar factors affect solution time and cannot be isolated in such comparisons. The proposed method is designed to remove the effects of population size, number of iterations, hardware specifications, programmer/implementation differences, and whether decomposition is used, enabling fair, simple, and efficient comparisons between algorithms. The study also experimentally demonstrates how these parameters affect comparison results and presents a fair evaluation method that balances these effects.
This method aims to increase the reliability of methodological comparisons in the context of LSGO and to facilitate more consistent interpretations of performance reports in the literature.
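The core idea of budgeting by objective function evaluations rather than wall-clock time can be illustrated with a minimal sketch. The wrapper below counts every FE an optimizer consumes, so two algorithms can be compared on equal budgets regardless of hardware or implementation skill. Note that `CountedObjective`, `sphere`, and `random_search` are illustrative names invented for this sketch, not the paper's implementation.

```python
import random


class CountedObjective:
    """Wraps an objective function and counts every evaluation (FE)."""

    def __init__(self, fn):
        self.fn = fn
        self.evaluations = 0

    def __call__(self, x):
        self.evaluations += 1
        return self.fn(x)


def sphere(x):
    """A standard benchmark objective: sum of squared components."""
    return sum(xi * xi for xi in x)


def random_search(objective, dim, fe_budget, seed=0):
    """A trivial baseline optimizer: best of fe_budget random samples.

    Any metaheuristic can be slotted in here; the FE counter in the
    wrapped objective reports exactly how much budget it consumed.
    """
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(fe_budget):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(best, objective(x))
    return best


obj = CountedObjective(sphere)
best = random_search(obj, dim=10, fe_budget=1000)
print(obj.evaluations)  # 1000 — the exact FE consumption, hardware-independent
```

Because the counter lives in the objective wrapper rather than in the optimizer, hidden evaluations (extra trials, repair operators, decomposition sub-searches) are also captured, which is the kind of accounting the proposed method standardizes.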