
JULY 2024: THE ENTIRE PODIUM IS OURS! AND THAT IS NOT ALL

At the world competitions for the best optimization algorithm, held annually among the world's leading scientific teams, our scientists regularly win prizes. We have written about this before: (https://www.sibsau.ru/content/2590/).

In 2021, our scientists took first place, ahead of the silver medalists by 15%. In the 2022 competition, they took second place, only 2% behind the winners. Interestingly, the 2022 winners used the 2021 winning algorithm, improving it slightly beyond what our specialists had achieved.

However, what happened this year is an outstanding event: the algorithms of our scientists took three prizes in three competitions. In July 2024, the entire podium is ours! At the latest world championship of optimization algorithms, held at the IEEE Congress on Evolutionary Computation (CEC'2024, Yokohama, Japan), two types of competitions took place: a competition of unconstrained optimization algorithms, in which the independent variables of the objective function may take arbitrary values, and a competition of constrained optimization algorithms, in which significant constraints are imposed on the variables. Experts know that these are two fundamentally different problem statements, requiring very different ideas and approaches.
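The difference between the two problem statements can be conveyed with a toy sketch: minimizing the same function with the variables left free versus subject to a constraint, the latter handled here with a simple penalty term. This is a generic illustration only; the test function, the constraint, and the penalty weight are arbitrary choices and have nothing to do with the competition algorithms themselves.

```python
import random

def sphere(x):
    # Objective f(x) = sum of x_i^2; unconstrained minimum is 0 at the origin.
    return sum(v * v for v in x)

def constraint_violation(x):
    # Toy constraint g(x) = 1 - x_0 <= 0, i.e. x_0 must be at least 1.
    return max(0.0, 1.0 - x[0])

def random_search(f, dim, samples=20000, lo=-5.0, hi=5.0, seed=1):
    # Plain random search: sample points uniformly, keep the best one seen.
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(samples):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def penalized(x):
    # A large penalty turns constraint violation into extra cost,
    # steering the search toward the feasible region x_0 >= 1.
    return sphere(x) + 1e6 * constraint_violation(x) ** 2

# Unconstrained statement: the optimum found approaches f = 0.
x_u, f_u = random_search(sphere, dim=2)

# Constrained statement: the feasible optimum is near x = (1, 0) with f = 1.
x_c, f_c = random_search(penalized, dim=2)
```

A penalty is only one of many constraint-handling techniques, but the sketch shows why the two settings demand different ideas: the constrained optimum (here f = 1 at the boundary of the feasible region) can be far from the unconstrained one (f = 0).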

In the unconstrained optimization competition, our scientists' algorithm [1] became the winner. In the constrained optimization competition, our algorithm [2] took the silver medal. A great result.

However, that is not all. Soon afterwards, at GECCO (The Genetic and Evolutionary Computation Conference) in Melbourne, Australia, a competition of algorithms for the unconstrained optimization of special functions was held. The organizers proposed functions with parameters that change the optimization landscape in ways unpredictable to the participants. In such a situation, algorithms using local search and other methods of tracking local changes in the properties of the optimized function were expected to win. The first and second places were indeed taken by such algorithms, the best of their kind, but the third prize was unexpectedly won by the universal self-redesigning algorithm of our scientists [3]. Evidently it also has the properties needed in such situations, although it was not specially designed for them. As they say, it figured it out itself.

So smart, and at the same time stochastic, i.e. using randomly selected strategies. As the founder of random search in the USSR, Professor Leonard Andreevich Rastrigin, said: 'Randomness is good because it obviously includes all options, including the best ones. Teaching randomness to give us the joy of the best choice is the task of specialists.' Our scientists have taught it. Again quoting Leonard Andreevich: 'Knowledge of some principles replaces knowledge of many facts.' Using the principles of self-learning saves the time and effort of inventing new algorithms: the algorithm of our scientists learned and redesigned itself for new tasks.
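The flavour of such self-adjusting stochastic search can be conveyed by a deliberately simple sketch: a random local search on Rastrigin's own multimodal test function, whose step size adapts to its recent success rate (the classic "1/5th success rule"). This illustrates only the general idea of success-rate-based adaptation; it is not the L-SRTDE or CL-SRDE algorithms, whose actual details are in the referenced papers and repositories.

```python
import math
import random

def rastrigin(x):
    # Rastrigin's multimodal test function: global minimum 0 at the origin,
    # surrounded by a regular grid of local minima.
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def adaptive_random_search(f, dim, iters=5000, seed=7):
    # Illustrative adaptive random search: propose Gaussian perturbations,
    # keep improvements, and tune the step size from the measured success
    # rate over a window of trials.
    rng = random.Random(seed)
    x = [rng.uniform(-5.12, 5.12) for _ in range(dim)]
    fx = f(x)
    step = 1.0
    successes = 0
    for t in range(1, iters + 1):
        cand = [v + rng.gauss(0.0, step) for v in x]
        fc = f(cand)
        if fc < fx:          # accept improvements only
            x, fx = cand, fc
            successes += 1
        if t % 50 == 0:      # adapt the step size every 50 trials
            rate = successes / 50.0
            step *= 1.5 if rate > 0.2 else 0.7
            successes = 0
    return x, fx

best_x, best_f = adaptive_random_search(rastrigin, dim=2)
```

The step size grows while moves keep succeeding and shrinks when they stop: the same feedback principle, in miniature, as adapting a search strategy from its measured success rate.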

Congratulations to our scientists on yet another outstanding world-class achievement!

If the reader is interested in such remarkable optimization algorithms, which tune themselves to the point of being applicable in the widest range of cases and are indeed among the very best, you can get acquainted with them by reading the articles linked below and by examining the programs on GitHub, links to which are also provided [4, 5, 6].

We should also note that the author of this summer's outstanding scientific results, Vladimir Vadimovich Stanovov, PhD, associate professor of the Department of Higher Mathematics at SibSAU, has a pedagogical achievement this year as well. His Master's student Eduard Morozov won the competition for grants of the President of the Russian Federation for an educational and scientific internship abroad; notably, he was the only SibSAU student to win this competition this year. Eduard will spend the next semester at Shenyang Aerospace University, People's Republic of China, a long-standing partner of our university, studying the artificial intelligence technologies used there.

[1] V. Stanovov and E. Semenkin, "Success Rate-based Adaptive Differential Evolution L-SRTDE for CEC 2024 Competition," 2024 IEEE Congress on Evolutionary Computation (CEC), Yokohama, Japan, 2024, pp. 1-8. (https://ieeexplore.ieee.org/document/10611907)
[2] V. Stanovov and E. Semenkin, "Differential Evolution with Success Rate-based adaptation CL-SRDE for Constrained Optimization," 2024 IEEE Congress on Evolutionary Computation (CEC), Yokohama, Japan, 2024, pp. 1-8. (https://ieeexplore.ieee.org/document/10612145)
[3] V. Stanovov, "Success Rate-based Adaptive Differential Evolution L-SRTDE for GNBG 2024 Competition," GECCO 2024 technical report, Melbourne, 2024. (https://competition-hub.github.io/GNBG-Competition/)
[4] CEC L-SRTDE: https://github.com/VladimirStanovov/L-SRTDE_CEC-2024
[5] CEC CL-SRDE: https://github.com/VladimirStanovov/CL-SRDE_CEC-2024
[6] GECCO L-SRTDE: https://github.com/VladimirStanovov/L-SRTDE_GNBG-24/tree/master