
Tutorials

GECCO 2024 will feature 32 tutorials.

Title / Organizers
A Deep Dive into Robust Optimization Over Time: Problems, Algorithms, and Beyond
  • Danial Yazdani Data Science Institute, University of Technology Sydney, Sydney, Australia
  • Xin Yao Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China, and Center of Excellence for Research in Computational Intelligence and Applications (CERCIA), School of Computer Science, University of Birmingham, Birmingham, United Kingdom
A Gentle Introduction to Theory (for Non-Theoreticians)
  • Benjamin Doerr École Polytechnique, France
Bayesian Optimisation
  • Jürgen Branke University of Warwick, UK
  • Sebastian Rojas Gonzalez University of Ghent, Belgium
  • Ivo Couckuyt University of Ghent
Benchmarking and analyzing iterative optimization heuristics with IOHprofiler
  • Carola Doerr CNRS and Sorbonne University, France
  • Diederick Vermetten Leiden Institute for Advanced Computer Science
  • Jacob de Nobel Leiden Institute of Advanced Computer Science
  • Thomas Bäck LIACS, Leiden University, The Netherlands
Coevolutionary Computation for Adversarial Deep Learning
  • Jamal Toutouh University of Málaga, Málaga, Spain
  • Una-May O’Reilly MIT, USA
Constraint-Handling Techniques used with Evolutionary Algorithms
  • Carlos Coello Coello CINVESTAV-IPN, Mexico
Evolution of Neural Networks
  • Risto Miikkulainen The University of Texas at Austin and Cognizant Technology Solutions, USA
Evolutionary Art and Design in the Machine Learning Era
  • Penousal Machado University of Coimbra, CISUC, DEI
  • João Correia University of Coimbra, Portugal
Evolutionary Bilevel Optimization: Algorithms and Applications
  • Kalyanmoy Deb Michigan State University, USA
Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition
  • Stefano Cagnoni University of Parma
  • Ying Bi Zhengzhou University, China
  • Yanan Sun Sichuan University, China
Evolutionary Computation for Feature Selection and Feature Construction
  • Bing Xue Victoria University of Wellington, New Zealand
  • Mengjie Zhang Victoria University of Wellington, New Zealand
Evolutionary computation for stochastic problems
  • Frank Neumann University of Adelaide, Australia
  • Aneta Neumann The University of Adelaide, Australia
  • Hemant Kumar Singh University of New South Wales
Evolutionary Computation meets Machine Learning for Combinatorial Optimisation
  • Yi Mei School of Engineering and Computer Science, Victoria University of Wellington, New Zealand
  • Günther Raidl TU Wien, Austria
Evolutionary Machine Learning for Interpretable and eXplainable AI
  • Abubakar Siddique Wellington Institute of Technology, Te Pūkenga – Whitireia WelTec
  • Will N. Browne Queensland University of Technology, Australia
  • Ryan Urbanowicz Cedars Sinai Medical Center, Los Angeles, California, USA
Evolutionary Multiobjective Optimization (EMO)
  • Joshua Knowles SLB
  • Weijie Zheng Harbin Institute of Technology, Shenzhen
Evolutionary Reinforcement Learning
  • Manon Flageat Imperial College London
  • Bryan Lim Imperial College London
  • Antoine Cully Imperial College London, UK
Generative Hyper-heuristics
  • Daniel Tauritz Auburn University, USA
  • John Woodward Queen Mary University of London, UK
Genetic Improvement: Taking real-world source code and improving it using computational search methods
  • Alexander Brownlee University of Stirling
  • Saemundur Haraldsson University of Stirling
  • John R. Woodward Loughborough University, UK
  • Markus Wagner Monash University, Australia
Instance Space Analysis and Item Response Theory for Algorithm Testing
  • Kate Smith-Miles The University of Melbourne
  • Mario Andrés Muñoz School of Computer and Information Systems, The University of Melbourne, Australia.
  • Sevvandi Kandanaarachchi Statistical Machine Learning Group, Data61, CSIRO
Introduction to Quantum Optimization
  • Alberto Moraglio University of Exeter, UK
  • Francisco Chicano University of Malaga, Spain
Landscape Analysis of Optimization Problems and Algorithms
  • Gabriela Ochoa University of Stirling, UK
  • Katherine Malan University of South Africa
Linear Genetic Programming
  • Wolfgang Banzhaf Michigan State University
  • Ting Hu School of Computing, Queen's University, Canada
Model-Based Evolutionary Algorithms
  • Dirk Thierens Utrecht University, The Netherlands
  • Peter A. N. Bosman Centre for Mathematics and Computer Science, The Netherlands
New Framework of Multi-Objective Evolutionary Algorithms with Unbounded External Archive
  • Hisao Ishibuchi Southern University of Science and Technology
  • Lie Meng Pang Southern University of Science and Technology
  • Ke Shang Southern University of Science and Technology
Next Generation Genetic Algorithms - efficient crossover and local search and new results on crossover lattices
  • Darrell Whitley Colorado State University, United States
Representations for Evolutionary Algorithms
  • Franz Rothlauf Universität Mainz
Robot Evolution: from Virtual to Real
  • Gusz Eiben Vrije Universiteit Amsterdam
Runtime Analysis of Population-based Evolutionary Algorithms
  • Per Kristian Lehre University of Birmingham
Statistical Analyses for Single-objective Stochastic Optimization Algorithms
  • Tome Eftimov Jožef Stefan Institute, Slovenia
  • Peter Korošec Jožef Stefan Institute
Theory and Practice of Population Diversity in Evolutionary Computation
  • Dirk Sudholt University of Passau, Germany
  • Giovanni Squillero Politecnico di Torino, Italy
Transfer Learning in Evolutionary Spaces
  • Nelishia Pillay University of Pretoria
Using Large Language Models for Evolutionary Search
  • Una-May O’Reilly MIT, USA
  • Erik Hemberg Massachusetts Institute of Technology, CSAIL, Cambridge, USA


Tutorials

A Deep Dive into Robust Optimization Over Time: Problems, Algorithms, and Beyond

In the evolving landscape of optimization, Dynamic Optimization Problems (DOPs) manifest as a pivotal area of exploration. These problems, characterized by their changing search space over time, present a maze of challenges for optimization algorithms. While much of the existing literature on DOPs primarily focuses on tracking the moving optimum, many real-world DOPs present a different set of challenges and impose a distinct set of requirements. In many practical scenarios, frequent changes to deployed solutions are often undesirable. This aversion stems from various factors, including the high cost associated with switching between deployed solutions, limitations on the resources required to deploy new solutions, and the system's inability to tolerate frequent changes in the deployed solutions.

Robust Optimization Over Time (ROOT) emerges as a beacon in such dynamic scenarios, intertwining the principles of robust optimization and dynamic optimization to form a robust framework capable of navigating the turbulent waters of DOPs. ROOT acknowledges the high cost and resource limitations associated with frequent solution deployments, striving for algorithms capable of dealing with the implications of deploying or maintaining solutions over longer time horizons involving multiple environmental changes.
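
To make this idea concrete, the following minimal Python sketch (our own illustration under stated assumptions, not code from the tutorial) computes the "survival time" of a deployed solution, i.e., the number of consecutive environments in which its fitness stays above an acceptability threshold; the moving-peak fitness functions and the threshold are placeholders.

    import numpy as np

    def survival_time(solution, environments, threshold):
        """Number of consecutive environments (from deployment onwards) in which
        the deployed solution stays acceptable, a common ROOT robustness measure."""
        time = 0
        for fitness_fn in environments:
            if fitness_fn(solution) < threshold:
                break
            time += 1
        return time

    # Toy dynamic problem (assumed): a single peak whose position drifts over time.
    rng = np.random.default_rng(0)
    peak_positions = np.cumsum(rng.normal(0.0, 0.3, size=10))
    environments = [lambda x, c=c: 5.0 - (x - c) ** 2 for c in peak_positions]

    deployed_solution = 0.0  # solution chosen and deployed in the first environment
    print("survival time:", survival_time(deployed_solution, environments, threshold=4.0))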

In this tutorial, we unravel the intricacies of ROOT, providing a gateway to understand, analyze, and address these problems adeptly. The tutorial is structured to offer a panoramic view of the ROOT realm, covering the underlying problems, innovative algorithms designed to tackle these problems, and benchmarks and performance indicators crucial for evaluating the robustness and effectiveness of these algorithms.

Key takeaways from this tutorial:
• Understanding DOPs: Unveil the nature, characteristics, and practical implications of optimization in dynamic environments.
• Distinguishing between ROOT, Tracking the Moving Optimum, and Robust Optimization: Uncover the unique aspects and differences between these three classes of optimization problems, providing a clearer understanding of the distinct challenges and solutions associated with ROOT.
• Exploring ROOT: Delve into the essence of Robust Optimization Over Time, understanding its significance, methodologies, and the rationale behind its inception.
• Algorithmic Innovations: Discover the algorithms at the forefront of ROOT, designed to navigate the dynamic optimization landscapes proficiently.
• Benchmarking and Performance Evaluation: Learn about the benchmarks integral for algorithm evaluation and the performance indicators that offer a lens to gauge algorithmic effectiveness in dynamic scenarios.
• Future Research Directions: Engage with the open challenges and potential future research avenues in ROOT, fostering a culture of inquiry and exploration.
The tutorial is designed to catalyze a deeper understanding and appreciation of ROOT, paving the way for researchers and practitioners to explore, innovate, and contribute to this fascinating domain. In addition to our core objective of informing and educating, we envision this tutorial as a vibrant forum for the exchange of ideas among researchers. Overall, this tutorial presents a unique opportunity to spotlight the latest developments on this compelling research topic to the Evolutionary Computation research community.

Danial Yazdani

Danial Yazdani received his Ph.D. degree in computer science from Liverpool John Moores University, Liverpool, U.K., in 2018. He is currently a Research Fellow at the Data Science Institute, University of Technology Sydney. Prior to that, he was a Research Assistant Professor with the Department of Computer Science and Engineering at the Southern University of Science and Technology, Shenzhen, China. His primary research interests include learning and optimization in dynamic environments, where he has contributed as the first author in over 20 peer-reviewed publications in this field, nine of which were published in prestigious IEEE/ACM Transactions. Dr. Yazdani was a recipient of the 2023 IEEE Computational Intelligence Society Outstanding PhD Dissertation Award, the Best Thesis Award from the Faculty of Engineering and Technology at Liverpool John Moores University, and the SUSTech Presidential Outstanding Postdoctoral Award from Southern University of Science and Technology.

Xin Yao

Xin Yao obtained his Ph.D. in 1990 from the University of Science and Technology of China (USTC), MSc in 1985 from North China Institute of Computing Technologies, and BSc in 1982 from USTC. He is a Chair Professor of Computer Science at the Southern University of Science and Technology, Shenzhen, China, and a part-time Professor of Computer Science at the University of Birmingham, UK. He is an IEEE Fellow and was a Distinguished Lecturer of the IEEE Computational Intelligence Society (CIS). His major research interests include evolutionary computation, ensemble learning, and their applications to software engineering. Prof. Yao's paper on evolving artificial neural networks won the 2001 IEEE Donald G. Fink Prize Paper Award. He also won 2010, 2016, and 2017 IEEE Transactions on Evolutionary Computation Outstanding Paper Awards, 2011 IEEE Transactions on Neural Networks Outstanding Paper Award, and many other best paper awards. He received a prestigious Royal Society Wolfson Research Merit Award in 2012, the IEEE CIS Evolutionary Computation Pioneer Award in 2013, and the 2020 IEEE Frank Rosenblatt Award. He was the President (2014-15) of IEEE CIS and the Editor-in-Chief (2003-08) of IEEE Transactions on Evolutionary Computation.

A Gentle Introduction to Theory (for Non-Theoreticians)

This tutorial addresses GECCO attendees who do not regularly use theoretical methods in their research. For this audience, we give a smooth introduction to the theory of evolutionary computation. Complementing other introductory theory tutorials, we do not discuss mathematical methods or particular results, but explain:

- what the theory of evolutionary algorithms aims at,
- how theoretical research in evolutionary computation is conducted,
- how to interpret statements from the theory literature,
- what the most important theory contributions are, and
- what the theory community is trying to understand most at the moment.

Benjamin Doerr

Benjamin Doerr is a full professor at the French Ecole Polytechnique. He received his diploma (1998), PhD (2000) and habilitation (2005) in mathematics from Kiel University. His research area is the theory of both problem-specific algorithms and randomized search heuristics like evolutionary algorithms. Major contributions to the latter include runtime analyses for existing evolutionary algorithms, the determination of optimal parameter values, and complexity theoretic results. Benjamin's recent focus is the theory-guided design of novel operators, on-the-fly parameter choices, and whole new evolutionary algorithms, hoping that theory not only explains, but also develops evolutionary computation.

Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded the theory track at GECCO and served as its co-chair 2007-2009, 2014, and 2023. He is a member of the editorial boards of "Artificial Intelligence", "Evolutionary Computation", "Natural Computing", "Theoretical Computer Science", and three journals on classic algorithms theory. Together with Anne Auger, he edited the first book focused on theoretical aspects of evolutionary computation ("Theory of Randomized Search Heuristics", World Scientific 2011). Together with Frank Neumann, he is an editor of the recent book "Theory of Evolutionary Computation - Recent Developments in Discrete Optimization" (Springer 2020).

Bayesian Optimisation

One of the strengths of evolutionary algorithms (EAs) is that they can be applied to black-box optimisation problems. For the sub-class of low-dimensional continuous black-box problems that are expensive to evaluate, Bayesian optimization (BO) has become a very popular alternative. BO has applications ranging from hyperparameter tuning of deep learning models and design optimization in engineering to stochastic optimization in operational research.

Bayesian optimization builds a probabilistic surrogate model, usually a Gaussian process, based on all previous evaluations. Gaussian processes are not only able to predict the quality of new solutions, but also approximate the uncertainty around the prediction. This information is then used to decide what solution to evaluate next, explicitly trading off exploration (high uncertainty) and exploitation (high quality). This trade-off is modeled by the acquisition function which quantifies how ‘interesting’ a solution is to evaluate.
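
To illustrate this trade-off concretely, the sketch below (a minimal example under our own assumptions, with a toy objective; it is not material from the tutorial) fits a Gaussian process with scikit-learn and ranks candidate points by the Expected Improvement acquisition function for a minimisation problem.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def expected_improvement(X_cand, gp, f_best, xi=0.01):
        """EI for minimisation: large where the GP predicts a low mean (exploitation)
        or a high predictive uncertainty (exploration)."""
        mu, sigma = gp.predict(X_cand, return_std=True)
        sigma = np.maximum(sigma, 1e-12)
        z = (f_best - mu - xi) / sigma
        return (f_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

    def objective(x):                      # stand-in for an expensive black-box function
        return np.sin(3 * x) + 0.5 * x ** 2

    rng = np.random.default_rng(1)
    X = rng.uniform(-2, 2, size=(5, 1))    # initial design of already-evaluated points
    y = objective(X).ravel()

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    X_cand = np.linspace(-2, 2, 200).reshape(-1, 1)
    ei = expected_improvement(X_cand, gp, f_best=y.min())
    x_next = X_cand[np.argmax(ei)]         # point proposed for the next expensive evaluation
    print("next point to evaluate:", x_next)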

Besides both being successful black-box and derivative-free optimizers, EAs and BO have more similarities. They can both handle multiple objectives and noise. EAs have been enhanced with surrogate models (including Gaussian processes) to better handle expensive function evaluations, and EAs are often used within BO to optimize the acquisition function.

The tutorial will introduce the general BO framework for black-box optimisation with and without noise, specifically highlighting the similarities and differences to evolutionary computation. The most commonly used acquisition functions will be explained, and how they are optimised using, e.g., evolutionary algorithms. Furthermore, we will cover multiobjective Bayesian optimisation, with a particular focus on noise handling strategies, and give examples of practical applications such as simulation-optimisation and hyperparameter optimisation.

Jürgen Branke

Jürgen Branke is Professor of Operational Research and Systems at Warwick Business School (UK). His main research interests are the adaptation of metaheuristics to problems under uncertainty (including optimization in stochastic and dynamic environments) as well as evolutionary multiobjective optimization. Prof. Branke has been an active researcher in evolutionary computation for 25 years, and has published more than 200 papers in international peer-reviewed journals and conferences. He is editor of the ACM Transactions on Evolutionary Learning and Optimization, area editor of the Journal of Heuristics and associate editor of the IEEE Transactions on Evolutionary Computation and the Evolutionary Computation Journal.

Sebastian Rojas Gonzalez

Sebastian Rojas Gonzalez is currently a post-doctoral research fellow at the Surrogate Modeling Lab at Ghent University, Belgium. Until 2020 he worked at the Department of Decision Sciences of KU Leuven, Belgium, on a PhD in multiobjective simulation-optimisation. He previously obtained an M.Sc. in Applied Mathematics from the Harbin Institute of Technology in Harbin, China, and a master's degree in Industrial Systems Engineering from the Costa Rica Institute of Technology in 2012. Since 2016 his research work has centered on developing stochastic optimization algorithms assisted by machine learning techniques for multi-criteria decision making.

Ivo Couckuyt

Ivo Couckuyt received an M.Sc. degree in Computer Science from the University of Antwerp in 2007. In 2013 he finished a PhD degree working at the Internet Technology and Data Science Lab (IDLab) at Ghent University, Belgium. He was attached to Ghent University – imec, where he worked as a post-doctoral research fellow until 2021. Since September 2021, he has been an associate professor in the IDLab. His main research interests are automation in machine learning, data analytics, data-efficient machine learning and surrogate modeling algorithms.

Benchmarking and analyzing iterative optimization heuristics with IOHprofiler

Comparing and evaluating optimization algorithms is an important part of evolutionary computation, and requires a robust benchmarking setup to be done well. IOHprofiler supports researchers in this task by providing an easy-to-use, interactive, and highly customizable environment for benchmarking iterative optimizers.

IOHprofiler is designed as a modular benchmarking tool. The experimenter module provides easy access to common problem sets (e.g. BBOB functions) and modular logging functionality that can be easily combined with other optimization functions. The resulting logs (and logs from other platforms, e.g. COCO and Nevergrad) are fully interoperable with the IOHanalyzer, which provides access to highly interactive performance analysis, in the form of a wide array of visualizations and statistical analyses. A GUI, hosted at https://iohanalyzer.liacs.nl/, makes these analysis tools easy to access. Data from many repositories (e.g. COCO, Nevergrad) are pre-processed, such that the effort required to compare performance to existing algorithms is greatly reduced.
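
As a rough outline of how such an experiment can be set up programmatically, the sketch below uses the Python interface of IOHexperimenter (the "ioh" package) to run random search on a BBOB problem while logging data for IOHanalyzer; the exact class and argument names are assumptions based on recent versions of the package and may need adjusting against the official documentation.

    import numpy as np
    import ioh  # pip install ioh -- the Python interface of IOHexperimenter

    # BBOB Sphere function, instance 1, dimension 5 (selection by name, as provided
    # by the ioh package; argument names assumed from recent package versions).
    problem = ioh.get_problem("Sphere", instance=1, dimension=5)

    # Logger that writes performance data in a format IOHanalyzer can read.
    logger = ioh.logger.Analyzer(root="ioh_data", folder_name="random_search",
                                 algorithm_name="random-search")
    problem.attach_logger(logger)

    rng = np.random.default_rng(42)
    for run in range(5):                      # independent runs on the same instance
        for _ in range(1000):                 # evaluation budget per run
            x = rng.uniform(-5, 5, size=5)    # random search in the BBOB domain [-5, 5]^5
            problem(x)                        # each call is logged automatically
        problem.reset()                       # close the run and reset the best-so-far value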

This tutorial will introduce the key features of IOHprofiler by providing background information on benchmarking in EC and showing how this can be done using the modules of IOHprofiler. The key components will be highlighted and demonstrated by the organizers. Guided examples will be provided to highlight the many aspects of algorithm performance which can be explored using the interactive GUI.

Carola Doerr

Carola Doerr, formerly Winzen, is a CNRS research director at Sorbonne Université in Paris, France. Carola's main research activities are in the analysis of black-box optimization algorithms, both by mathematical and by empirical means. Carola is associate editor of IEEE Transactions on Evolutionary Computation, ACM Transactions on Evolutionary Learning and Optimization (TELO) and board member of the Evolutionary Computation journal. She is/was program chair for the BBSR track at GECCO 2024, the GECH track at GECCO 2023, for PPSN 2020, FOGA 2019 and for the theory tracks of GECCO 2015 and 2017. She has organized Dagstuhl seminars and Lorentz Center workshops. Together with Pascal Kerschke, Carola leads the 'Algorithm selection and configuration' working group of COST action CA22137. Carola's works have received several awards, among them the CNRS bronze medal, the Otto Hahn Medal of the Max Planck Society, best paper awards at GECCO, CEC, and EvoApplications.

Diederick Vermetten

Diederick Vermetten is a PhD student at LIACS. He is part of the core development team of IOHprofiler, with a focus on the IOHanalyzer. His research interests include benchmarking of optimization heuristics, dynamic algorithm selection and configuration as well as hyperparameter optimization.

Jacob de Nobel

Jacob de Nobel is a PhD student at LIACS, and is currently one of the core developers for the IOHexperimenter. His research concerns the real world application of optimization algorithms for finding better speech encoding strategies for cochlear implants, which are neuroprosthesis for people with profound hearing loss.

Thomas Bäck

Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he is head of the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 - 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas Bäck has more than 350 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996), Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation, and most recently, the Handbook of Natural Computing. He is also editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.

Coevolutionary Computation for Adversarial Deep Learning

In recent years, machine learning with Generative Adversarial Networks (GANs) has been recognized as a powerful method for generative modeling. Generative modeling is the problem of estimating the underlying distribution of a set of samples. GANs accomplish this using unsupervised learning. They have also been extended to handle semi-supervised and fully supervised learning paradigms. GANs have been successfully applied to many domains. They can generate novel images (e.g., image colorization or super-resolution, photograph editing, and text-to-image translation), sound (e.g., voice translation and music generation), and video (e.g., video-to-video translation, deep fakes generation, and AI-assisted video calls), finding application in domains of multimedia information, engineering, science, design, art, and games.

GANs are an adversarial paradigm. Two NNs compete with each other, using antagonistic loss functions to train their parameters with gradient descent. This connects them to evolution because evolution also exhibits adversarial engagements and competitive coevolution. In fact, the evolutionary computation community’s study of coevolutionary pathologies and its work on competitive and cooperative coevolutionary algorithms offers a means of solving convergence impasses often encountered in GAN training.
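
To give a feel for the coevolutionary connection, the sketch below is a deliberately simplified toy (our own illustration, not Lipizzaner code): two populations evolve against each other, and each individual is scored by its average payoff against opponents sampled from the other population, mirroring the generator/discriminator arms race.

    import numpy as np

    rng = np.random.default_rng(0)
    POP, GENERATIONS, OPPONENTS = 20, 50, 5

    def payoff(attacker, defender):
        """Zero-sum toy game: the attacker wants to be far from the defender,
        the defender wants to be close (a stand-in for generator vs. discriminator)."""
        return abs(attacker - defender)

    def reproduce(population, fitness_values):
        """Truncation selection followed by Gaussian mutation."""
        parents = population[np.argsort(fitness_values)[-POP // 2:]]
        children = rng.choice(parents, POP) + rng.normal(0, 0.05, POP)
        return np.clip(children, 0, 1)

    attackers = rng.uniform(0, 1, POP)
    defenders = rng.uniform(0, 1, POP)

    for generation in range(GENERATIONS):
        # Each individual is evaluated against a sample of opponents from the other population.
        sampled_defenders = rng.choice(defenders, OPPONENTS)
        sampled_attackers = rng.choice(attackers, OPPONENTS)
        attacker_fitness = np.array([np.mean([payoff(a, d) for d in sampled_defenders]) for a in attackers])
        defender_fitness = np.array([np.mean([-payoff(a, d) for a in sampled_attackers]) for d in defenders])
        attackers = reproduce(attackers, attacker_fitness)
        defenders = reproduce(defenders, defender_fitness)

    print("mean attacker value:", attackers.mean(), "mean defender value:", defenders.mean())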

In this tutorial we will explain:
(a) The main concepts of generative modeling and adversarial learning.
(b) GAN gradient-based training and the main pathologies that prevent ideal convergence. Specifically, we will explain mode collapse, oscillation, and vanishing gradients.
(c) Coevolutionary algorithms and how they can be applied to train GANs. Specifically, we will explain how algorithm enhancements address non-ideal convergence.
(d) To demonstrate, we will draw upon the open-source Lipizzaner framework (url: http://lipizzaner.csail.mit.edu/). This framework is easy to use and extend. It sets up a spatial grid of communicating populations of GANs.
(e) Students will be given the opportunity to set up and use the Lipizzaner framework during the tutorial by means of a Jupyter notebook expressly developed for teaching purposes.


Jamal Toutouh

Jamal Toutouh is a Research Assistant Professor at the University of Málaga (Spain). Previously, he was a Marie Skłodowska Curie Postdoctoral Fellow at Massachusetts Institute of Technology (MIT) in the USA, at the MIT CSAIL Lab. He obtained his Ph.D. in Computer Engineering at the University of Malaga (Spain), which was awarded the 2018 Best Spanish Ph.D. Thesis in Smart Cities. His dissertation focused on the application of Machine Learning methods inspired by Nature to address Smart Mobility problems. His current research explores the combination of Nature-inspired gradient-free and gradient-based methods to address Generative Modelling and Adversarial Machine Learning. The main idea is to devise new algorithms to improve the efficiency and efficacy of the state-of-the-art methodology by mainly applying evolutionary computation and related techniques, such as particle swarm optimization, in the form of Evolutionary Machine Learning approaches. Besides, he works on the application of Machine Learning to address problems related to Smart Mobility, Smart Cities, and Climate Change.

Una-May O’Reilly

Dr. Una-May O'Reilly is the leader of ALFA Group at Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab. An evolutionary computation researcher for 20+ years, she is broadly interested in adversarial intelligence — the intelligence that emerges and is recruited while learning and adapting in competitive settings. Her interest has led her to study settings where security is under threat, for which she has developed machine learning algorithms that variously model the arms races of tax compliance and auditing, malware and its detection, cyber network attacks and defenses, and adversarial paradigms in deep learning. She is passionately interested in programming and genetic programming. She is a recipient of the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe and the ACM SIGEVO Award Recognizing Outstanding Achievements in Evolutionary Computation. Devoted to the field and committed to its growth, she served on the ACM SIGEVO executive board from SIGEVO's inception and held different officer positions before retiring from it in 2023. She co-founded the annual workshops for Women@GECCO and has proudly watched their evolution to Women+@GECCO. She was on the founding editorial boards and continues to serve on the editorial boards of Genetic Programming and Evolvable Machines, and ACM Transactions on Evolutionary Learning and Optimization. She has received a GECCO best paper award and a GECCO test of time award. She is honored to be a member of SPECIES, a member of the Julian Miller Award committee, and to chair the 2023 and 2024 committees selecting SIGEVO Awards Recognizing Outstanding Achievements in Evolutionary Computation.

Constraint-Handling Techniques used with Evolutionary Algorithms

Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires a careful fine tuning of the penalty factors to be used. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low) or may quickly reach the feasible region but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe).
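
A minimal sketch of the penalty idea (our own illustrative example with a toy problem, not code from the tutorial): infeasible solutions are penalised in proportion to their total constraint violation, and the penalty factor must be tuned with care for exactly the reasons described above.

    def penalized_fitness(x, objective, constraints, penalty_factor=100.0):
        """Static penalty: minimise objective(x) plus a weighted sum of constraint
        violations. Constraints are of the form g(x) <= 0, so positive values of
        g(x) measure by how much a constraint is violated."""
        violation = sum(max(0.0, g(x)) for g in constraints)
        return objective(x) + penalty_factor * violation

    # Toy problem (assumed for illustration): minimise x^2 subject to x >= 1.
    objective = lambda x: x ** 2
    constraints = [lambda x: 1.0 - x]          # g(x) = 1 - x <= 0  <=>  x >= 1
    print(penalized_fitness(0.5, objective, constraints))   # infeasible, gets penalised
    print(penalized_fitness(1.2, objective, constraints))   # feasible, no penalty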

This has motivated the development of a number of approaches to incorporate constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics such as the importance of maintaining diversity, current benchmarks and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.) will also be discussed (as time allows).
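
One of the feasibility-based tournament schemes mentioned above can be stated very compactly; the sketch below is our own illustration of the widely used feasibility rules (popularised by Deb), not code from the tutorial.

    def feasibility_tournament(ind_a, ind_b):
        """Return the tournament winner according to feasibility rules:
        1) a feasible solution beats an infeasible one,
        2) between two feasible solutions, the better objective value wins,
        3) between two infeasible solutions, the smaller total violation wins.
        Each individual is an (objective_value, total_violation) pair; minimisation assumed."""
        (fa, va), (fb, vb) = ind_a, ind_b
        if va == 0.0 and vb > 0.0:
            return ind_a
        if vb == 0.0 and va > 0.0:
            return ind_b
        if va == 0.0 and vb == 0.0:
            return ind_a if fa <= fb else ind_b
        return ind_a if va <= vb else ind_b

    print(feasibility_tournament((3.0, 0.0), (1.0, 0.5)))   # feasible beats infeasible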

Carlos Coello Coello

Carlos Artemio Coello Coello received a PhD in Computer Science from Tulane University (USA) in 1996. His research has mainly focused on the design of new multi-objective optimization algorithms based on bio-inspired metaheuristics, which is an area in which he has made pioneering contributions. He currently has over 500 publications which, according to Google Scholar, report over 58,800 citations (with an h-index of 96). He has received several awards, including the National Research Award (in 2007) from the Mexican Academy of Science (in the area of exact sciences), the 2009 Medal to the Scientific Merit from Mexico City's congress, the Ciudad Capital: Heberto Castillo 2011 Award for scientists under the age of 45, in Basic Science, the 2012 Scopus Award (Mexico's edition) for being the most highly cited scientist in engineering in the 5 years previous to the award, and the 2012 National Medal of Science in Physics, Mathematics and Natural Sciences from Mexico's presidency (this is the most important award that a scientist can receive in Mexico). He also received the Luis Elizondo Award from the Instituto Tecnológico de Monterrey in 2019. He is the recipient of the prestigious 2013 IEEE Kiyo Tomiyasu Award, "for pioneering contributions to single- and multiobjective optimization techniques using bioinspired metaheuristics", and of the 2016 World Academy of Sciences (TWAS) Award in Engineering Sciences. He has been an IEEE Fellow since January 2011. He is also the recipient of the 2021 IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award. He is also Associate Editor of several international journals including Evolutionary Computation and the IEEE Transactions on Emerging Topics in Computational Intelligence. Since January 2021, he has been the Editor-in-Chief of the IEEE Transactions on Evolutionary Computation. He is currently Full Professor with distinction at the Computer Science Department of CINVESTAV-IPN in Mexico City, Mexico.

Evolution of Neural Networks

Evolution of artificial neural networks has recently emerged as a powerful technique in two areas. First, while the standard value-function based reinforcement learning works well when the environment is fully observable, neuroevolution makes it possible to disambiguate hidden state through memory. Such memory makes new applications possible in areas such as robotic control, game playing, and artificial life. Second, deep learning performance depends crucially on the network design, i.e. its architecture, hyperparameters, and other elements. While many such designs are too complex to be optimized by hand, neuroevolution can be used to do so automatically. Such evolutionary AutoML can be used to achieve good deep learning performance even with limited resources, or state-of-the art performance with more effort. It is also possible to optimize other aspects of the architecture, like its size, speed, or fit with hardware. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) methods for neural architecture search and evolutionary AutoML, and (3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.
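
As a small, self-contained example of the first family of methods (evolving the weights of a fixed-topology network), the following sketch applies a simple (1+λ) evolution strategy with Gaussian mutation to the XOR task; it is our own toy illustration with an assumed 2-2-1 architecture, not material from the tutorial.

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([0, 1, 1, 0], dtype=float)            # XOR targets

    def forward(weights, x):
        """Fixed 2-2-1 feed-forward network with tanh hidden units and a sigmoid output."""
        W1 = weights[:4].reshape(2, 2)
        b1 = weights[4:6]
        W2 = weights[6:8]
        b2 = weights[8]
        h = np.tanh(x @ W1 + b1)
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

    def fitness(weights):
        preds = np.array([forward(weights, x) for x in X])
        return -np.mean((preds - Y) ** 2)               # higher is better (negative MSE)

    rng = np.random.default_rng(0)
    parent = rng.normal(0, 1, 9)                        # 9 weights in total
    for generation in range(500):                       # (1+lambda) ES with lambda = 10
        offspring = parent + rng.normal(0, 0.3, (10, 9))
        candidates = np.vstack([offspring, parent])
        parent = max(candidates, key=fitness)

    print("final MSE on XOR:", -fitness(parent))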

Risto Miikkulainen

Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and Associate VP of Evolutionary AI at Cognizant. He received an M.S. in Engineering from Helsinki University of Technology (now Aalto University) in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His current research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing and vision; he is an author of over 450 articles in these research areas. At Cognizant, and previously as CTO of Sentient Technologies, he is scaling up these approaches to real-world problems. Risto is an IEEE Fellow, recipient of the IEEE CIS EC Pioneer Award, INNS Gabor Award, ISAL Outstanding Paper of the Decade Award, as well as 10 Best-Paper Awards at GECCO.

Evolutionary Art and Design in the Machine Learning Era

In recent years, there has been a surge in interest in Computational Creativity, with a particular emphasis on the adoption of Machine Learning (ML) techniques, particularly Deep Learning, in the realms of art and design. Generative Artificial Intelligence (AI) models such as Midjourney, Stable Diffusion and DALL-E, to name a few, are currently widespread and can be used to create a wide range of artefacts effortlessly.

While current generative ML models have achieved impressive results, they are not without their flaws. As this tutorial will show, their inclination toward imitation rather than innovation often results in the generation of stock images rather than genuinely creative pieces.

These limitations open the door for the application of evolutionary computation techniques, including evolutionary machine learning (EML).

The goals of this tutorial encompass: (i) present an overview of the current state of the art in Generative AI, distinguishing between data-driven models, such as deep learning models, and non-data-driven approaches, like most evolutionary methods; (ii) identify the main challenges and opportunities for the application of EML in the fields of Art and Design, which includes the combination of evolutionary approaches with generative ML; (iii) present concrete examples of hybridization underscoring the uniqueness of their results; (iv) identify open challenges in the field.

In particular, we will address three main pillars for the development of creative and co-creative applications, namely representation, quality assessment, and user interaction. We will place a particular emphasis on applications and techniques that expand the creative possibilities of the user and lead to novel and unforeseen results.

We will also reflect upon the gap between academia and the real-world application of evo art approaches. We will provide examples of how to bridge it, either by incorporating evolutionary techniques in the artistic and design processes or by making them the ultimate goal.

Lastly, we will share our experience of using EML to create the very first AI-designed coin, and provide participants access to our EML art frameworks. We are talking about a real coin, more precisely a 10€ silver coin minted by the Portuguese "House of Coins". We are not talking about a medal; it is legal currency, with all that implies (approval by the government, etc.). The coin will be released on December 11, so it will be possible to bring it to the tutorial.

Penousal Machado

Penousal Machado leads the Cognitive and Media Systems group at the University of Coimbra. His research interests include Evolutionary Computation, Computational Creativity, and Evolutionary Machine Learning. In addition to the numerous scientific papers in these areas, his works have been presented in venues such as the National Museum of Contemporary Art (Portugal) and the “Talk to me” exhibition of the Museum of Modern Art, NY (MoMA).

João Correia

João Correia is an Assistant Professor at the University of Coimbra, a researcher of the Computational Design and Visualization Lab and a member of the Evolutionary and Complex Systems (ECOS) group of the Centre for Informatics and Systems of the same university. He holds a PhD in Information Science and Technology from the University of Coimbra and an MSc and BS in Informatics Engineering from the same university. His main research interests include Evolutionary Computation, Machine Learning, Adversarial Learning, Computer Vision and Computational Creativity. He is involved in different program committees of international conferences in the areas of Evolutionary Computation, Artificial Intelligence, Computational Art and Computational Creativity, is a reviewer for various conferences and journals in these areas, namely GECCO and EvoStar, has served as a remote reviewer for European Research Council grants, and is an executive board member of SPECIES. He was also the publicity chair and chair of the International Conference on Evolutionary Art, Music and Design, and is currently the publicity chair for EvoStar - The Leading European Event on Bio-Inspired Computation and chair of EvoApplications, the International Conference on the Applications of Evolutionary Computation. Furthermore, he has authored and co-authored several articles in international conferences and journals on Artificial Intelligence and Evolutionary Computation. He is involved in national and international projects concerning Evolutionary Computation, Machine Learning, Generative Models, Computational Creativity and Data Science.

Evolutionary Bilevel Optimization: Algorithms and Applications

Many practical optimization problems should better be posed as bilevel optimization problems in which there are two levels of optimization tasks. A solution at the upper level is feasible if the corresponding lower level variable vector is optimal for the lower level optimization problem. Consider, for example, an inverted pendulum problem for which the motion of the platform relates to the upper level optimization problem of performing the balancing task in a time-optimal manner. For a given motion of the platform, whether the pendulum can be balanced at all becomes a lower level optimization problem of maximizing stability margin. Such nested optimization problems are commonly found in transportation, engineering design, game playing and business models. They are also known as Stackelberg games in the operations research community. These problems are too complex to be solved using classical optimization methods simply due to the "nestedness" of one optimization task into another.

Evolutionary Algorithms (EAs) provide some amenable ways to solve such problems due to their flexibility and ability to handle constrained search spaces efficiently. Clearly, EAs have an edge in solving such difficult yet practically important problems. In the recent past, there has been a surge in research activities towards solving bilevel optimization problems. In this tutorial, we will introduce principles of bilevel optimization for single and multiple objectives, and discuss the difficulties in solving such problems in general. With a brief survey of the existing literature, we will present a few viable evolutionary algorithms for both single- and multi-objective bilevel optimization. Our recent studies on bilevel test problems and some application studies will be discussed. Finally, a number of immediate and future research ideas on bilevel optimization will also be highlighted.
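
To make the "nestedness" concrete, here is a deliberately tiny nested-search sketch (our own toy illustration with an assumed leader/follower problem, not from the tutorial): every upper-level candidate is evaluated only after a lower-level search has produced the follower's best response.

    import numpy as np

    rng = np.random.default_rng(0)

    def lower_objective(x, y):
        """Follower's problem: for a given leader decision x, minimise (y - x)^2."""
        return (y - x) ** 2

    def upper_objective(x, y):
        """Leader's problem, evaluated at the follower's best response y*(x)."""
        return x ** 2 + y ** 2

    def solve_lower_level(x, budget=200):
        """Simple random search as a stand-in for the lower-level optimiser."""
        ys = rng.uniform(-5, 5, budget)
        return ys[np.argmin(lower_objective(x, ys))]

    # Upper level: (1+1)-style random perturbation search over the leader decision x.
    best_x = rng.uniform(-5, 5)
    best_F = upper_objective(best_x, solve_lower_level(best_x))
    for _ in range(100):
        cand_x = best_x + rng.normal(0, 0.5)
        cand_F = upper_objective(cand_x, solve_lower_level(cand_x))   # nested evaluation
        if cand_F < best_F:
            best_x, best_F = cand_x, cand_F

    print("leader decision:", best_x, "upper objective:", best_F)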

Kalyanmoy Deb

Kalyanmoy Deb is Koenig Endowed Chair Professor at the Department of Electrical and Computer Engineering at Michigan State University, USA. Prof. Deb's research interests are in evolutionary optimization and its application in multi-criterion optimization, modeling, and machine learning. He has been a visiting professor at various universities across the world including the University of Skövde in Sweden, Aalto University in Finland, Nanyang Technological University in Singapore, and IITs in India. He was awarded the IEEE Evolutionary Computation Pioneer Award for his sustained work in EMO, the Infosys Prize, the TWAS Prize in Engineering Sciences, the CajAstur Mamdani Prize, the Distinguished Alumni Award from IIT Kharagpur, the Edgeworth-Pareto Award, the Bhatnagar Prize in Engineering Sciences, and the Bessel Research Award from Germany. He is a fellow of IEEE, ASME, and three Indian science and engineering academies. He has published over 548 research papers, with over 149,000 Google Scholar citations and an h-index of 123. He is on the editorial boards of 18 major international journals. More information about his research contribution can be found at https://www.coin-lab.org.

Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition

The intertwining disciplines of image analysis, signal processing and pattern recognition are major fields of computer science, computer engineering and electrical and electronic engineering, with past and on-going research covering a full range of topics and tasks, from basic research to a huge number of real-world industrial applications.

Among the techniques studied and applied within these research fields, evolutionary computation (EC), including evolutionary algorithms, swarm intelligence and other paradigms, is playing an increasingly relevant role. Recently, evolutionary deep learning has also attracted considerable attention in these fields. The terms Evolutionary Image Analysis and Signal Processing and Evolutionary Computer Vision are more and more commonly accepted as descriptors of a clearly defined research area and family of techniques and applications. This has also been favoured by the recent availability of computing hardware such as GPUs and grid/cloud/parallel computing systems, whose architecture and computation paradigm fit EC algorithms extremely well, alleviating the intrinsically heavy computational burden imposed by such techniques and even allowing for real-time applications.

The tutorial will introduce the general framework within which Evolutionary Image Analysis, Signal Processing and Pattern Recognition can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include edge detection, segmentation, object tracking, object recognition, motion detection, image classification and recognition. EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, evolutionary multi-objective optimisation as well as memetic/hybrid paradigms. We will focus on the use of evolutionary deep learning for image analysis --- this includes automatically learning the architectures, parameters and transfer functions of convolutional neural networks (and autoencoders and genetic programming if time allows). The use of GPUs for real-time/fast object classification will also be discussed. We will show how such EC techniques can be effectively applied to image analysis and signal processing problems and provide promising results. Basic deep network-like approaches will also be discussed for GP, in which classification can be improved by co-evolving an embedding of the input pattern and a classifier at the same time, or parametric regression can be used to build embeddings for signals or 2D patterns.

Stefano Cagnoni

Stefano Cagnoni graduated in Electronic Engineering at the University of Florence, Italy, where he also obtained a PhD in Biomedical Engineering and was a postdoc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004. Recent research grants include: a grant from Regione Emilia-Romagna to support research on industrial applications of Big Data Analysis; the co-management of industry/academy cooperation projects, namely the development, with Protec srl, of a new-generation computer vision-based fruit sorter and, with the Italian Railway Network Society (RFI) and Camlin Italy, of an automatic inspection system for train pantographs; and an EU-funded “Marie Curie Initial Training Network” grant for a four-year research training project in Medical Imaging using Bio-Inspired and Soft Computing. He was Editor-in-chief of the "Journal of Artificial Evolution and Applications" from 2007 to 2010. From 1999 to 2018, he was chair of EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, then a track of the EvoApplications conference. From 2005 to 2020, he co-chaired MedGEC, a workshop on medical applications of evolutionary computation at GECCO. He is a co-editor of journal special issues dedicated to Evolutionary Computation for Image Analysis and Signal Processing, and a member of the Editorial Board of the journals “Evolutionary Computation” and “Genetic Programming and Evolvable Machines”. He has been awarded the "Evostar 2009 Award" in recognition of the most outstanding contribution to Evolutionary Computation.

Ying Bi

Ying Bi is currently a professor at Zhengzhou University, China. She received her PhD degree in 2020 from the Victoria University of Wellington (VUW), New Zealand. Her research focuses mainly on evolutionary computer vision and machine learning. She has published an authored book on genetic programming for image classification and over 50 papers in fully refereed journals and conferences. Dr Bi is currently the Vice-Chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing, and a member of the IEEE CIS Task Force on Evolutionary Computation for Feature Selection and Construction. She is serving as the workshop chair of IEEE CEC 2024, organizer of the EDMML workshop in IEEE ICDM 2023, 2022, and 2021, and co-chair of the special session on ECVIP at IEEE CEC 2023, 2022 and IEEE CIMSIVP at IEEE SSCI 2023, 2022. She is serving as guest editor for two international journals. She has been serving as an organizing committee member of IEEE CEC 2019 and Australasian AI 2018, PC member/reviewer of over ten conferences and a reviewer of over twenty international journals.

Yanan Sun

Yanan Sun is a professor at Sichuan University, China. He has been a research postdoc at Victoria University of Wellington, New Zealand. His research focuses mainly on evolutionary neural architecture search. He has published over 70 papers in fully refereed journals and conferences, including IEEE TEVC, IEEE TNNLS, IEEE TCYB, NeurIPS, CVPR, ICCV, GECCO, and CEC. Twelve of the published papers have been selected as ESI Hot Paper, ESI Highly Cited Paper, IEEE CIS Chengdu Section Best Paper, AJCAI2024 Spotlight Paper, and MLMI2022 Best Paper. He is the founding chair of the IEEE CIS Task Force on Evolutionary Deep Learning and Applications. He is the leading chair of the special session on EDLA at IEEE CEC 2019, 2020, 2021, 2022, and 2024, and the symposium on ENASA at IEEE SSCI 2019-2023. He is an associate editor of IEEE TEVC, an associate editor of IEEE TNNLS, and an editorial member of Memetic Computing.

Evolutionary Computation for Feature Selection and Feature Construction

In data mining/big data and machine learning, many real-world problems such as bio-data classification and biomarker detection, image analysis, and text mining often involve a large number of features/attributes. However, not all the features are essential since many of them are redundant or even irrelevant, and the useful features are typically not equally important. Using all the features for classification or other data mining tasks typically does not produce good results due to the high dimensionality and the large search space. This problem can be solved by feature selection to select a small subset of original (relevant) features or feature construction to create a smaller set of high-level features using the original low-level features.

Feature selection and construction are very challenging tasks due to the large search space and feature interaction problems. Exhaustive search for the best feature subset of a given dataset is practically impossible in most situations. A variety of heuristic search techniques have been applied to feature selection and construction, but most of the existing methods still suffer from stagnation in local optima and/or high computational cost. Due to the global search potential and heuristic guidelines, evolutionary computation techniques such as genetic algorithms, genetic programming, particle swarm optimisation, ant colony optimisation, differential evolution and evolutionary multi-objective optimisation have recently been used for feature selection and construction for dimensionality reduction, and achieved great success. Many of these methods select/construct only a small number of important features, produce higher accuracy, and generate small models that are easy to understand/interpret and efficient to run on unseen data. Evolutionary computation techniques have now become an important means for handling big dimensionality issues where feature selection and construction are required. Furthermore, feature selection and dimensionality reduction have also become a main approach to explainable machine learning and interpretable AI.
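
As an illustration of wrapper-style evolutionary feature selection, the following sketch (our own minimal example; the synthetic dataset and k-NN classifier are chosen purely for illustration) evolves a binary mask over features and scores each mask by cross-validated accuracy on the selected features.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                               n_redundant=10, random_state=0)
    rng = np.random.default_rng(0)

    def fitness(mask):
        """Cross-validated accuracy using only the selected features,
        with a small penalty that encourages smaller feature subsets."""
        if mask.sum() == 0:
            return 0.0
        acc = cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()
        return acc - 0.01 * mask.mean()

    pop = rng.integers(0, 2, size=(20, X.shape[1]))     # population of binary masks
    for generation in range(30):
        fits = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(fits)[-10:]]           # truncation selection
        children = parents[rng.integers(0, 10, 20)].copy()
        flips = rng.random(children.shape) < 1.0 / X.shape[1]   # bit-flip mutation
        children[flips] = 1 - children[flips]
        pop = children

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("selected features:", np.flatnonzero(best))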


The tutorial will introduce the general framework within which evolutionary feature selection and construction can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include bio-data classification and biomarker detection, image analysis and pattern classification, symbolic regression, network security and intrusion detection, and text mining. EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, differential evolution, ant colony optimisation, artificial bee colony optimisation, and evolutionary multi-objective optimisation. We will show how such evolutionary computation techniques (with a focus on particle swarm optimisation and genetic programming) can be effectively applied to feature selection/construction and dimensionality reduction and provide promising results.

Bing Xue

Bing Xue is currently Professor of Artificial Intelligence, and Deputy Head of School in the School of Engineering and Computer Science at Victoria University of Wellington. Her research focuses mainly on evolutionary computation, machine learning, big data, feature selection/learning, evolving neural networks, explainable AI and their real-world applications. Bing has over 300 papers published in fully refereed international journals and conferences, including many highly cited papers and top most-popular papers. Bing is currently the Editor of the IEEE CIS Newsletter, Chair of the Evolutionary Computation Technical Committee, member of the ACM SIGEVO Executive Committee and Chair of the IEEE CIS Task Force on Evolutionary Deep Learning and Applications. She also chaired the IEEE CIS Data Mining and Big Data Technical Committee and the Students Activities Committee, and has been a member of many other committees. She founded and chaired the IEEE CIS Task Force on Evolutionary Feature Selection and Construction, and co-founded and chaired the IEEE CIS Task Force on Evolutionary Transfer Learning and Transfer Optimisation. She has also won a number of awards, including Best Paper Awards from international conferences, the Early Career Award, Research Excellence Award and Supervisor Award from her University, the IEEE CIS Outstanding Early Career Award, the IEEE TEVC Outstanding Associate Editor award, and others. Bing has also served as an Associate/Guest Editor or Editorial Board Member for more than 10 international journals, including IEEE TEVC, ACM TELO, IEEE TETCI, IEEE TAI, and IEEE CIM. She is a key organiser for many international conferences, e.g. Conference Chair of IEEE CEC 2024, Co-ambassador for Women in Data Science NZ 2023, Tutorial Chair for IEEE WCCI 2022, Publication Chair of EuroGP 2022, Track Chair for ACM GECCO 2019-2022, Workshop Chair for IEEE ICDM 2021, General Co-Chair of IVCNZ 2020, Program Co-Chair for KETO 2020, Senior PC of IJCAI 2019-2021, Finance Chair of IEEE CEC 2019, Program Chair of AJCAI 2018, IEEE CIS FASLIP Symposium founder and Chair since 2016, and others. More can be seen on her website.

Mengjie Zhang

Mengjie Zhang is a Fellow of the Royal Society of New Zealand, a Fellow of IEEE, and currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committee of the Faculty of Engineering and School of Engineering and Computer Science. His research is mainly focused on evolutionary computation, particularly genetic programming, particle swarm optimisation and learning classifier systems, with application areas of feature selection/construction and dimensionality reduction, computer vision and image processing, evolutionary deep learning and transfer learning, job shop scheduling, multi-objective optimisation, and clustering and classification with unbalanced and missing data. He is also interested in data mining, machine learning, and web information extraction. Prof Zhang has published over 700 research papers in refereed international journals and conferences in these areas. He has been serving as an associate editor or editorial board member for over 10 international journals including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, the Evolutionary Computation Journal (MIT Press), ACM Transactions on Evolutionary Learning and Optimisation, Genetic Programming and Evolvable Machines (Springer), IEEE Transactions on Emerging Topics in Computational Intelligence, Applied Soft Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer of over 30 international journals. He has been a major chair for eight international conferences. He has also been serving as a steering committee member and a program committee member for over 80 international conferences including all major conferences in evolutionary computation. Since 2007, he has been listed as one of the top ten world genetic programming researchers by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html). He was the Tutorial Chair for GECCO 2014, an AIS-BIO Track Chair for GECCO 2016, an EML Track Chair for GECCO 2017, and a GP Track Chair for GECCO 2020 and 2021. Since 2012, he has been co-chairing several parts of the IEEE CEC, SSCI, and EvoIASP/EvoApplications conferences (he has been involved in major EC conferences such as GECCO, CEC, EvoStar and SEAL). Since 2014, he has been co-organising and co-chairing the special session on evolutionary feature selection and construction at IEEE CEC and SEAL, and has also delivered a keynote/plenary talk for IEEE CEC 2018, IEEE ICAVSS 2018, DOCSA 2019, IES 2017 and the Chinese National Conference on AI in Law 2017. Prof Zhang was the Chair of the IEEE CIS Intelligent Systems Applications Technical Committee, the IEEE CIS Emergent Technologies Technical Committee, and the IEEE CIS Evolutionary Computation Technical Committee; a Vice-Chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing, and the IEEE CIS Task Force on Evolutionary Deep Learning and Applications; and also the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.

Evolutionary computation for stochastic problems

Many optimization problems involve stochastic and uncertain characteristics, and evolutionary algorithms have been successfully applied to a wide range of such problems.

This tutorial will give an overview of different evolutionary computation approaches for dealing with stochastic problems. It will cover theoretical foundations as well as a wide range of concepts and approaches on how to deal with stochastic objective functions and/or stochastic constraints.
The material presented will range from theoretical studies involving runtime analysis to applications of evolutionary algorithms for stochastic problems in real-world scenarios such as engineering and mine planning.

Frank Neumann

Frank Neumann is a professor and the leader of the Optimisation and Logistics group at the University of Adelaide and an Honorary Professorial Fellow at the University of Melbourne. His current position is funded by the Australian Research Council through a Future Fellowship and focuses on AI-based optimisation methods for problems with stochastic constraints. Frank has been the general chair of the ACM GECCO 2016 and co-organised ACM FOGA 2013 in Adelaide. He is an Associate Editor of the journals "Evolutionary Computation" (MIT Press) and ACM Transactions on Evolutionary Learning and Optimization. In his work, he considers algorithmic approaches in particular for combinatorial and multi-objective optimization problems and focuses on theoretical aspects of evolutionary computation as well as high impact applications in the areas of cybersecurity, renewable energy, logistics, and mining.

Aneta Neumann

Aneta Neumann is a researcher in the School of Computer and Mathematical Sciences at the University of Adelaide, Australia, focusing on real-world problems using evolutionary computation methods. She is also part of the Integrated Mining Consortium at the University of Adelaide. Aneta graduated in Computer Science from the Christian-Albrechts-University of Kiel, Germany, and received her PhD from the University of Adelaide, Australia. She served as co-chair of the Real-World Applications track at GECCO 2021 and GECCO 2022, and as co-chair of the Genetic Algorithms track at GECCO 2023. Her main research interests are bio-inspired computation methods, with a particular focus on dynamic and stochastic multi-objective optimization for real-world problems that occur in the mining industry, defence, cybersecurity, creative industries, and public health.

Hemant Kumar Singh

Hemant Kumar Singh is an Associate Professor at the School of Engineering and Technology at the University of New South Wales (UNSW), Australia. He completed his PhD at UNSW in 2011 and his B.Tech in Mechanical Engineering at the Indian Institute of Technology (IIT) Kanpur in 2007. He worked with General Electric Aviation at the John F. Welch Technology Centre as a Lead Engineer during 2011-13. His research interests include the development of evolutionary computation methods to deal with various challenges such as multiple objectives, constraints, uncertainties, hierarchical (bi-level) objectives, and decision-making. He has co-authored over 125 refereed publications on these topics. He is an Associate Editor for IEEE Transactions on Evolutionary Computation and has been in the organizing team of several conferences, e.g., IEEE CEC (Program co-chair 2021), SSCI (MCDM co-chair 2020-23), and ACM GECCO (RWACMO workshop co-chair 2018-21). More details of his research and professional activities can be found at his website.

Evolutionary Computation meets Machine Learning for Combinatorial Optimisation

Combinatorial optimisation is an important research area with many real-world applications such as scheduling, vehicle routing, cloud resource allocation, supply chain management, logistics and transport. Most combinatorial optimisation problems are NP-hard, making it challenging to design effective algorithms to solve them to optimality. In practice, (meta-)heuristic methods, including evolutionary approaches, are therefore widely used to address such problems. Unfortunately, designing an effective and efficient (meta-)heuristic typically requires extensive domain expertise and a lot of trial and error for each different problem variant encountered in the real world.

In recent years, machine learning has emerged as a promising ingredient for solving combinatorial optimisation problems better and/or more easily. First, machine learning can design combinatorial optimisation algorithms automatically by searching for algorithms/heuristics rather than solutions, and the learned algorithms/heuristics can generalise to future unseen problem variants to obtain high-quality solutions. This can greatly reduce the dependence on human expertise and the time needed to manually design effective algorithms. Second, machine learning can learn decision-making policies for dynamic combinatorial optimisation problems (e.g., dispatching rules for dynamic scheduling), which can achieve effectiveness and efficiency simultaneously. Third, machine learning may discover new design patterns and knowledge that can further improve the algorithm design for solving complex combinatorial optimisation problems.
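
As a small illustration of the second point (a sketch with assumed job attributes and an assumed learned expression, not an example taken from the tutorial), a dispatching rule for dynamic scheduling is simply a priority function that can be swapped out while the surrounding dispatching loop stays fixed.

    # Two priority functions; a learned rule would replace the expression
    # while the dispatching loop below stays unchanged.
    def shortest_processing_time(job, now):
        return job["proc_time"]

    def learned_rule(job, now):
        # e.g. an expression found by genetic programming (illustrative only)
        return 0.7 * job["proc_time"] + 0.3 * max(0, job["due"] - now)

    def dispatch(queue, now, rule):
        # Pick the job with the smallest priority value.
        return min(queue, key=lambda job: rule(job, now))

    queue = [{"id": 1, "proc_time": 5, "due": 6},
             {"id": 2, "proc_time": 3, "due": 30},
             {"id": 3, "proc_time": 9, "due": 40}]
    print(dispatch(queue, now=0, rule=shortest_processing_time)["id"])  # -> 2
    print(dispatch(queue, now=0, rule=learned_rule)["id"])              # -> 1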

The aim of this tutorial is two-fold. On the one hand, we will give an overview of how classical metaheuristics may profit from the use of machine learning and provide a few advanced examples. On the other hand, we will introduce how evolutionary machine learning, specifically, can be used for solving combinatorial optimisation problems, including basic design issues and some case studies.

The outline of the tutorial is as follows.

1. Introduction and Background
2. Evolutionary Computation to Learn Combinatorial Optimisation Heuristics
3. Machine Learning to Learn Metaheuristics
4. Challenges and Future Directions

Yi Mei

Yi Mei is an Associate Professor at the School of Engineering and Computer Science, Victoria University of Wellington, Wellington, New Zealand. He received his BSc and PhD degrees from the University of Science and Technology of China in 2005 and 2010, respectively. His research interests include evolutionary computation and learning in scheduling and combinatorial optimisation, hyper-heuristics, genetic programming, automatic algorithm design, and explainable AI. Yi has more than 200 fully refereed publications, including in the top journals in EC and Operations Research (OR) such as IEEE TEVC, IEEE Transactions on Cybernetics, European Journal of Operational Research, and ACM Transactions on Mathematical Software, and in top EC conferences (GECCO). He won an IEEE Transactions on Evolutionary Computation Outstanding Paper Award in 2017, and a Victoria University of Wellington Early Research Excellence Award in 2018. As the sole investigator, he won the 2nd prize of the competition at IEEE WCCI 2014: Optimisation of Problems with Multiple Interdependent Components. He serves as a Vice-Chair of the IEEE CIS Emergent Technologies Technical Committee, and as a member of three IEEE CIS Task Forces and two IEEE CIS Technical Committees. He is an Associate Editor of IEEE Transactions on Evolutionary Computation, an Editorial Board Member/Associate Editor of four other international journals, and a guest editor of a special issue of the Genetic Programming and Evolvable Machines journal. He was an Outstanding Reviewer for Applied Soft Computing in 2015 and 2017, and for IEEE Transactions on Cybernetics in 2018. He is a Fellow of Engineering New Zealand, an ACM Member and an IEEE Senior Member.

Günther Raidl

Günther Raidl is Professor at the Institute of Logic and Computation, TU Wien, Austria, and a member of the Algorithms and Complexity Group. He received his PhD in 1994 and completed his habilitation in Practical Computer Science in 2003 at TU Wien. In 2005 he was appointed to a professorship in combinatorial optimization at TU Wien.

His research interests include algorithms and data structures in general and combinatorial optimization in particular, with a specific focus on metaheuristics, mathematical programming, intelligent search methods, and hybrid optimization approaches. His research work typically combines theory and practice for application areas such as scheduling, network design, transport optimization, logistics, and cutting and packing.

Günther Raidl is an associate editor for the INFORMS Journal on Computing and the ACM Transactions on Evolutionary Learning and Optimization, and is on the editorial board of several journals including Algorithms, Engineering Applications of Artificial Intelligence, and Metaheuristics. He is co-founder and steering committee member of the annual European Conference on Evolutionary Computation in Combinatorial Optimization (EvoCOP). Since 2016 he has also been a founding faculty member of the Vienna Graduate School on Computational Optimization.

Günther Raidl has co-authored a textbook on hybrid metaheuristics and over 190 reviewed articles in scientific journals, books, and conference proceedings, and has co-edited 13 books. More information can be found at http://www.ac.tuwien.ac.at/raidl.

Evolutionary Machine Learning for Interpretable and eXplainable AI

Modern learning systems have demonstrated excellent capabilities in addressing complex real-world problems. Nonetheless, they also present two major challenges: the complexity and the black-box nature of their models. First, the majority of these systems are excessively complex such that only domain experts can develop and efficiently use these systems. Consequently, cost-effective and efficient systems cannot be created for a wide range of businesses. Second, the majority of these systems act as a "black box" and their decision-making process is neither interpretable nor explainable. However, the interpretability of decisions is critical in many real-world domains such as defense, biomedical, and legal matters.

Automated machine learning (AutoML) is a cutting-edge approach to automate the process of applying machine learning to solve real-world problems. It automatically creates an end-to-end machine-learning pipeline that consists of essential stages such as data preprocessing, feature engineering, model selection, and hyperparameter tuning. AutoML enables non-experts to develop, train, and deploy machine learning models for a variety of applications ranging from healthcare and finance to manufacturing and retail. Evolutionary rule-based machine learning (ERBML) stands out for its ability to provide interpretable decisions. The majority of ERBML systems generate niche-based solutions, require less memory, and can be trained using small data sets. A key factor that makes these models interpretable is the generation of human-readable rules. Consequently, the decision-making process of the ERBML systems is interpretable, which is an important step toward eXplainable AI (XAI).

This tutorial serves a dual purpose. First, it offers a gentle introduction to AutoML, covering the essential stages of the end-to-end pipeline, and highlighting its applications. It also includes a walk-through of a simple transparent, end-to-end AutoML pipeline that demonstrates data analysis and algorithm comparison. Second, the tutorial delves into ERBML concepts and implementations to demonstrate how they can be ideal for current applications, especially in the age of interpretable and eXplainable AI. Furthermore, it provides a hands-on experience with one of the state-of-the-art ERBML systems (ExSTraCS) to solve a real-world bioinformatics problem, encompassing the interpretation and statistical analysis of the experimental results.
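
To give a flavour of such human-readable rules, the following sketch uses a simplified ternary rule encoding in the spirit of learning classifier systems; the encoding and the rules are illustrative assumptions and do not reflect the ExSTraCS API.

    # A rule maps a ternary condition ('0', '1', '#' = don't care) to an action.
    def matches(condition, instance):
        return all(c == '#' or c == bit for c, bit in zip(condition, instance))

    rules = [
        ("1#0#", 1),   # IF attr0 = 1 AND attr2 = 0 THEN class 1
        ("0##1", 0),   # IF attr0 = 0 AND attr3 = 1 THEN class 0
    ]

    def classify(instance, rules, default=0):
        for condition, action in rules:
            if matches(condition, instance):
                return action
        return default

    print(classify("1101", rules))   # -> 1, via the first (readable) rule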

Abubakar Siddique

Dr. Siddique's main research lies in creating novel machine learning systems, inspired by the principles of cognitive neuroscience, to provide efficient and scalable solutions for challenging and complex problems in different domains, such as Boolean problems, computer vision, navigation, and bioinformatics. He has shared his expertise by delivering five tutorials and talks at various forums, including the Genetic and Evolutionary Computation Conference (GECCO). Additionally, he serves the academic community as an author for prestigious journals and international conferences, including IEEE Transactions on Cybernetics, IEEE Transactions on Evolutionary Computation, and GECCO.

During his academic journey, Dr. Siddique received the "Student Of The Session" Award, the VUWSA Gold Award, and the "Emerging Research Excellence" Medal. Prior to joining academia, he spent nine years at Elixir Technologies Pakistan, a leading software company based in California (USA). His last role was Principal Software Engineer, in which he led a team of software developers. He developed enterprise-level software for customers such as Xerox, IBM, and Adobe.

Will N. Browne

Prof. Will Browne's research focuses on applied cognitive systems. Specifically, how to use inspiration from natural intelligence to enable computers/machines/robots to behave usefully. This includes cognitive robotics, learning classifier systems, and modern heuristics for industrial application. Prof. Browne has been co-track chair for the Genetics-Based Machine Learning (GBML) track and co-chair for the Evolutionary Machine Learning track at the Genetic and Evolutionary Computation Conference. He has also provided tutorials on Rule-Based Machine Learning and Advanced Learning Classifier Systems at GECCO, chaired the International Workshop on Learning Classifier Systems (LCSs), and lectured graduate courses on LCSs. He co-authored the first textbook on LCSs, Introduction to Learning Classifier Systems (Springer, 2017). Currently, he is Professor and Chair in Manufacturing Robotics at Queensland University of Technology, Brisbane, Queensland, Australia.

Ryan Urbanowicz

Dr. Ryan Urbanowicz is an Assistant Professor of Computational Biomedicine at the Cedars-Sinai Medical Center. His research focuses on the development of machine learning, artificial intelligence automation, data mining, and informatics methodologies, as well as their application to biomedical and clinical data analyses. This work is driven by the challenges presented by large-scale data, complex patterns of association (e.g. epistasis and genetic heterogeneity), data integration, and the essential demand for interpretability, reproducibility, and efficiency in machine learning. His research group has developed a number of machine learning software packages including ReBATE, GAMETES, ExSTraCS, STREAMLINE, and FIBERS. He has been a regular contributor to GECCO since 2009, having (1) provided tutorials on learning classifier systems and the application of evolutionary algorithms to biomedical data analysis, (2) co-chaired the International Workshop on Learning Classifier Systems and a workshop on benchmarking evolutionary algorithms, and (3) co-chaired various tracks. He is also an invested educator, with dozens of educational videos and lectures available on his YouTube channel, and co-author of the textbook "Introduction to Learning Classifier Systems".

Evolutionary Multiobjective Optimization (EMO)

Evolutionary Multiobjective Optimization (EMO) is the commonly used term for the study and development of evolutionary algorithms to tackle optimization problems with at least two conflicting optimization objectives. The first methods were proposed in the 1980s, and the field has gradually emerged as one of the most innovative and popular areas of evolutionary computation, with a reach extending far beyond its niche beginnings. Today, EMO methods are frequently developed and adopted by researchers from other areas of optimization and decision making, and are put to use in a wealth of applications. This tutorial will be a fresh look at the current state of EMO, suitable for those new to the field and those who are experienced but wish to keep up to date with a selected tour of the latest ideas, theory, and applications. We will begin with a gentle introduction to the fundamental ideas, but will neglect a comprehensive history in order to spend more time on the most surprising and most secure results, and on interesting research lines that still need further deep exploration. We are likely to include:
1. Why multiobjective? There are several motivating reasons often neglected
2. Why evolutionary? We will re-examine the usual population-based justification
3. Elitism and archiving, from basics to the latest synthesis of theoretical results (a small archiving sketch follows this list)
4. NSGA-II: reflections on a behemoth, and new theoretical results
5. Performance assessment and benchmarking best practices
6. How many objectives? Why decreasing and increasing the number of objectives can both work (surveying objective reduction and multi-objectivization)
7. Decomposition and cooperative problem solving
8. When and how to include the elusive decision maker (DM), including visualization, and replacing the human decision maker with machines
9. Asynchronous EMO methods (for objectives of differing latency)
10. Tuning and automatic design of EMO algorithms
The tutorial will not be a comprehensive survey of applications, but selected interesting applications will serve to reflect on the challenges in the use of EMO in practice.
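
To make the archiving topic in item 3 concrete, here is a minimal sketch (assuming minimisation of all objectives) of Pareto dominance and an unbounded non-dominated archive; it is only an illustration, not one of the archiving algorithms analysed in the tutorial.

    # Minimisation is assumed for all objectives.
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def update_archive(archive, candidate):
        # Keep only mutually non-dominated points (unbounded archive).
        if any(dominates(a, candidate) for a in archive):
            return archive                   # candidate is dominated: discard it
        return [a for a in archive if not dominates(candidate, a)] + [candidate]

    archive = []
    for point in [(3, 5), (2, 6), (4, 4), (2, 7), (1, 9)]:
        archive = update_archive(archive, point)
    print(archive)                           # [(3, 5), (2, 6), (4, 4), (1, 9)]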

Joshua Knowles

Joshua Knowles is a Principal Research Scientist at Schlumberger Cambridge Research, a UK subsidiary of the energy technology company, SLB. He holds visiting academic positions as Honorary Professor at the Alliance Manchester Business School, The University of Manchester, and as Honorary Senior Fellow at the School of Computer Science, University of Birmingham. Joshua works mainly on evolutionary multiobjective optimization, the subject of his 2002 PhD thesis, on both core and applied topics. On core topics, his work includes contributions to prominent EMO algorithms (PAES and ParEGO), early archiving algorithms and theory including influential work on hypervolume maximization, EMO performance assessment techniques, multiobjective machine learning methods (MOCK and variants), statistical benchmarking of interactive EMO methods via machine decision makers, and the transformation of single-objective problems to multiobjective form ('multiobjectivization') to aid search. His applied work is varied across disciplines and has been published in top journals in astrophysics, computational chemistry, analytical chemistry, systems and synthetic biology, proteins and nucleic acids, theoretical biology, telecommunications, operations research, and others. Prior to joining SLB, he worked at Invenia Labs on energy trading algorithms to reduce spiking and congestion on the US electricity grid. At SLB, his research is focused on the use of multiobjective methods to support development of new energy technologies, particularly in the automation of sequential experiments.

Joshua was a co-recipient of the 2003 IEEE Transactions on Evolutionary Computation Outstanding Paper Award for his 2001 work with David Corne on convergence results for EMO archivers. He was recipient of the same award in 2008 as sole author of the 2006 paper introducing ParEGO. He also received the 2017 ACM SIGEVO Impact Award with David Corne for their 2007 paper on many-objective optimization. Joshua serves on the Steering Committee of the EMO International Conference and will be the General Co-Chair of EMO 2025.

Weijie Zheng

Weijie Zheng is an assistant professor at Harbin Institute of Technology, Shenzhen, China. He received his bachelor's degree (2013) in Mathematics and Applied Mathematics from Harbin Institute of Technology, Harbin, Heilongjiang, China, and his doctoral degree (2018) in Computer Science and Technology from Tsinghua University, Beijing, China. From 2019 to 2021, he was a postdoctoral researcher at Southern University of Science and Technology and the University of Science and Technology of China. From 2021 to 2022, he was a research assistant professor at Southern University of Science and Technology. Since 2022, he has been an assistant professor at Harbin Institute of Technology, Shenzhen.

His current research majorly focuses on the theoretical analysis and design of evolutionary algorithms, such as NSGA-II, binary differential evolution, and estimation-of-distribution algorithms. He has obtained a best paper nomination at GECCO 2018, and co-organized the theory of randomized search heuristics (ThRaSH) seminars Winter 2021 and Spring 2022.

Evolutionary Reinforcement Learning

Many significant and headline breakthroughs in AI over the past decade have been powered in part by Reinforcement Learning (RL). This includes surpassing human-level performance in video games of various complexity, from Atari (Mnih et al. 2013; Nature) to StarCraft (Vinyals et al. 2019; Nature) and strategy games like Chess and Go (Silver et al. 2018; Science). RL has also demonstrated state-of-the-art performance in real-world applications such as control for quadruped robots (Lee et al., 2020; Nature), high-speed drones (Kaufman et al. 2023; Nature) and plasma reactors (Degrave et al. 2022; Nature). Even modern instruction-following language models and chatbots like ChatGPT (Christiano et al. 2017) are possible in part due to RL.

Interestingly, elements of Evolutionary Computation (EC), such as population-based training (Jaderberg et al. 2017) and Quality-Diversity algorithms (Pugh et al. 2016), have been key ingredients in some of these breakthroughs. The combination of EC and RL methods is not new, and it has gained more popularity and interest recently as researchers discover the limitations of the individual approaches, opening up many exciting new research avenues and opportunities.
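
As a toy illustration of one such combination (a sketch with assumed toy dynamics, not taken from the works cited above), a (1+1)-style evolution strategy can train a simple linear policy purely from episode returns, without any gradient information:

    import random

    def rollout(weights, steps=50):
        # Hypothetical task: drive state x towards 0 with action a = w0*x + w1.
        x, total_reward = 5.0, 0.0
        for _ in range(steps):
            action = weights[0] * x + weights[1]
            x = x + 0.1 * action             # toy dynamics
            total_reward -= x * x            # reward: stay near the origin
        return total_reward

    def evolve_policy(generations=200, sigma=0.1):
        best = [random.uniform(-1, 1), random.uniform(-1, 1)]
        best_return = rollout(best)
        for _ in range(generations):
            child = [w + random.gauss(0.0, sigma) for w in best]
            child_return = rollout(child)
            if child_return > best_return:   # (1+1)-ES style acceptance
                best, best_return = child, child_return
        return best, best_return

    print(evolve_policy())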

This tutorial will give an overview of the various synergies and questions addressed when combining EC and RL methods, relying on examples from the literature. Past achievements and major contributions, as well as specific challenges and future opportunities at the intersection of EC and RL, will be presented. The tutorial will in particular focus on:
- What is RL?
- Deep learning for RL
- Neuroevolution for RL
- Meta-learning RL with Evolution
- Quality-Diversity for RL
- Curriculum Learning and Environment Generation using EC and RL
- Other combinations of EC and RL
- Open questions and future challenges

The tutorial will effectively complement the Complex Systems, Neuroevolution, Evolutionary Machine Learning, Genetic Algorithms and Real-World Applications tracks, each of which contains several papers about RL. For instance, 13 of the 180 papers accepted at GECCO 2023, and 9 of 158 in 2022, contained RL elements.

Manon Flageat

Manon Flageat is a final-year PhD candidate at Imperial College London (United Kingdom). Her research focuses on Quality-Diversity algorithms, in particular applied to uncertain environments, as well as Deep Reinforcement Learning and synergies between these two types of learning algorithms. She is also a Teaching Scholar in the Department of Computing, Imperial College London, and has been course organizer of the Reinforcement Learning lecture for three years in a row. She has co-authored papers in journals such as IEEE Transactions on Evolutionary Computation and ACM Transactions on Evolutionary Learning and Optimization, and in venues such as GECCO and ALife. She received a best paper award at GECCO 2023 for work on mixing Quality-Diversity with Reinforcement Learning. She was also a keynote speaker at the EvoRL workshop at GECCO 2023.


Bryan Lim

Bryan is a final-year PhD candidate in the Adaptive and Intelligent Robotics Lab at Imperial College London. His research focuses on open-ended learning systems which can continuously generate a diversity of interesting problems and corresponding novel solutions, with the potential to lead to increasingly intelligent, creative and general-purpose AI systems. To enable such open-ended systems, he works on increasing the efficiency and scalability of Quality-Diversity algorithms, which encourage novelty and diversity to enable more creative search processes. Bryan’s work is at the intersection of reinforcement learning, robotics and evolutionary computation, and he has co-authored papers in venues such as ICLR, ICRA, GECCO, ALIFE and TMLR. Previously, he spent his MEng year abroad at MIT and has an undergraduate degree in Mechanical Engineering from Imperial College London.

Antoine Cully

Antoine Cully is a Lecturer (Assistant Professor) at Imperial College London (United Kingdom). His research is at the intersection between artificial intelligence and robotics. He applies machine learning approaches, like evolutionary algorithms, on robots to increase their versatility and their adaptation capabilities. In particular, he has recently developed Quality-Diversity optimization algorithms to enable robots to autonomously learn large behavioural repertoires. For instance, this approach enabled legged robots to autonomously learn how to walk in every direction or to adapt to damage situations. Antoine Cully received the M.Sc. and the Ph.D. degrees in robotics and artificial intelligence from the Sorbonne Université in Paris, France, in 2012 and 2015, respectively, and the engineer degree from the School of Engineering Polytech’Sorbonne, in 2012. His Ph.D. dissertation received three Best-Thesis awards. He has published several papers in prestigious journals including Nature, IEEE Transactions on Evolutionary Computation, and the International Journal of Robotics Research. His work was featured on the cover of Nature (Cully et al., 2015), received the "Outstanding Paper of 2015" award from the Society for Artificial Life (2016), the French "La Recherche" award (2016), and two Best-Paper awards from GECCO (2021, 2022).

Generative Hyper-heuristics

The automatic design of algorithms has been an early aim of both machine learning and AI, but has proved elusive. The aim of this tutorial is to introduce generative hyper-heuristics as a principled approach to the automatic design of algorithms. Hyper-heuristics are metaheuristics applied to a space of algorithms; i.e., any general heuristic method of sampling a set of candidate algorithms. In particular, this tutorial will demonstrate how to mine existing algorithms to obtain algorithmic primitives for the generative hyper-heuristic to compose new algorithmic solutions from, and how to employ various types of genetic programming to execute the composition process; i.e., the search of program space.

This tutorial will place generative hyper-heuristics in the context of genetic programming - which differs in that it constructs solutions from scratch using atomic primitives - as well as genetic improvement - which takes a program as a starting point and improves on it (a recent direction introduced by William Langdon).

The approach proceeds from the observation that it is possible to define an invariant framework for the core of any class of algorithms (often by examining existing human-written algorithms for inspiration). The variant components of the algorithm can then be generated by genetic programming. Each instance of the framework therefore defines a family of algorithms. While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of this approach, as the template can be chosen to be any executable program and the primitive set can be selected to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach very significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refining of the primitives allows for gradual and directed enlarging of the search space until convergence.
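
As a rough illustration of this template idea (a sketch with assumed primitives, not the framework used in the tutorial), a greedy constructive TSP heuristic can keep its control flow fixed while the city-scoring expression is the generated, variable component:

    import math, random

    # Invariant framework: a greedy constructive TSP heuristic whose only
    # variable component is the scoring function used to pick the next city.
    def construct_tour(cities, score):
        tour, remaining = [0], set(range(1, len(cities)))
        while remaining:
            current = tour[-1]
            nxt = min(remaining, key=lambda c: score(cities[current], cities[c]))
            tour.append(nxt)
            remaining.remove(nxt)
        return tour

    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Two candidate "generated" components; a generative hyper-heuristic would
    # search this space of expressions instead of hand-writing them.
    nearest_neighbour = lambda cur, cand: distance(cur, cand)
    biased_rule = lambda cur, cand: distance(cur, cand) + 0.1 * cand[0]

    cities = [(random.random(), random.random()) for _ in range(20)]
    print(construct_tour(cities, nearest_neighbour))
    print(construct_tour(cities, biased_rule))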

This leads to a technique for mass-producing algorithms that can be customized to the context of end-use. This is perhaps best illustrated as follows: typically a researcher might create a traveling salesperson problem (TSP) algorithm by hand. When executed, this algorithm returns a solution to a specific instance of the TSP. We will describe a method that generates TSP algorithms that are tuned to representative instances of interest to the end-user. This method has been applied to a growing number of domains including: data mining/machine learning, combinatorial problems including bin packing (on- and off-line), Boolean satisfiability, job shop scheduling, exam timetabling, image recognition, black-box function optimization, wind-farm layout, and the automated design of meta-heuristics themselves (from selection and mutation operators to the overall meta-heuristic architecture).

This tutorial will provide a step-by-step guide which takes the novice through the distinct stages of automatic design. Examples will illustrate and reinforce the issues of practical application. This technique has repeatedly produced results which outperform their manually designed counterparts, and a theoretical underpinning will be given to demonstrate why this is the case. Automatic design will become an increasingly attractive proposition as the cost of human design will only increase in-line with inflation, while the speed of processors increases in-line with Moore's law, thus making automatic design attractive for industrial application. Basic knowledge of genetic programming will be assumed.

The field has moved on since this tutorial was started and a number of review articles and frameworks have emerged. A brief overview of these publications will be given.

Daniel Tauritz

Daniel R. Tauritz is an Associate Professor in the Department of Computer Science and Software Engineering at Auburn University (AU), the Director for National Laboratory Relationships in AU's Samuel Ginn College of Engineering, the founding Head of AU’s Biomimetic Artificial Intelligence Research Group (BioAI Group), the founding director of AU’s Biomimetic National Security Artificial Intelligence Laboratory (BONSAI Lab), a cyber consultant for Sandia National Laboratories, a Guest Scientist at Los Alamos National Laboratory (LANL), and founding academic director of the LANL/AU Cyber Security Sciences Institute (CSSI). He received his Ph.D. in 2002 from Leiden University. His research interests include the design of generative hyper-heuristics, competitive coevolution, and parameter control, and the application of computational intelligence techniques in security and defense. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.

John Woodward

John R. Woodward is a lecturer at Queen Mary University of London. Formerly he was a lecturer at the University of Stirling, within the CHORDS group (http://chords.cs.stir.ac.uk/), and was employed on the DAASE project (http://daase.cs.ucl.ac.uk/). Before that he was a lecturer for four years at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, Artificial Intelligence/Machine Learning and, in particular, Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering, which include both theoretical and empirical contributions, and he has given over 50 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN, the RAF and three UK universities.

Instance Space Analysis and Item Response Theory for Algorithm Testing

Standard practice in algorithm testing consists of reporting performance on average across a suite of well-studied benchmark instances. Therefore, the conclusions drawn during this process critically depend on the choice of benchmarks and comparison algorithms. Ideally, benchmark suites are unbiased, challenging, and contain a mix of synthetically generated and real-world-like instances with diverse structural properties. Without this diversity, the conclusions that can be drawn about the expected algorithm performance in future scenarios are necessarily limited. Moreover, reporting performance on average often highlights the strengths, while disguising the weaknesses, of an algorithm. In other words, there are two limitations to the standard benchmarking approach that affect the conclusions that can be drawn: (a) there is no mechanism to assess whether the selected test instances are unbiased and diverse enough to support the conclusions drawn; and (b) there is little opportunity to gain insights into the strengths and weaknesses of algorithms for different types of instances when hidden by on-average performance metrics.

This tutorial introduces Instance Space Analysis (ISA), and Algorithm evaluation using Item Response Theory (AIRT), two complementary methodologies for evaluating algorithm performance and benchmark diversity. ISA aims to improve the way algorithms are evaluated by revealing relationships between the structural properties of problem instances and their impact on the performance of algorithms. ISA offers a more nuanced opportunity to gain insights into algorithm strengths and weaknesses for various types of test instances, and to objectively assess the relative power of algorithms, free from any bias introduced by the choice of test instances. ISA constructs an instance space whereby test instances can be visualised as points in a 2d plane, with algorithm footprints identified as the regions of predicted good performance of an algorithm, based on statistical evidence from empirical testing.
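
As a loose analogue of this construction (an illustrative sketch only: ISA computes a tailored, performance-aware projection, whereas this example simply applies PCA to assumed instance features and an assumed performance rule), instances can be placed in a 2D plane and labelled by whether an algorithm performed well on them:

    import numpy as np

    # Hypothetical meta-data: one row of structural features per instance,
    # plus a flag indicating "good" performance of some algorithm on it.
    features = np.random.rand(100, 6)
    good = features[:, 0] + 0.5 * features[:, 3] > 0.9     # assumed rule

    # Project to 2D with PCA (a stand-in for ISA's optimised projection).
    centred = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    coords = centred @ vt[:2].T

    for (x, y), ok in zip(coords[:5], good[:5]):
        print(f"({x:+.2f}, {y:+.2f})  good={bool(ok)}")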

AIRT is based on Item Response Theory (IRT), which uses latent trait models from psychometrics that can uncover hidden qualities such as stress-proneness. In a traditional setting IRT has causal interpretations, which can be transferred to the algorithm evaluation domain, obtaining metrics such as algorithm consistency, anomalous indicator and the difficulty limit. In addition to these metrics, AIRT can help visualise the space of problems organised by difficulty.

The tutorial makes use of the online tools available at the Melbourne Algorithm Test Instance Library with Data Analytics (MATILDA), which provides access to its MATLAB computational engine, and of the R package airt. MATILDA also provides a collection of ISA results and other meta-data, available for download for several well-studied problems from optimisation and machine learning, taken from previously published studies.

Kate Smith-Miles

Kate Smith-Miles is a Melbourne Laureate Professor of Applied Mathematics in the School of Mathematics and Statistics at The University of Melbourne, and Director of the ARC Industrial Transformation Training Centre for Optimisation Technologies, Integrated Methodologies and Applications (OPTIMA). She is also Associate Dean (Enterprise and Innovation) for the Faculty of Science at The University of Melbourne. Prior to joining The University of Melbourne in September 2017, she was Professor of Applied Mathematics at Monash University, Head of the School of Mathematical Sciences (2009-2014), and inaugural Director of the Monash Academy for Cross & Interdisciplinary Mathematical Applications (MAXIMA) from 2013-2017. Previous roles include President of the Australian Mathematical Society (2016-2018), and membership of the Australian Research Council College of Experts (2017-2019). Kate obtained a B.Sc(Hons) in Mathematics and a Ph.D. in Electrical Engineering, both from The University of Melbourne. Commencing her academic career in 1996, she has published 2 books on neural networks and data mining, and over 280 refereed journal and international conference papers in the areas of neural networks, optimisation, data mining, and various applied mathematics topics. She has supervised 30 PhD students to completion, and has been awarded over AUD$20 million in competitive grants, including 13 Australian Research Council grants and industry awards. She was awarded a Georgina Sweet Australian Laureate Fellowship from the Australian Research Council (2014-2020), enabling her Instance Space Analysis methodology to be expanded into an online tool (MATILDA, Melbourne Algorithm Test Instance Library with Data Analytics). Kate was elected a Fellow of the Australian Academy of Science in 2022, a Fellow of the Institute of Engineers Australia (FIEAust) in 2006, and a Fellow of the Australian Mathematical Society (FAustMS) in 2008. Awards include: the Australian Mathematical Society Medal in 2010 for distinguished research; the EO Tuck Medal from ANZIAM in 2017 for outstanding research and distinguished service; the Ren Potts Medal for outstanding research in the theory and practice of operations research from the Australian Society for Operations Research (ASOR) in 2019; and the Monash University Vice-Chancellor’s Award for Excellence in Postgraduate Supervision in 2012. In addition to her academic activities, she also regularly acts as a consultant to industry in the areas of optimisation, data mining, and intelligent systems. She is also actively involved in mentoring, particularly with the aim of encouraging greater female participation in mathematics, and she chairs the Advisory Board for the AMSI Choose Maths program.

Mario Andrés Muñoz

Mario Andrés Muñoz is a Research Fellow at the School of Computer and Information Systems, The University of Melbourne; and the ARC Training Centre in Optimisation Technologies, Integrated Methodologies and Applications (OPTIMA). He received the B.Eng. and M.Eng. degrees in Electronics Engineering from Universidad del Valle, Colombia, in 2005 and 2008 respectively, and the Ph.D. degree in Engineering from The University of Melbourne, Australia, in 2014. His research interests focus on the application of optimisation, computational intelligence, signal processing, data analysis, and machine learning methods to ill-defined science, engineering and medicine problems.

Sevvandi Kandanaarachchi

Sevvandi is a Senior Research Scientist at CSIRO's Data61. She works on statistical machine learning research problems, including algorithm evaluation. She is an interdisciplinary researcher with an applied mathematics background. Prior to joining CSIRO, Sevvandi was a Lecturer at the Mathematical Sciences Department in RMIT University.

Sevvandi obtained a B.Sc.Eng. from the University of Moratuwa, Sri Lanka, with first class Honours. She completed her PhD in Mathematics at Monash University, Australia, in 2011. In 2015, she completed a Graduate Certificate in Data Mining and Applications from Stanford. She currently supervises 1 PhD and 1 Masters by Research student and has been awarded AUD $500,000 in research funding. She developed and maintains the R package airt on CRAN, the Comprehensive R Archive Network.

Introduction to Quantum Optimization

Quantum computers are rapidly becoming more powerful and increasingly applicable to solve problems in the real world. They have the potential to solve extremely hard computational problems, which are currently intractable by conventional computers. A major application domain of quantum computers is solving hard combinatorial optimization problems. This is the emerging field of quantum optimization. The algorithms that quantum computers use for optimization can be regarded as general types of stochastic heuristic optimization algorithms.

There are two main types of quantum computers, quantum annealers and quantum gate computers. These have very different architectures. To solve optimization problems on quantum computers, they need to be reformulated in a format suitable for the specific architecture. Quantum annealers are specially tailored to solve combinatorial optimization problems, once they are reformulated as a Quadratic Unconstrained Binary Optimisation (QUBO) problem. In quantum gate computers, the Quantum Approximate Optimization Algorithm (QAOA) can be used to approximately solve optimization problems. In this case, a classical algorithm can be used on top of the quantum computer to guide the search for parameters.
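
For example, the Max Cut problem that appears later in the outline can be written as a QUBO by a standard construction; the sketch below is only illustrative and brute-forces the resulting QUBO rather than calling any quantum hardware or SDK.

    import itertools

    def maxcut_qubo(n, edges):
        # Maximising the cut  sum over edges of (x_i + x_j - 2*x_i*x_j)  is
        # equivalent to minimising x^T Q x with the coefficients below.
        Q = [[0.0] * n for _ in range(n)]
        for i, j in edges:
            Q[i][i] -= 1.0
            Q[j][j] -= 1.0
            Q[i][j] += 2.0
        return Q

    def qubo_value(Q, x):
        n = len(x)
        return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]        # small example graph
    Q = maxcut_qubo(4, edges)
    best = min(itertools.product([0, 1], repeat=4), key=lambda x: qubo_value(Q, x))
    print(best, "cut size =", -qubo_value(Q, best))         # cut size = 4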

The tutorial is aimed at researchers in optimization who have no previous knowledge of quantum computers and want to learn how to solve optimization problems on them. It will demonstrate, in practice, how to solve a simple combinatorial optimization problem on the two main quantum computer architectures (quantum gate computers and quantum annealers), using Jupyter notebooks for hands-on experimentation on quantum computer simulators.

Content:

Part 1 (Quantum Annealers):
- Quantum Annealers Background
- Optimisation on Quantum Annealers
- Solving the Max Cut problem on a Quantum Annealer
Part 2 (Quantum Gate Computers):
- Quantum Gate Computers Background
- Optimisation on Quantum Gate Computers
- Solving the Max Cut problem on a Quantum Gate Computer (via QAOA)

Alberto Moraglio

Alberto Moraglio is a Senior Lecturer at the University of Exeter, UK. He holds a PhD in Computer Science from the University of Essex and Master and Bachelor degrees (Laurea) in Computer Engineering from the Polytechnic University of Turin, Italy. He is the founder of a Geometric Theory of Evolutionary Algorithms, which unifies Evolutionary Algorithms across representations and has been used for the principled design and rigorous theoretical analysis of new successful search algorithms. He has given several tutorials at GECCO, IEEE CEC and PPSN, and has an extensive publication record on this subject. He has served as co-chair for the GP track, the GA track and the Theory track at GECCO. He has also co-chaired the European Conference on Genetic Programming twice, and is an associate editor of the Genetic Programming and Evolvable Machines journal. He has applied his geometric theory to derive a new form of Genetic Programming based on semantics with appealing theoretical properties, which is rapidly gaining popularity in the GP community. In the last three years, Alberto has been collaborating with Fujitsu Laboratories on Optimisation on Quantum Annealing machines. He has formulated dozens of Combinatorial Optimisation problems in a format suitable for the Quantum hardware. He is also the inventor of a compiler aimed at making these machines usable without specific expertise, by automating the translation of high-level descriptions of combinatorial optimisation problems into a low-level format suitable for the Quantum hardware (patented invention).

Francisco Chicano

Francisco Chicano holds a PhD in Computer Science from the University of Málaga and a Degree in Physics from the National Distance Education University. Since 2008 he has been with the Department of Languages and Computing Sciences of the University of Málaga. His research interests include quantum computing, the application of search techniques to Software Engineering problems and the use of theoretical results to efficiently solve combinatorial optimization problems. He is on the editorial board of the Evolutionary Computation Journal, Engineering Applications of Artificial Intelligence, the Journal of Systems and Software, ACM Transactions on Evolutionary Learning and Optimization and Mathematical Problems in Engineering. He has also served as programme chair and Editor-in-Chief for international events.

Landscape Analysis of Optimization Problems and Algorithms

Gabriela Ochoa

Gabriela Ochoa is a Professor of Computing Science at the University of Stirling in Scotland, UK. Her research lies in the foundations and applications of evolutionary algorithms and metaheuristics, with emphasis on adaptive search, fitness landscape analysis and visualisation. She holds a PhD from the University of Sussex, UK, and has worked at the University Simon Bolivar, Venezuela, and the University of Nottingham, UK. Her Google Scholar h-index is 40, and her work on network-based models of computational search spans several domains and has obtained 4 best-paper awards and 8 other nominations. She collaborates across disciplines to apply evolutionary computation in healthcare and conservation. She has been active in organisation and editorial roles in venues such as the Genetic and Evolutionary Computation Conference (GECCO), Parallel Problem Solving from Nature (PPSN), the Evolutionary Computation Journal (ECJ) and the ACM Transactions on Evolutionary Learning and Optimisation (TELO). She is a member of the executive board for the ACM interest group in evolutionary computation, SIGEVO, and the editor of the SIGEVOlution newsletter. In 2020, she was recognised by the leading European event on bio-inspired algorithms, EvoStar, for her outstanding contributions to the field.

Katherine Malan

Katherine Malan is an associate professor in the Department of Decision Sciences at the University of South Africa. She received her PhD in computer science from the University of Pretoria in 2014 and her MSc & BSc degrees from the University of Cape Town. She has over 25 years' lecturing experience, mostly in Computer Science, at three different South African universities. Her research interests include automated algorithm selection in optimisation and learning, fitness landscape analysis and the application of computational intelligence techniques to real-world problems. She is editor-in-chief of South African Computer Journal, associate editor for Engineering Applications of Artificial Intelligence, and has served as a reviewer for over 20 Web of Science journals.

Linear Genetic Programming

Linear genetic programming (LGP) is a flavor of genetic programming that adopts linear representations for executable programs. These linear representations can consist of a sequence of imperative instructions, offering powerful capabilities for representing complex relations in a compact manner. Linear representations also conveniently facilitate the detection of structurally non-effective code, i.e., instructions whose execution does not alter the program’s outcome. This allows for the study of code neutrality, where mutations to program elements do not change the relationship represented by the program. Since its conception, LGP has witnessed exciting advancements in methodology design, real-world applications, and the study of evolutionary theory. This tutorial introduces the fundamentals of LGP, its various representations and genetic variation operators, and provides application examples. It also explores the utility of LGP in studying evolvability and robustness, intriguing properties of evolutionary systems resulting from neutrality.
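
To illustrate the representation (a minimal sketch with an assumed register-machine encoding, not any specific LGP system), a linear program is a list of register-level instructions, and structurally non-effective instructions can be detected with a single backward pass:

    # Each instruction: (destination register, operator, source1, source2).
    program = [
        (2, "+", 0, 1),   # r2 = r0 + r1
        (3, "*", 2, 2),   # r3 = r2 * r2
        (1, "-", 0, 3),   # r1 = r0 - r3   (structurally non-effective here)
        (3, "+", 3, 0),   # r3 = r3 + r0
    ]

    def execute(program, inputs, n_regs=4, output_reg=3):
        regs = list(inputs) + [0.0] * (n_regs - len(inputs))
        ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
               "*": lambda a, b: a * b}
        for dest, op, s1, s2 in program:
            regs[dest] = ops[op](regs[s1], regs[s2])
        return regs[output_reg]

    def effective_instructions(program, output_reg=3):
        # Backward pass: keep an instruction only if its destination is still needed.
        needed, effective = {output_reg}, []
        for dest, op, s1, s2 in reversed(program):
            if dest in needed:
                effective.append((dest, op, s1, s2))
                needed.discard(dest)
                needed.update({s1, s2})
        return list(reversed(effective))

    print(execute(program, [2.0, 3.0]))        # 27.0
    print(effective_instructions(program))     # drops the r1 = r0 - r3 instruction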

Wolfgang Banzhaf

Wolfgang Banzhaf is the John R. Koza Chair for Genetic Programming in the Department of Computer Science and Engineering at Michigan State University. He received his Dr.rer.nat (PhD) from the Department of Physics of the Technische Hochschule Karlsruhe, now Karlsruhe Institute of Technology (KIT). His research interests are evolutionary computing, complex adaptive systems, and self-organization of artificial life. He is a member of the Advisory Committee of ACM-SIGEVO, the Special Interest Group for Evolutionary Computation of the Association for Computing Machinery, and served as its Chair from 2011 to 2015 after having served as SIGEVO's treasurer from 2005 to 2011. He was a member of the Executive Board of SIGEVO from its foundation in 2005 until 2021, and a board member of the International Society for Artificial Life (ISAL) from 2009 to 2015 and again from 2019 to today. He founded the scholarly journal "Genetic Programming and Evolvable Machines".

Ting Hu

Ting Hu is an Associate Professor at the School of Computing, Queen's University in Kingston, Canada. She received her PhD in Computer Science from Memorial University in St. John's, Canada, and completed her postdoctoral training in bioinformatics at Dartmouth College in Hanover, New Hampshire, USA. Her research focuses on evolutionary algorithm methodology and its applications in biomedicine, and recently on explainable AI and interpretable machine learning. Ting is an Area Editor of the journal Genetic Programming and Evolvable Machines and an Associate Editor of the journal Neurocomputing. Ting has served as program co-chair for EuroGP and the GECCO GP track.

Model-Based Evolutionary Algorithms

In model-based evolutionary algorithms (MBEAs) the variation operators are guided by the use of a model that conveys problem-specific information so as to increase the chances that combining the currently available solutions leads to improved solutions. Such models can be constructed beforehand for a specific problem, or they can be learnt during the optimization process.
Replacing traditional crossover and mutation operators by building and using models enables the use of machine learning techniques for automatic discovery of problem regularities and subsequent exploitation of these regularities, thereby enabling the design of optimization techniques that can automatically adapt to a given problem. This is an especially useful feature when considering optimization in a black-box setting. The use of models can furthermore also have major implications for grey-box settings where not everything about the problem is considered to be unknown a priori.

Well-known types of MBEAs are Estimation-of-Distribution Algorithms (EDAs) where probabilistic models of promising solutions are built and samples are subsequently drawn from these models to generate new solutions.
A more recent class of MBEAs is the family of Optimal Mixing EAs such as the Linkage Tree GA and, more generally, various GOMEA variants. The tutorial will mainly focus on the latter types of MBEAs.
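
For background, the sketch below shows a minimal univariate EDA (UMDA-style) on OneMax; it is deliberately far simpler than the linkage-learning GOMEA variants the tutorial focuses on, and all parameter values are illustrative.

    import random

    def onemax(x):
        return sum(x)

    def umda(n=40, pop_size=100, selected=50, generations=60):
        probs = [0.5] * n                    # univariate probabilistic model
        for _ in range(generations):
            pop = [[1 if random.random() < p else 0 for p in probs]
                   for _ in range(pop_size)]
            pop.sort(key=onemax, reverse=True)
            elite = pop[:selected]
            # Re-estimate the marginal probabilities from the selected solutions,
            # with bounds to avoid premature convergence of the model.
            probs = [min(0.95, max(0.05, sum(ind[i] for ind in elite) / selected))
                     for i in range(n)]
        return max(pop, key=onemax)

    print(onemax(umda()))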


Dirk Thierens

Dr. Dirk Thierens is a lecturer/senior researcher at the Department of Information and Computing Sciences at Utrecht University, where he is teaching courses on Evolutionary Computation and Computational Intelligence. He has (co)-authored over 100 peer reviewed papers in Evolutionary Computation. His main current research interests are focused on the design and application of structure learning techniques in the framework of population-based, stochastic search. Dirk contributed to the organization of previous GECCO conferences as track chair, workshop organizer, Editor-in-Chief, and past member of the SIGEVO ACM board.

Peter A. N. Bosman

Peter Bosman is a senior researcher in the Life Sciences research group at the Centrum Wiskunde & Informatica (CWI) (Centre for Mathematics and Computer Science) located in Amsterdam, the Netherlands. Peter obtained both his MSc and PhD degrees on the design and application of estimation-of-distribution algorithms (EDAs). He has (co-)authored over 150 refereed publications on both algorithmic design aspects and real-world applications of evolutionary algorithms. At the GECCO conference, Peter has previously been track (co-)chair, late-breaking-papers chair, (co-)workshop organizer, (co-)local chair (2013) and general chair (2017).

New Framework of Multi-Objective Evolutionary Algorithms with Unbounded External Archive

In the field of evolutionary multi-objective optimization (EMO), early EMO algorithms in the 1990s are called non-elitist algorithms where no solutions in the current population are included in the next population. That is, the next population is the offspring population of the current population. This non-elitist algorithm framework is clearly inefficient since we cannot preserve good solutions during the execution of EMO algorithms. As a result, almost all EMO algorithms in the last two decades are based on the elitist framework where the next population is selected from the current population and its offspring population. In both frameworks, the final population is presented to the decision maker as the final output from EMO algorithms. Recently, some potential difficulties of the elitist framework have been pointed out. One is that the final population is not always the best subset of all the examined solutions. It was demonstrated in the literature that some solutions in the final population are dominated by other solutions generated and deleted in previous generations. It is also difficult to utilize solutions in previous generations to generate new solutions. Offspring are always generated from solutions in the current population. Another difficulty is that only a limited number of solutions (i.e., only solutions in the final population) are obtained. A new framework with an unbounded external archive can easily handle these difficulties since the final solution set is selected from all the examined solutions. In this framework, we can select an arbitrary number of solutions as the final output from EMO algorithms. Stored solutions in the external archive can be used to create new solutions and also to select solutions for the next population. In this tutorial, some interesting research issues in the new EMO algorithm framework are explained.
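
To make the final selection step concrete, the sketch below picks a well-spread subset from an unbounded archive of non-dominated objective vectors using a simple greedy distance criterion; the tutorial discusses more principled criteria (e.g. hypervolume- or IGD-based subset selection), so this is only an illustration.

    import math

    def greedy_subset(archive, k):
        # Pick k well-spread points: start from an extreme point, then repeatedly
        # add the point farthest from the current selection.
        selected = [min(archive)]
        while len(selected) < k and len(selected) < len(archive):
            def dist_to_selected(p):
                return min(math.dist(p, s) for s in selected)
            selected.append(max((p for p in archive if p not in selected),
                                key=dist_to_selected))
        return selected

    archive = [(1, 9), (2, 6), (3, 5), (4, 4), (6, 3), (9, 1)]
    print(greedy_subset(archive, 3))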

Hisao Ishibuchi

Hisao Ishibuchi is a Chair Professor at Southern University of Science and Technology, China. He was the IEEE Computational Intelligence Society (CIS) Vice-President for Technical Activities in 2010-2013 and the Editor-in-Chief of the IEEE Computational Intelligence Magazine in 2014-2019. Currently he is an IEEE CIS Administrative Committee Member (2014-2019, 2021-2023), an IEEE CIS Distinguished Lecturer (2015-2017, 2021-2023), and an Associate Editor of several journals such as IEEE Trans. on Evolutionary Computation, IEEE Trans. on Cybernetics, and IEEE Access. He was also the General Chair of IEEE WCCI 2014. His research on evolutionary multi-objective optimization received an Outstanding Paper Award from IEEE Trans. on Evolutionary Computation in 2020, and Best Paper Awards from GECCO 2004, 2017, 2018, 2020, 2021 and EMO 2019.

Lie Meng Pang

Lie Meng Pang received her Bachelor of Engineering degree in Electronic and Telecommunication Engineering and Ph.D. degree in Electronic Engineering from the Faculty of Engineering, Universiti Malaysia Sarawak, Malaysia, in 2012 and 2018, respectively. She is currently a research associate with the Department of Computer Science and Engineering, Southern University of Science and Technology (SUSTech), China. Her current research interests include evolutionary multi-objective optimization and fuzzy systems.

Ke Shang

Ke Shang received the B.S. and Ph.D. degrees from Xi'an Jiaotong University, China, in 2009 and 2016, respectively. He is currently a research associate professor at Southern University of Science and Technology, China. His current research interests include multi-objective optimization and artificial intelligence. He received the GECCO 2018 Best Paper Award, the CEC 2019 First Runner-up Conference Paper Award, the GECCO 2021 Best Paper Award, and a best paper nomination at PPSN 2020.

Representations for Evolutionary Algorithms

Successful and efficient use of evolutionary algorithms depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.

Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on the performance of evolutionary algorithms. Relevant concepts are the locality and redundancy of representations. Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations have high locality if the application of variation operators results in new solutions similar to the original ones. Representations are redundant if the number of genotypes exceeds the number of phenotypes. Redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes or if search operators favor some kinds of phenotypes.
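
For instance, locality can be estimated empirically by measuring how far a single-bit genotypic mutation moves the phenotype under two classic integer encodings; the measure and the parameters below are illustrative assumptions.

    import random

    def binary_to_int(bits):
        return int("".join(map(str, bits)), 2)

    def gray_to_int(bits):
        # Decode reflected Gray code: each binary bit is the XOR-prefix of the gray bits.
        b, value = bits[0], bits[0]
        for g in bits[1:]:
            b ^= g
            value = (value << 1) | b
        return value

    def phenotypic_jump(decode, n_bits=8, trials=5000):
        # Average phenotype change, and share of minimal (+/-1) changes, caused
        # by a single-bit genotypic mutation; a larger share of minimal changes
        # indicates higher locality.
        jumps = []
        for _ in range(trials):
            g = [random.randint(0, 1) for _ in range(n_bits)]
            m = list(g)
            m[random.randrange(n_bits)] ^= 1
            jumps.append(abs(decode(g) - decode(m)))
        return sum(jumps) / trials, sum(j == 1 for j in jumps) / trials

    print("standard binary:", phenotypic_jump(binary_to_int))
    print("gray code:      ", phenotypic_jump(gray_to_int))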

The tutorial gives an overview of existing guidelines for representation design, illustrates different aspects of representations, gives a brief overview of models describing the different aspects, and illustrates the relevance of these aspects with practical examples.

It is expected that the participants have a basic understanding of EA principles.

Franz Rothlauf

He received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively. Since 2007, he has been Professor of Information Systems at the University of Mainz. He has published more than 100 technical papers in the context of planning and optimization, evolutionary computation, e-business, and software engineering, co-edited several conference proceedings and edited books, and is author of the books "Representations for Genetic and Evolutionary Algorithms" and "Design of Modern Heuristics". At the University of Mainz, he is Academic Director of the Executive MBA program (since 2013) and Chief Information Officer (since 2016). His main research interests are the application of modern heuristics in planning and optimization systems. He is a member of the Editorial Board of the Evolutionary Computation Journal (ECJ), ACM Transactions on Evolutionary Learning and Optimization (TELO), and Business & Information Systems Engineering (BISE). Since 2007, he has been a member of the Executive Committee of ACM SIGEVO, where he was treasurer between 2011 and 2019 and has served as chair since 2019. He has been organizer of a number of workshops and tracks on heuristic optimization, chair of EvoWorkshops in 2005 and 2006, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems", co-organizer of the European workshop series on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO 2006. He was conference chair of GECCO 2009.

Robot Evolution: from Virtual to Real

This tutorial outlines the what, the why and the how of robot evolution, reflecting on developments since the publication of the seminal Nolfi-Floreano book in 2000. It focuses on the morphology-controller (body-brain) dichotomy, the role of individual (lifetime) learning after ‘birth’, and the challenges and opportunities of implementing robot evolution in the real world. In particular, it covers the basics of robot evolution (in simulation), the evolution of controllers for fixed morphologies, the simultaneous evolution of morphologies and controllers (a research line starting with Sims), and the simultaneous evolution of morphologies and controllers with additional lifetime learning. To this end, it elaborates on the Triangle of Life system model that underpins the latter type of system and compares it with alternative approaches. Furthermore, real-world robot evolution is discussed as the ultimate form of evolutionary robotics, with unprecedented challenges including ethical and safety concerns. Related research projects and visionary concepts from the literature are reviewed, and a list of issues is presented for future research to make progress towards the long-term vision.

Gusz Eiben

Prof. Dr. A.E. Eiben is full professor of Artificial Intelligence at the Vrije Universiteit Amsterdam and visiting professor at the University of York. He is an internationally renowned expert in evolutionary computing who literally wrote the book (Eiben-Smith, Introduction to Evolutionary Computing, Springer, 2003, 2007, 2015). He specializes in Evolutionary Robotics, with papers in Nature, Nature Machine Intelligence, and Science Robotics. He is the head of the VUA’s Bio-Inspired Robotics lab, Specialty Chief Editor of Frontiers in Robotics and AI, and an editorial board member of several other journals. He has supervised several PhD theses on evolutionary robotics and has collaborated in various international (EU and/or UK) research projects in the area.

Runtime Analysis of Population-based Evolutionary Algorithms

Populations are at the heart of evolutionary algorithms (EAs). They provide the genetic variation which selection acts upon. A complete picture of EAs can only be obtained if we understand their population dynamics. A rich theory on runtime analysis (also called time-complexity analysis) of EAs has been developed over the last 20 years. The goal of this theory is to show, via rigorous mathematical means, how the performance of EAs depends on their parameter settings and the characteristics of the underlying fitness landscapes. Initially, runtime analysis of EAs was mostly restricted to simplified EAs that do not employ large populations, such as the (1+1) EA. This tutorial introduces more recent techniques that enable runtime analysis of EAs with realistic population sizes.

We begin with a brief overview of the generational, non-elitist population-based EAs that are covered by the techniques. We recall the common selection mechanisms and how to measure the selection pressure they induce. The main part of the tutorial covers in detail widely applicable techniques tailored to the analysis of populations. We discuss random family trees and branching processes, drift and concentration of measure in populations, and level-based analyses. To illustrate how these techniques can be applied, we consider several fundamental questions: What characteristics of fitness landscapes (such as local optima and fitness valleys) determine their time-complexity? When are populations necessary for efficient optimisation with EAs? What determines an EA's tolerance for uncertainty, e.g. in the form of noisy or partially available fitness? When do non-elitist EAs outperform elitist EAs? What is the appropriate balance between exploration and exploitation, and how does it depend on the relationship between mutation and selection rates?
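
For readers unfamiliar with this class of algorithms, the following minimal Python sketch (an illustration of mine, not code from the tutorial) shows a generational, non-elitist EA with binary tournament selection and bit-flip mutation on OneMax; the population size, mutation rate, and generation budget are arbitrary choices made for the example.

```python
import random

# A minimal sketch of the kind of generational, non-elitist EA covered by
# level-based and related analyses: binary tournament selection plus standard
# bit-flip mutation on OneMax. Parameters below are assumed for this sketch.
N, LAMBDA, GENERATIONS = 100, 200, 1000

def onemax(x):
    return sum(x)

def tournament(pop):
    """Binary tournament: pick two individuals, return the fitter one."""
    a, b = random.sample(pop, 2)
    return a if onemax(a) >= onemax(b) else b

def mutate(x, rate):
    return [1 - bit if random.random() < rate else bit for bit in x]

random.seed(1)
population = [[random.randint(0, 1) for _ in range(N)] for _ in range(LAMBDA)]
for gen in range(GENERATIONS):
    # Non-elitist, generational replacement: the offspring population fully
    # replaces the parents, regardless of fitness.
    population = [mutate(tournament(population), 1 / N) for _ in range(LAMBDA)]
    if max(map(onemax, population)) == N:
        print(f"optimum found in generation {gen}")
        break
```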

Per Kristian Lehre

Dr Lehre is a Professor in the School of Computer Science at the University of Birmingham (since January 2017). Before joining Birmingham, he had been an Assistant Professor at the University of Nottingham since 2011. He obtained MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU) in Trondheim, completing the PhD in 2006 under the supervision of Prof Pauline Haddow, and joined the School of Computer Science at The University of Birmingham, UK, as a Research Fellow with Prof Xin Yao in January 2007. From April 2010, he was a Postdoctoral Fellow at DTU Informatics, Technical University of Denmark, in Lyngby, Denmark.

Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. His research has won numerous best paper awards, including at GECCO (2013, 2010, 2009, 2006), ICSTW (2008), and ISAAC (2014). He was vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation, is a member of the editorial board of Evolutionary Computation, and was previously an associate editor of IEEE Transactions on Evolutionary Computation. He has guest-edited special issues of Theoretical Computer Science and IEEE Transactions on Evolutionary Computation on theoretical foundations of evolutionary computation. Dr Lehre has given many tutorials on evolutionary computation at summer schools (UK: 2007, 2015, 2016; France: 2009, 2010, and 2011; Estonia: 2010), as well as at major conferences and workshops (GECCO 2013-2023, CEC 2013 and 2015, PPSN 2016, 2018, 2020, and ThRaSH 2013). He was the main coordinator of the 2M-euro EU-funded project SAGE, which brought together the theory of evolutionary computation and population genetics. He is also a Turing AI Acceleration Fellow funded by UKRI, with a project on Runtime Analysis of Co-Evolutionary Algorithms.

Statistical Analyses for Single-objective Stochastic Optimization Algorithms

Moving into the era of explainable AI, a comprehensive comparison of the performance of single-objective stochastic optimization algorithms has become an increasingly important task. One of the most common ways to compare the performance of stochastic optimization algorithms is to apply statistical analyses. However, there are still caveats that need to be addressed for such analyses to yield relevant and valid conclusions. First of all, the performance measures should be selected with great care, since some measures can be correlated and their data is then further involved in the statistical analyses. Further, statistical analyses require sound knowledge from the user to be applied properly; such knowledge is often lacking, which leads to incorrect conclusions. Finally, the standard approaches can be influenced by outliers (e.g., poor runs) or by statistically insignificant differences (solutions within some ε-neighborhood) present in the data.

This tutorial will provide an overview of the current approaches for analyzing algorithms' performance, with special emphasis on caveats that are often overlooked. We will show how these can easily be avoided by applying simple principles that lead to Deep Statistical Comparison. The tutorial will not be based on equations, but mainly on examples through which a deeper understanding of statistics will be achieved. The examples will cover various comparison scenarios for single-objective optimization algorithms. The tutorial will end with a demonstration of a web-service-based framework (DSCTool) for statistical comparison of single-objective stochastic optimization algorithms. In addition, R and Python clients for performing the analyses will also be presented.
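
As a point of reference for the kind of analysis being discussed, the sketch below shows a conventional rank-based comparison of two algorithms on a single problem using SciPy. It is an illustrative baseline on hypothetical data, not the DSCTool or Deep Statistical Comparison workflow itself.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative only: a standard rank-based test on hypothetical data. Two
# algorithms are compared on one problem using the best fitness reached in
# 30 independent runs each (all values below are invented for the example).
rng = np.random.default_rng(42)
alg_a = rng.normal(loc=0.10, scale=0.02, size=30)  # assumed run results
alg_b = rng.normal(loc=0.13, scale=0.02, size=30)  # assumed run results

stat, p = mannwhitneyu(alg_a, alg_b, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
# Caveat the tutorial addresses: per-problem tests like this can be influenced
# by outliers (e.g., poor runs) and by tiny differences between solutions
# within some epsilon-neighborhood; Deep Statistical Comparison first ranks
# the algorithms per problem in a way that is robust to both issues.
```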

Tome Eftimov

Tome Eftimov is a researcher in the Computer Systems Department at the Jožef Stefan Institute, Ljubljana, Slovenia. He is a visiting assistant professor at the Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University, Skopje. He was a postdoctoral research fellow at Stanford University, USA, where he investigated biomedical relation outcomes using AI methods. In addition, he was a research associate at the University of California, San Francisco, investigating AI methods for extracting rheumatology concepts from electronic health records. He obtained his PhD in Information and Communication Technologies (2018). His research interests include statistical data analysis, metaheuristics, natural language processing, representation learning, and machine learning. He has been involved in courses on probability and statistics, and statistical data analysis. The work related to Deep Statistical Comparison has been presented as a tutorial (at IJCCI 2018, IEEE SSCI 2019, GECCO 2020, and PPSN 2020) and as invited lectures at several international conferences and universities. He is an organizer of several workshops related to AI at high-ranked international conferences. He is the coordinator of the national project “Mr-BEC: Modern approaches for benchmarking in evolutionary computation” and actively participates in European projects.

Peter Korošec

Peter Korošec received his Ph.D. degree from the Jožef Stefan Postgraduate School, Ljubljana, Slovenia, in 2006. Since 2002, he has been a researcher in the Computer Systems Department of the Jožef Stefan Institute, Ljubljana. His current areas of research include understanding the principles behind meta-heuristic optimization and parallel/distributed computing. He has participated in several tutorials on statistical analysis for optimization algorithms presented at various international conferences and co-organized a workshop on understanding evolutionary optimization behavior.

Theory and Practice of Population Diversity in Evolutionary Computation

Divergence of character is a cornerstone of natural evolution. In contrast, evolutionary optimization processes are plagued by an endemic lack of population diversity: all candidate solutions eventually crowd the very same areas of the search space. The problem is usually labeled with the oxymoron “premature convergence” and has very different consequences across applications, almost all of them deleterious. At the same time, case studies from theoretical runtime analyses irrefutably demonstrate the benefits of diversity.

This tutorial will give an introduction to the area of “diversity promotion”: we will define the term “diversity” in the context of Evolutionary Computation and show how practitioners have tried, with mixed results, to promote it.

Then, we will analyze the benefits brought by population diversity in specific contexts, namely global exploration and enhancing the power of crossover. To this end, we will survey recent results from rigorous runtime analysis on selected problems. The presented analyses rigorously quantify the performance of evolutionary algorithms in the light of population diversity, laying the foundation for a rigorous understanding of how search dynamics are affected by the presence or absence of diversity and the introduction of diversity mechanisms.
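
As a concrete (and deliberately simple) illustration of what population diversity can mean in practice, the following Python sketch measures the average pairwise Hamming distance of a bit-string population; this particular metric is an assumption of the sketch, not necessarily the definition used in the tutorial.

```python
from itertools import combinations

# Small, self-contained illustration (not taken from the tutorial): one common
# way to quantify population diversity for bit-string EAs is the average
# pairwise Hamming distance; "premature convergence" shows up as this value
# collapsing towards zero.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def avg_pairwise_distance(population):
    pairs = list(combinations(population, 2))
    return sum(hamming(a, b) for a, b in pairs) / len(pairs)

converged = [[1, 0, 1, 1, 0]] * 4                      # all individuals identical
diverse = [[1, 0, 1, 1, 0], [0, 0, 1, 1, 1],
           [1, 1, 0, 1, 0], [0, 1, 1, 0, 0]]
print(avg_pairwise_distance(converged))  # 0.0 -> no diversity left
print(avg_pairwise_distance(diverse))    # > 0 -> diversity still present
```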

Dirk Sudholt

Dirk Sudholt is a Full Professor and Chair of Algorithms for Intelligent Systems at the University of Passau, Germany. He previously held a post as Senior Lecturer at the University of Sheffield, UK, where he was founding head of the Algorithms research group. He obtained his PhD in computer science in 2008 from the Technische Universitaet Dortmund, Germany, under the supervision of Prof. Ingo Wegener. His research focuses on the computational complexity of randomized search heuristics such as evolutionary algorithms and swarm intelligence. Most relevant to this tutorial is his work on the runtime analysis of diversity mechanisms and the benefits of crossover in genetic algorithms. Dirk has served as chair of FOGA 2017 and of the GECCO Theory track in 2016 and 2017, and as guest editor for Algorithmica. He is a member of the editorial boards of Evolutionary Computation and The Computer Journal, and an associate editor for Natural Computing. He has more than 100 refereed publications and has won 9 best paper awards at GECCO and PPSN.

Giovanni Squillero

Giovanni Squillero is an associate professor of computer science at Politecnico di Torino, Department of Control and Computer Engineering. His research mixes computational intelligence and machine learning, with particular emphasis on evolutionary computation, bio-inspired meta-heuristics, and multi-agent systems; in more down-to-earth activities, he studies approximate optimization techniques able to achieve acceptable solutions with reasonable resources. The industrial applications of his work range from electronic CAD to bioinformatics. As of April 2022, he is credited as an author of 3 books, 36 journal articles, 11 book chapters, and 154 papers in conference proceedings; he is also listed among the editors of 16 volumes.

Squillero has been a Senior Member of the IEEE since 2014; he currently serves on the Games Technical Committee of the IEEE Computational Intelligence Society and on the editorial board of Genetic Programming and Evolvable Machines. He was program chair of the European Conference on the Applications of Evolutionary Computation in 2016 and 2017, and he is now a member of the EvoApplications steering committee. In 2018 he co-organized EvoML, the workshop on Evolutionary Machine Learning at PPSN; in 2016 and 2017, MPDEA, the workshop on Measuring and Promoting Diversity in Evolutionary Algorithms at GECCO; and from 2004 to 2014, EvoHOT, the Workshops on Evolutionary Hardware Optimization Techniques.

Since 1998, Squillero has lectured in 66 university courses (15 Ph.D. and 51 M.S./B.Sc.; 36 in Italian and 30 in English) and has contributed to an additional 37 courses as an assistant or in other subsidiary roles. He has presented his research at 14 international events, including invited talks, seminars, and tutorials.

Transfer Learning in Evolutionary Spaces

Evolutionary algorithms have been effectively applied to various search spaces. Traditionally, evolutionary algorithms explore a solution space; however, since their inception, their application has been extended to other spaces, including the program, heuristic, and design spaces. More recently, the potential of transfer learning in evolutionary algorithms, focusing predominantly on the solution and program spaces, has been established.

This tutorial examines the use of transfer learning in the application of evolutionary algorithms to four spaces, namely the solution, program, heuristic (hyper-heuristics), and design (automated design of machine learning and search algorithms) spaces. The tutorial will provide an overview of transfer learning for the four spaces in terms of what to transfer, when to transfer, and how to transfer knowledge. A case study will be presented for each of the spaces, including a case study on the use of machine learning. The benefits of transfer learning for each of the four spaces will be highlighted, and the challenges associated with transfer learning in these evolutionary spaces will also be examined.
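
To make the what/how distinction more tangible, here is a hypothetical Python sketch of one simple way to transfer knowledge in the solution space: seeding the initial population of a target task with solutions evolved on a source task. All names and parameters are invented for illustration, and the sketch does not use or represent the ATLEA library.

```python
import random

# Hypothetical sketch of solution-space transfer: part of the target-task
# population is seeded with solutions from a related source task, the rest
# is generated randomly to preserve diversity.
def random_individual(n):
    return [random.randint(0, 1) for _ in range(n)]

def seeded_population(source_solutions, pop_size, n, transfer_fraction=0.25):
    """What to transfer: the best source solutions; how much to transfer: a
    fixed fraction of the population (both are assumptions of this sketch)."""
    k = min(int(pop_size * transfer_fraction), len(source_solutions))
    transferred = [list(s) for s in source_solutions[:k]]
    return transferred + [random_individual(n) for _ in range(pop_size - k)]

random.seed(3)
source_best = [[1] * 20 for _ in range(10)]  # assumed solutions from a source task
population = seeded_population(source_best, pop_size=40, n=20)
print(len(population), "individuals,",
      sum(ind == [1] * 20 for ind in population), "transferred")
```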

Determining what knowledge to transfer, when to transfer it, and how to transfer it for the different spaces is itself an optimization problem. Traditionally, this has been done manually. The tutorial will also look at how this process can be automated. A Python library, ATLEA (Automated Transfer Learning for Evolutionary Algorithms), for the automated design of transfer learning in evolutionary algorithms will be presented.

Nelishia Pillay

Nelishia Pillay is a Professor at the University of Pretoria, South Africa. She holds the Multichoice Joint-Chair in Machine Learning and the SARChI Chair in Artificial Intelligence for Sustainable Development. She is chair of the IEEE Technical Committee on Intelligent Systems Applications, the IEEE CIS WCI sub-committee, and the IEEE Task Force on Automated Algorithm Design, Configuration and Selection. Her research areas include hyper-heuristics, the automated design of machine learning and search techniques, combinatorial optimization, genetic programming, genetic algorithms, and deep learning for, and more generally machine learning and optimization for, sustainable development. These are the focus areas of the NICOG (Nature-Inspired Computing Optimization) research group, which she established.