Neuroevolution: A Pathway to General Intelligence?
Nurdin Hossain
Thomas Jefferson High School for Science and Technology
This article was the 3rd place winner in the 9th-10th grade division of the Teknos 2020 Summer Writing Competition.
Imagine this: a population of AIs that selectively breed over time to eventually produce an AI that can beat a human at the game “Flappy Bird”. This “imagined” scenario is in fact real and has been carried out numerous times on much more complicated tasks. It is an example of an evolutionary algorithm (EA), an algorithm that manipulates a population of candidate solutions that start out poor but improve over time through evolutionary strategies such as mating, mutation, and natural selection [1]. To reach optimal solutions, a population within an EA is guided by a fitness function, which measures an organism’s performance on a particular problem. In the Flappy Bird example, for instance, the fitness function might be the score achieved in the game. Analogous to biology, in which fitness determines an organism’s reproductive success, the organisms in an EA with the highest fitness are bred (meaning their artificial brains are mathematically “mixed” to create “offspring” that share genetic similarities with their parents) and mutated, while the others are discarded. It is this iterative process of keeping the best and discarding the worst that eventually converges to an optimal solution to the problem.
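The loop described above can be sketched in a few lines of Python. This is a toy illustration, not Flappy Bird: the genome is a bitstring, the fitness function simply counts 1s, and the population size, mutation rate, and selection scheme are arbitrary choices made for the sketch.

```python
import random

def evolve(pop_size=50, genome_len=20, generations=100):
    # Fitness function: number of 1s in the genome (a stand-in for a game score).
    fitness = lambda g: sum(g)
    # Start with a random population of candidate solutions.
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # natural selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)  # mating: one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # mutation: flip one random bit
                i = random.randrange(genome_len)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

After enough generations, the fittest genome is at or near all 1s, mirroring how the iterative keep-the-best, discard-the-worst process converges on an optimum.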
EAs themselves are part of a broader field known as neuroevolution, which uses EAs to perfect layered, brain-like machine learning models known as neural networks (NNs) [7]. Neuroevolution as a whole is inspired by natural evolution, the only process known to have produced not only human intelligence but entire populations of intelligent creatures [7]. One particular breakthrough in neuroevolution was the NEAT (NeuroEvolution of Augmenting Topologies) algorithm, introduced in 2002. What stood out about this algorithm is that it allowed a population of bare-bones neural networks with absolutely minimal structure to slowly complexify (add structure) and optimize (fine-tune the current structure). This process is loosely analogous to how life started with simple, single-celled organisms and evolved into the multicellular animals we see today [6, 9]. Furthermore, NEAT separated organisms with similar structures and parameters into species, allowing many diverse solutions to optimize rather than just a few, again analogous to nature and the vast diversity of life it has produced [6]. With these key characteristics and several others, NEAT performed better than the leading fixed-structure neural networks of the time on the cart-pole task, a reinforcement learning task in which an agent must use a moving cart to balance a pole [6]. Additionally, NEAT outperformed three other neuroevolutionary algorithms in various Atari games, truly showing its ability to produce intelligent solutions to difficult problems [3].
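NEAT’s complexification can be illustrated with a toy genome encoding. In this sketch (the encoding and helper names are invented for illustration), a genome is just a list of weighted connections, and structural mutations grow it; real NEAT additionally tracks historical innovation numbers and disables a split connection rather than zeroing its weight.

```python
import random

def add_connection(genome, n_nodes):
    # Structural mutation: wire up two nodes that were not connected before.
    src, dst = random.sample(range(n_nodes), 2)
    if (src, dst) not in {(c["src"], c["dst"]) for c in genome}:
        genome.append({"src": src, "dst": dst, "weight": random.uniform(-1, 1)})
    return n_nodes

def add_node(genome, n_nodes):
    # Structural mutation: split an existing connection by inserting a node,
    # so networks grow gradually from a minimal starting topology.
    conn = random.choice(genome)
    new = n_nodes
    genome.append({"src": conn["src"], "dst": new, "weight": 1.0})
    genome.append({"src": new, "dst": conn["dst"], "weight": conn["weight"]})
    conn["weight"] = 0.0  # real NEAT disables the old connection; zeroing approximates that
    return n_nodes + 1

# Start minimal: two inputs wired directly to one output, no hidden nodes.
genome = [{"src": 0, "dst": 2, "weight": 0.5}, {"src": 1, "dst": 2, "weight": -0.5}]
n_nodes = add_node(genome, 3)        # the network now has one hidden node
n_nodes = add_connection(genome, n_nodes)
```

Each call makes the network slightly more complex, which is exactly the “complexify” half of NEAT’s complexify-and-optimize cycle.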
Since NEAT, neuroevolution has made impressive progress. Many algorithms today still use NEAT as a foundation, serving as evidence of its power. One of these is CPPN-NEAT, an algorithm used to evolve a population of CPPNs (Compositional Pattern-Producing Networks). As the name suggests, CPPNs produce patterns commonly found in nature, such as symmetry, repetition, repetition with variation, and many more [5]. The pattern-producing capabilities of CPPNs and the evolutionary power of NEAT were then used to create PicBreeder, an app that lets you create art and complex geometric figures using evolution. Additionally, CPPNs served as the indirect encoding (analogous to DNA) for another advance in neuroevolution called HyperNEAT [8]. What makes HyperNEAT so special is that its CPPN encoding allows it to evolve its parameters as a function of a problem’s geometry [8]. What this means is that a neural network can actually “see” the problem it is solving, giving it more information to process during evolution [8]. As a whole, the field of neuroevolution has persisted on its journey toward the ultimate algorithm: one that can produce diverse and creative innovations without end, a defining characteristic of human intelligence [9].
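The patterns CPPNs produce come from composing simple functions over spatial coordinates. A minimal sketch of the idea (not the full network formalism): a Gaussian of x yields left-right symmetry, a sine of y yields repetition, and their product combines the two motifs. The particular functions and grid here are assumptions chosen for illustration.

```python
import math

def cppn(x, y):
    # A compositional pattern-producing "network" sketched as nested functions:
    symmetry = math.exp(-x * x)             # Gaussian: symmetric about x = 0
    repetition = math.sin(4 * math.pi * y)  # sine: repeats along y
    return symmetry * repetition            # composition combines both motifs

# Sampling the function over a coordinate grid "paints" a pattern,
# much as PicBreeder renders an evolved CPPN as an image.
grid = [[cppn(x / 10 - 1, y / 10 - 1) for x in range(21)] for y in range(21)]
```

Because the pattern lives in the composed function rather than in a pixel list, tiny genomic changes (swapping or rescaling one inner function) produce coherent, global changes to the whole image, which is what makes CPPNs such an effective indirect encoding.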
This brings us to the problem of open-ended evolution, or “open-endedness”, as it has more recently been called. Open-endedness describes a process that generates endless complexity [9]. It is not about solving any particular problem so much as endlessly creating diverse solutions to current problems and creating increasingly complex problems to find additional solutions to [9]. This is where processes like NEAT ultimately fail. Even though NEAT separates its organisms into species to promote diversity, the population will eventually converge to one or a few solutions, and the problem will be solved. NEAT cannot create additional problems to find solutions to, so eventually, there will be no meaningful evolution left to accomplish. To tackle this issue of convergence, an algorithm called “novelty search” was introduced in 2011. Unlike NEAT, which uses a fitness function to guide its population towards optimality, novelty search uses a “novelty metric”, which is maximized when an organism’s behavior differs from past organisms and minimized when it is similar to past organisms [4]. The basic idea is that some problems cannot be solved by setting an objective; that is, sometimes it is more useful to search purely for novel solutions rather than solutions that are considered optimal by some objective (in NEAT, the objective is the fitness function) [4]. One good example of this is a maze. An intuitive fitness function here is one that is maximized as an agent gets closer to the goal. However, the issue with this is that mazes have walls that form dead ends, twists, and turns that can deceive an agent into thinking it is close to the end when in fact it is nowhere near it.
For example, an agent could navigate to a dead end that is close to the endpoint distance-wise but far away path-wise, and because the fitness function rewards a small distance between the agent and the goal, evolution will draw more organisms to that same dead end, ultimately preventing exploration of more favorable parts of the maze. Novelty search circumvents this by promoting pure exploration and rewarding individuals that deviate from the rest of the population [4]. If many individuals visit a certain dead end in the maze, evolution will push future individuals away from that same dead end, opening up the search to new possibilities.
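A common way to realize the novelty metric is the average distance to the k nearest neighbors among previously seen behaviors; for a maze, a natural behavior descriptor is the agent’s final position. The distance measure, k, and the sample points below are illustrative assumptions, not values from [4].

```python
def novelty(behavior, archive, k=3):
    # Novelty score: mean Euclidean distance to the k nearest behaviors seen so far.
    # Crowded regions of behavior space (like a popular dead end) score low;
    # behaviors far from everything visited score high.
    dists = sorted(((behavior[0] - b[0]) ** 2 + (behavior[1] - b[1]) ** 2) ** 0.5
                   for b in archive)
    k = min(k, len(dists))
    return sum(dists[:k]) / k if k else float("inf")

# Behaviors are final (x, y) maze positions; many past agents sit at one dead end.
archive = [(1.0, 1.0), (1.1, 0.9), (1.0, 1.1)]
crowded = novelty((1.0, 1.0), archive)   # near the dead end: low novelty
frontier = novelty((5.0, 5.0), archive)  # far from visited points: high novelty
```

Maximizing this score instead of a distance-to-goal fitness is precisely what pushes future individuals away from the over-visited dead end.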
Although novelty search excels at producing diverse solutions, one issue that keeps it short of true open-endedness is the finite search space it usually operates in [9]. If run for long enough, novelty search could exhaust every interesting behavior in the space of possible behaviors, making future behaviors uninteresting or useless. However, two recent algorithms have addressed this issue by complexifying problems along with their solutions: MCC (Minimal Criterion Coevolution) and POET (Paired Open-Ended Trailblazer). MCC was applied to a maze domain, in which it successfully evolved solutions to different mazes and then complexified those mazes to provide new challenges to the maze-solving agents [2]. POET, on the other hand, was tested in a bipedal walking domain, which contains a controllable locomotive robot that must overcome various environmental obstacles [10]. Similar to MCC, POET was able to evolve the robot to solve the environmental challenges it faced and then increase the complexity of those challenges so additional solutions could be found [10]. Since both algorithms complexify their problems along with their solutions, they could, with sufficient computational resources and enough time, endlessly produce highly complex environments with solutions for each one.
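The paired problem-and-solution dynamic can be caricatured in a few lines. In this sketch (my own simplification, not the published algorithm), an environment is just a difficulty number and an agent is a capability number, whereas real POET evolves terrain parameters and neural-network walker controllers.

```python
import random

def poet_sketch(steps=500, max_pairs=8):
    # Each pair couples an environment (a difficulty threshold) with an agent
    # (its accumulated capability). Both are plain numbers in this toy version.
    pairs = [{"env": 1.0, "agent": 0.0}]
    for _ in range(steps):
        for p in list(pairs):
            p["agent"] += random.uniform(0.0, 0.2)  # optimize the agent in its environment
            if p["agent"] >= p["env"]:              # environment solved, so...
                p["env"] *= 1.5                     # ...complexify the problem itself,
                if len(pairs) < max_pairs:          # and branch off a new pair so
                    pairs.append(dict(p))           # several challenges coevolve at once
    return pairs

pairs = poet_sketch()
```

Because solving an environment immediately spawns a harder one, the loop never runs out of meaningful work; that is the property that distinguishes MCC and POET from novelty search over a fixed behavior space.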
Overall, neuroevolution is a promising field that has produced, and likely will continue to produce, compelling innovations. The field has been progressing steadily with algorithms like NEAT, novelty search, MCC, POET, and many more. As a whole, neuroevolution is a much more natural approach to machine learning since it is based on biological evolution, a process that generated the wide range of intelligent creatures we see today. In the future, it is my hope that the field will progress beyond just computer science and become applicable in more abstract and creative areas as proof of its exceptional capabilities.
References
[1] Bhattacharyya, S., Maulik, U., Dutta, P., Samanta, S., Choudhury, A., Dey, N., … Balas, V. E. (2017). Quantum-inspired evolutionary algorithm for scaling factor optimization during manifold medical information embedding. In Quantum inspired computational intelligence: Research and applications (pp. 285–326). Morgan Kaufmann.
[2] Brant, J. C., & Stanley, K. O. (2017). Minimal criterion coevolution. Proceedings of the Genetic and Evolutionary Computation Conference. https://doi.org/10.1145/3071178.3071186
[3] Hausknecht, M., Lehman, J., Miikkulainen, R., & Stone, P. (2014). A Neuroevolution Approach to General Atari Game Playing. IEEE Transactions on Computational Intelligence and AI in Games, 6(4), 355–366. https://doi.org/10.1109/tciaig.2013.2294713
[4] Lehman, J., & Stanley, K. O. (2011). Abandoning Objectives: Evolution Through the Search for Novelty Alone. Evolutionary Computation, 19(2), 189–223. https://doi.org/10.1162/evco_a_00025
[5] Stanley, K. O. (2007). Compositional pattern producing networks: A novel abstraction of development. Genetic Programming and Evolvable Machines, 8(2), 131–162. https://doi.org/10.1007/s10710-007-9028-8
[6] Stanley, K. O., & Miikkulainen, R. (2002). Evolving Neural Networks through Augmenting Topologies. Evolutionary Computation, 10(2), 99–127. https://doi.org/10.1162/106365602320169811
[7] Stanley, K. O., Clune, J., Lehman, J., & Miikkulainen, R. (2019). Designing neural networks through neuroevolution. Nature Machine Intelligence, 1(1), 24–35. https://doi.org/10.1038/s42256-018-0006-z
[8] Stanley, K. O., D'ambrosio, D. B., & Gauci, J. (2009). A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks. Artificial Life, 15(2), 185–212. https://doi.org/10.1162/artl.2009.15.2.15202
[9] Stanley, K. O., Lehman, J., & Soros, L. (2017, December 19). Open-endedness: The last grand challenge you've never heard of. O'Reilly Media. https://www.oreilly.com/radar/open-endedness-the-last-grand-challenge-youve-never-heard-of/.
[10] Wang, R., Lehman, J., Clune, J., & Stanley, K. O. (2019, February 21). Paired Open-Ended Trailblazer (POET): Endlessly Generating ... arXiv. https://arxiv.org/pdf/1901.01753.pdf.