Evolving AIs using a NEAT algorithm

Robert MacWha
Mar 4, 2021

An Introduction to Neuroevolution

Recently I’ve been really interested in getting my computer to play games by itself. Call me lazy, but with so much to do every day it’s nice to automate some of the less productive tasks. I did some reading and came across a technique called neuroevolution: a method that uses the principles of natural evolution to build both the structure and the weights of a neural network.

During my scouring of the internet, I found a paper called Evolving Neural Networks through Augmenting Topologies that discusses an algorithm commonly known as NEAT. It’s focused on evolving dense networks of variable structure through a process similar to natural selection.

I realize this might sound unnecessary: why can’t you just use a normal neural network with a fixed structure? Well, once you look at the reasoning behind this decision it starts to make a lot of sense. If the structure of your network is generated alongside its other parameters, the search can find the structure best suited to the problem instead of relying on you to guess it. Still not convinced? Well, let me fix that.


What is NEAT?

NEAT (NeuroEvolution of Augmenting Topologies) is an approach to machine learning that works similarly to evolution. In its simplest form, NEAT is a method for generating networks that can accomplish a specific task, such as balancing a pole or controlling a robot. Importantly, NEAT networks learn from a reward function rather than back-propagation, meaning that you don’t need a giant dataset to train them.

Encoding Networks

The most important characteristic of NEAT networks is that they aren’t static. This is the reason they’re able to find such effective solutions: they are, in essence, able to take on any architecture. However, having your network be an arbitrary web of nodes does make it a lot harder to represent in code, because every single connection has to be recorded.

So, how do we store this information? The original paper encodes each network as a genome made up of two lists: one of node genes and one of connection genes.

The input and output nodes of a given network are static: we won’t add any more and we won’t remove any. All of the other nodes are stored as additions to that base list. As for connections between nodes, they’re stored in a separate list along with some vital information: which nodes they connect, their weight, whether they’re enabled, and their innovation number.

Take a good, long look at that innovation number, because it’s one of the most important parts of a connection. An innovation number is a unique identifier assigned to each new connection, and it lets us compare networks to see how similar they are. If connections in two distinct networks share the same innovation number, then we can guarantee that they represent the same structural feature.
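To make the encoding concrete, here’s a minimal sketch of those two lists in Python. The class and field names are my own invention, not from any particular NEAT library:

```python
from dataclasses import dataclass, field

@dataclass
class ConnectionGene:
    in_node: int      # id of the node this connection starts from
    out_node: int     # id of the node this connection leads into
    weight: float     # strength of the connection
    enabled: bool     # disabled genes stay in the genome but are ignored
    innovation: int   # unique id shared by structurally identical connections

@dataclass
class Genome:
    num_inputs: int
    num_outputs: int
    # Input/output nodes are implicit (ids 0 .. num_inputs+num_outputs-1);
    # only the hidden nodes added by mutation are stored explicitly.
    hidden_nodes: list = field(default_factory=list)
    connections: list = field(default_factory=list)
```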

Mutations

Since the networks start off completely empty, we need some way for them to change. In normal machine learning this would be done with back-propagation, slowly updating the weights and biases of individual neurons. The NEAT algorithm handles things a little differently, using a process very similar to natural evolution: random mutations. These mutations come in the following three forms:

  1. Add a connection between two unconnected nodes.
  2. Add a node between two nodes that are already connected, disabling the old connection & creating two new ones to preserve the structure of the network.
  3. Change the weight of a connection.

Keep in mind that since every mutation is completely random, it’s very rare for one to actually help a network. Because of this, we want to minimize the immediate impact of mutations #1 and #2, the ones that change the structure of a model:

  1. When adding a connection, we can initialize its weight near 0, meaning that it will transfer very little information.
  2. When adding a node, we can set the weight of the connection leading into the new node to 1 and give the connection leading out of it the old connection’s weight. This means that the new node changes the structure but leaves the output virtually untouched. Both tricks appear in the sketch below.
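Here’s how those three mutations might look in code, continuing the hypothetical Genome sketch from above. One simplification to note: a full implementation reuses innovation numbers when the same structural mutation occurs twice in a generation, while this sketch just hands out fresh ones.

```python
import random

def mutate_add_connection(genome, innovation):
    """Mutation #1: connect two previously unconnected nodes."""
    nodes = list(range(genome.num_inputs + genome.num_outputs)) + genome.hidden_nodes
    src, dst = random.sample(nodes, 2)
    # (A full implementation would also rule out cycles and connections into inputs.)
    if any(c.in_node == src and c.out_node == dst for c in genome.connections):
        return innovation  # already connected, skip this mutation
    # A near-zero weight keeps the new connection from disrupting the output.
    genome.connections.append(
        ConnectionGene(src, dst, random.gauss(0, 0.1), True, innovation))
    return innovation + 1

def mutate_add_node(genome, innovation):
    """Mutation #2: split an existing connection with a new hidden node."""
    enabled = [c for c in genome.connections if c.enabled]
    if not enabled:
        return innovation
    old = random.choice(enabled)
    old.enabled = False
    new_node = genome.num_inputs + genome.num_outputs + len(genome.hidden_nodes)
    genome.hidden_nodes.append(new_node)
    # In-connection gets weight 1, out-connection inherits the old weight,
    # so the mutated network computes (almost) the same function as before.
    genome.connections.append(
        ConnectionGene(old.in_node, new_node, 1.0, True, innovation))
    genome.connections.append(
        ConnectionGene(new_node, old.out_node, old.weight, True, innovation + 1))
    return innovation + 2

def mutate_weights(genome, chance=0.8, scale=0.1):
    """Mutation #3: jiggle existing connection weights."""
    for conn in genome.connections:
        if random.random() < chance:
            conn.weight += random.gauss(0, scale)
```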

Crossovers

One big problem with NEAT is the question of how to combine two disparate networks. If they have the same structure then it’s easy — just randomly give the resulting network connections from either parent. The problem arises when they aren’t identical.

Remember the innovation number stored within each connection? Now I get to tell you why it’s so important. Those innovation numbers let us align networks on top of each other and figure out what connections play the same role. It works like this:

  1. First of all, we need to select a dominant parent. This is the parent that will specify the structure of the child network.
  2. Secondly, we find all of the connections shared by both parents. You’d think this would be difficult, but since we have innovation numbers we can just take each connection whose innovation number appears in both parent networks.
  3. For any connection shared by both parents, we randomly give the child one of the two versions. This ensures that the weights of shared connections in the child network come randomly from either parent.
  4. Finally, for any connections not shared by both networks, the child simply inherits them from the dominant parent.

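Here’s a minimal sketch of that procedure, again using the hypothetical classes from earlier:

```python
import copy
import random

def crossover(dominant, other):
    """Combine two parent genomes; `dominant` is the fitter parent."""
    other_genes = {c.innovation: c for c in other.connections}

    child = Genome(dominant.num_inputs, dominant.num_outputs,
                   hidden_nodes=list(dominant.hidden_nodes))
    for conn in dominant.connections:
        match = other_genes.get(conn.innovation)
        # Shared gene: inherit from either parent at random.
        # Unshared gene: inherit from the dominant parent.
        source = random.choice([conn, match]) if match else conn
        child.connections.append(copy.copy(source))  # copy, don't share genes
    return child
```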

All in all, this process tends to produce functional child networks that share characteristics from both parents.

Speciation

Speciation is probably one of the strangest concepts in the NEAT algorithm. Basically, it goes like this: since random mutations generally make a network worse before they make it better, freshly mutated networks tend to be culled before their changes can pay off. At the same time, we really want the networks to mutate, since that’s the only way they can learn. Seems like a paradox, doesn’t it? They can’t learn if they don’t mutate, but when they mutate they do worse.

Well, speciation comes to the rescue. Basically what it says is that a network isn’t judged just by its own performance, but rather by the performance of all similar networks. Think of it as a team sport. Some days you’ll be playing really well and carrying your team. Some days you’ll be off and struggling to even score a single point. It doesn’t make sense to judge you on your worst days, so instead, we’ll just score the entire team.

Speciation splits up a population into several species, or teams, based on how similar the networks are to each other. How do we calculate that similarity? Well, innovation numbers are here to help again! We can count how many connections don’t share innovation numbers between two networks, normalize by genome size, and add in the average weight difference of the connections that do match. Networks that land within some distance threshold of each other belong to the same species. Then, when we’re assigning a score to a network, we share it across the entire species rather than crediting the individual alone. This helps networks with new mutations receive acceptable scores, so that they can survive long enough to reproduce and slowly refine those mutations.
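The paper’s full compatibility measure uses three tunable coefficients and treats “disjoint” and “excess” genes separately; the sketch below lumps the structural mismatches together to keep things short, and the coefficient values are arbitrary placeholders:

```python
def compatibility(a, b, c_structure=1.0, c_weight=0.4):
    """Rough distance between two genomes, in the spirit of the paper's measure."""
    genes_a = {c.innovation: c for c in a.connections}
    genes_b = {c.innovation: c for c in b.connections}
    matching = genes_a.keys() & genes_b.keys()
    mismatched = len(genes_a.keys() ^ genes_b.keys())  # disjoint + excess genes
    size = max(len(genes_a), len(genes_b), 1)
    weight_diff = sum(abs(genes_a[i].weight - genes_b[i].weight)
                      for i in matching) / max(len(matching), 1)
    return c_structure * mismatched / size + c_weight * weight_diff

def adjusted_fitness(raw_fitness, species_size):
    # Fitness sharing: a network's score is split across its whole species,
    # so a big species doesn't crowd out small, experimental ones.
    return raw_fitness / species_size
```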

Bringing it all together

So, now we have a network that’s able to mutate, combine with others, and spit out predictions. How do we turn that into a functional robot AI? With a cheeky bit of simulation, that’s how.

The parallels between NEAT and evolution don’t stop with their dynamic structure — the very way that we train these networks is also very similar to natural selection.

  1. Firstly we need a blank population of networks. Each of these networks will initially only have the input and output nodes — no hidden nodes or connections.
  2. We then randomly mutate this population. Add a connection here, mess with the weights there, so on and so forth.
  3. Now it’s time to evaluate each of the networks. Simulate your environment and test each of the networks individually, recording how well each one does.
  4. Time to start evolving! Throw out half of your population, namely the ones that didn’t do so well in the evaluation, and generate a new population from the well-performing half using the speciation, crossover, and mutation techniques outlined above.

Then just repeat the mutation, evaluation, and evolution steps until you’re satisfied with the performance! Over time the networks will learn to score better and better on your evaluation, by proxy learning to do exactly what you want. The sketch below pulls the whole loop together.
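Here’s what that loop might look like, reusing the hypothetical helpers from the earlier sketches. It skips speciation for brevity, and the input/output counts and the evaluate function are placeholders for whatever your environment needs:

```python
import random

NUM_INPUTS, NUM_OUTPUTS = 4, 2   # placeholder sizes for your environment

def evolve(population_size, generations, evaluate):
    """Run the NEAT loop; `evaluate` maps a genome to a fitness score."""
    # 1. A blank population: input and output nodes only, no connections.
    population = [Genome(NUM_INPUTS, NUM_OUTPUTS) for _ in range(population_size)]
    innovation = 0

    for _ in range(generations):
        # 2. Mutate everyone a little.
        for genome in population:
            innovation = mutate_add_connection(genome, innovation)
            mutate_weights(genome)

        # 3. Evaluate each network in the simulated environment.
        ranked = sorted(population, key=evaluate, reverse=True)

        # 4. Keep the better half and refill the population with children.
        survivors = ranked[: population_size // 2]
        population = list(survivors)
        while len(population) < population_size:
            i, j = sorted(random.sample(range(len(survivors)), 2))
            child = crossover(survivors[i], survivors[j])  # fitter parent is dominant
            innovation = mutate_add_node(child, innovation)
            population.append(child)

    return population
```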

Conclusion

So, now you should be familiar with the basics of neuroevolution. Feel free to look at some of the systems that go beyond NEAT, such as HyperNEAT, ES-HyperNEAT, or CoDeepNEAT. All of them expand on the basics of NEAT that I outlined in this article.

I’ve created a few projects using the NEAT algorithm, so if you’re having trouble implementing it on your own or just want to chat feel free to reach out to me over LinkedIn. I’d love to help you create your own project using this fantastic algorithm.

TL;DR

  • NEAT (NeuroEvolution of Augmenting Topologies) is a way of creating neural networks with dynamic structures.
  • This lets a network evolve a structure suited to a given problem, making it faster and more effective.
  • NEAT works by a process similar to evolution, using populations, mutations & crossovers.

Thanks for reading my article! Feel free to message me on LinkedIn if you have anything to say, or follow me on Medium to get notified when I post another article!
