Evolving AIs using a NEAT algorithm


An Introduction to Neuroevolution

While scouring the internet, I found a paper called Evolving Neural Networks through Augmenting Topologies (by Kenneth Stanley and Risto Miikkulainen) that describes an algorithm commonly known as NEAT. It focuses on evolving neural networks of variable structure through a process similar to natural selection.

I realize that this might sound unnecessary: why can’t you just use a normal neural network with a fixed structure? Well, once you look at the reasoning behind this decision, it starts to make a lot of sense. If the structure of your network is generated alongside its other parameters, you’re far more likely to end up with a structure that actually suits the problem instead of one you guessed at. Still not convinced? Well, let me fix that.


What is NEAT?

Encoding Networks

So, how do we store this information? The original paper has some figures that illustrate the encoding nicely, and it boils down to two lists: one for nodes and one for connections.

The input and output nodes of a given network are static: we won’t add any more and we won’t remove any. All of the other (hidden) nodes are simply stored as additions to that base list. As for connections between nodes, they’re stored in a separate list along with some vital information: which nodes they connect, their weight, whether they’re enabled, and their innovation number.

Take a good, long look at that innovation number, because it’s probably one of the most important parts of a connection. An innovation number is a historical marker assigned to a connection when it first appears, and it lets us compare networks to see how similar they are. If connections in two distinct networks share the same innovation number, we can guarantee that those connections are structurally identical.
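To make that concrete, here’s a minimal sketch of how such a genome could be stored. The class and field names are my own (the paper only describes the encoding in prose and figures), but the fields match the ones listed above.

```python
from dataclasses import dataclass, field

@dataclass
class ConnectionGene:
    in_node: int      # id of the node this connection starts from
    out_node: int     # id of the node this connection feeds into
    weight: float     # the connection's weight
    enabled: bool     # disabled connections stay in the genome but aren't used
    innovation: int   # historical marker used to align and compare genomes

@dataclass
class Genome:
    num_inputs: int       # input nodes are fixed: ids 0 .. num_inputs-1
    num_outputs: int      # output nodes are fixed too
    hidden_nodes: list = field(default_factory=list)   # ids of hidden nodes added later
    connections: list = field(default_factory=list)    # list of ConnectionGene
```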

Mutating Networks

NEAT can mutate a network in one of three ways:
  1. Add a connection between two unconnected nodes.
  2. Add a node between two nodes that are already connected, disabling the old connection & creating two new ones to preserve the structure of the network.
  3. Change the weight of a connection.

Keep in mind that since every mutation is completely random, it’s very rare for one to actually help a network. Because of this, we want to minimize the impact of the mutations that change the structure of a model, #1 and #2. There are two tricks for that, and a small code sketch of all three mutations follows this list:

  1. When adding a connection, we can initialize its weight to be close to 0, meaning that it will transfer very little information at first.
  2. When adding a node, we can set the weight of the connection leading into the new node to 1 and the weight of the connection leading out of it to the weight of the old connection. This means that the new node will barely change the output, only the structure.
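Here’s how those three mutations might look in code, building on the hypothetical Genome and ConnectionGene classes sketched earlier and using the two tricks above. A real implementation would also check connection direction, avoid cycles in feed-forward networks, and reuse innovation numbers for identical structural mutations within a generation; this is just a sketch of the idea.

```python
import random

def mutate_weight(genome, scale=0.1):
    """Mutation #3: nudge the weight of a random connection."""
    if genome.connections:
        conn = random.choice(genome.connections)
        conn.weight += random.gauss(0.0, scale)

def mutate_add_connection(genome, innovation_counter):
    """Mutation #1: connect two previously unconnected nodes.
    The new weight starts near 0 so the network's behaviour barely changes."""
    node_ids = list(range(genome.num_inputs + genome.num_outputs)) + genome.hidden_nodes
    a, b = random.sample(node_ids, 2)
    if any(c.in_node == a and c.out_node == b for c in genome.connections):
        return  # those nodes are already connected
    genome.connections.append(
        ConnectionGene(a, b, random.gauss(0.0, 0.01), True, next(innovation_counter)))

def mutate_add_node(genome, innovation_counter, new_node_id):
    """Mutation #2: split an existing connection with a new node.
    The incoming connection gets weight 1 and the outgoing one inherits the
    old weight, so the network's output is (roughly) preserved."""
    enabled = [c for c in genome.connections if c.enabled]
    if not enabled:
        return
    old = random.choice(enabled)
    old.enabled = False
    genome.hidden_nodes.append(new_node_id)
    genome.connections.append(
        ConnectionGene(old.in_node, new_node_id, 1.0, True, next(innovation_counter)))
    genome.connections.append(
        ConnectionGene(new_node_id, old.out_node, old.weight, True, next(innovation_counter)))
```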

Crossing Over Networks
Remember the innovation number stored within each connection? Now I get to tell you why it’s so important. Those innovation numbers let us align networks on top of each other and figure out what connections play the same role. It works like this:

  1. First of all, we need to select a dominant parent. This is the parent that will specify the structure of the child network.
  2. Secondly, we find all of the connections shared by both parents. You’d think that this would be difficult, but since we have innovation numbers we can just take each connection whose innovation number appears in both parent networks.
  3. For every connection shared by both parents, we randomly give the child one parent’s version of it. This ensures that the weights of shared connections in the child network come randomly from either parent.
  4. Finally, for any connections not shared by both networks, the child simply inherits them from the dominant parent.

The original paper includes a diagram of this alignment that’s worth a look if you want to visualize the process.

All in all, this process tends to produce functional child networks that share characteristics from both parents.
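As a rough sketch (again using the hypothetical classes from the earlier snippets, not code from the paper), the whole crossover step could look something like this:

```python
import random

def crossover(dominant, other, rng=random):
    """Build a child with the dominant parent's structure; shared genes
    (matched by innovation number) take their values from either parent."""
    other_by_innovation = {c.innovation: c for c in other.connections}
    child = Genome(dominant.num_inputs, dominant.num_outputs,
                   hidden_nodes=list(dominant.hidden_nodes))
    for conn in dominant.connections:
        match = other_by_innovation.get(conn.innovation)
        # Shared gene: pick either parent's copy. Unshared: keep the dominant parent's.
        source = rng.choice([conn, match]) if match else conn
        child.connections.append(ConnectionGene(
            source.in_node, source.out_node, source.weight,
            source.enabled, source.innovation))
    return child
```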

Speciation

There’s still one problem: a network that just received a structural mutation will usually perform worse than its peers at first, so if we judged it purely on its own score it would be thrown out before its new structure ever had a chance to be optimized.
Well, speciation comes to the rescue. Basically, a network isn’t judged solely by its own performance, but rather by the performance of all similar networks. Think of it as a team sport. Some days you’ll be playing really well and carrying your team. Some days you’ll be off and struggling to even get a single point. It doesn’t make sense to score you on your worst days, so instead, we’ll just score the entire team.

Speciation splits a population into several species, or teams, based on how similar the networks are to each other. How do we calculate that similarity? Well, innovation numbers are here to help again! We can count how many connections share an innovation number between the two networks and divide that by how many connections don’t. Then, when we’re assigning a score to a network, we consider the score of the entire species, not just the individual network. This lets networks with new mutations receive acceptable scores, so they can survive, reproduce, and slowly optimize their new structure.
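Here’s a small sketch of that idea, continuing with the hypothetical classes from before. Note that the original paper actually uses a weighted compatibility distance (excess genes, disjoint genes, and average weight differences) rather than this simple shared/unshared ratio, and it divides each genome’s fitness by its species’ size; the version below follows the simpler description above.

```python
def similarity(a, b):
    """Shared connections divided by unshared ones, matched by innovation number."""
    innovations_a = {c.innovation for c in a.connections}
    innovations_b = {c.innovation for c in b.connections}
    shared = len(innovations_a & innovations_b)
    unshared = len(innovations_a ^ innovations_b)
    return shared / max(unshared, 1)  # avoid dividing by zero

def assign_species(population, threshold=1.0):
    """Greedily group genomes: join the first species whose representative
    is similar enough, otherwise start a new species."""
    species = []  # each species is a list of genomes; the first member is its representative
    for genome in population:
        for members in species:
            if similarity(genome, members[0]) >= threshold:
                members.append(genome)
                break
        else:
            species.append([genome])
    return species

def shared_fitness(species, fitness_fn):
    """Score every genome with its species' average raw fitness, so that new,
    not-yet-optimized structures aren't wiped out immediately."""
    scored = []
    for members in species:
        average = sum(fitness_fn(g) for g in members) / len(members)
        scored.extend((g, average) for g in members)
    return scored
```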

Bringing it all together

The parallels between NEAT and evolution don’t stop with their dynamic structure — the very way that we train these networks is also very similar to natural selection.

  1. Firstly we need a blank population of networks. Each of these networks will initially only have the input and output nodes — no hidden nodes or connections.
  2. We then randomly mutate this population. Add a connection here, mess with the weights there, so on and so forth.
  3. Now it’s time to evaluate each of the networks. Simulate your environment and test each of the networks individually, recording how well each one does.
  4. Time to start evolving! Throw out half of your population, namely the ones that didn’t do so well in the evaluation, and generate a new population from the well-performing half using the speciation, crossover, and mutation techniques outlined above.

Then just repeat the mutation, evaluation, and evolution steps until you’re satisfied with the results! Over time the networks will learn to score better and better on your evaluation, and by proxy they’ll learn to do exactly what you want.
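Putting the pieces together, the outer loop might look something like the sketch below. It reuses the hypothetical helpers from the earlier snippets, and fitness_fn is assumed to be a function you supply that simulates your environment with a given genome and returns a score (actually building and running a network from a genome is left out here).

```python
import itertools
import random

def evolve(fitness_fn, num_inputs, num_outputs,
           population_size=100, generations=50):
    """Sketch of the outer NEAT loop using the helpers from the earlier snippets."""
    innovations = itertools.count()                       # global innovation counter
    node_ids = itertools.count(num_inputs + num_outputs)  # ids for new hidden nodes

    # 1. A blank population: input and output nodes only, no connections.
    population = [Genome(num_inputs, num_outputs) for _ in range(population_size)]

    for _ in range(generations):
        # 2. Randomly mutate the population.
        for genome in population:
            mutate_weight(genome)
            if random.random() < 0.05:
                mutate_add_connection(genome, innovations)
            if random.random() < 0.03:
                mutate_add_node(genome, innovations, next(node_ids))

        # 3. Evaluate everyone, sharing fitness within each species.
        species = assign_species(population)
        scored = shared_fitness(species, fitness_fn)

        # 4. Keep the better half and refill the population by crossover.
        scored.sort(key=lambda pair: pair[1], reverse=True)
        survivors = [genome for genome, _ in scored[: population_size // 2]]
        children = [crossover(*random.sample(survivors, 2))
                    for _ in range(population_size - len(survivors))]
        population = survivors + children

    return max(population, key=fitness_fn)
```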


I’ve created a few projects using the NEAT algorithm, so if you’re having trouble implementing it on your own or just want to chat, feel free to reach out to me over LinkedIn. I’d love to help you create your own project using this fantastic algorithm.

To sum it all up:
  • NEAT evolves the structure of a network alongside its weights, letting you find a structure that actually suits the problem and producing networks that are smaller, faster, and more effective.
  • NEAT trains networks through a process similar to evolution, using populations, mutations, crossover, and speciation.

Thanks for reading my article! Feel free to message me on LinkedIn if you have anything to say, or follow me on Medium to get notified when I post another article!

