Any sufficiently advanced technology is indistinguishable from magic - Clarke's Third Law

[Interactive demo: live counters for Generation, Highscore, and Agents (alive)]
Explanation

The red agents are each connected to a neural network that makes 'guesses' (linear algebra probability calculations) about where the next gap in the obstacle will be. Watch as they 'learn' from the mistakes of previous generations. On average it takes 20-50 generations for the network to master the prediction. Every once in a while the algorithm gets lucky and masters the prediction in the first generation.

The neural network gets better at these calculations because each new generation is populated by clones of the two agents that performed best in the previous round. The 'guesses' the neural networks make are an array of probability scores between 0 and 1. The sigmoid activation function produces this list of predictions of where on the X axis the next gap will appear, so the result is something like: [ .9090897, .808239239, .09723834, .99982362389 ... ] (one probability for each pixel on the X axis). The closer a probability is to 1, the more likely the neural network thinks the gap will appear at that point on the X axis. The agent then moves toward the pixel position with the highest calculated probability that the next gap will appear, simulating 'intelligence' in the process.
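A minimal sketch of that decision step (the names here are hypothetical illustrations, not the demo's actual code; it assumes one raw network output per pixel on the X axis):

```typescript
// Sigmoid squashes a raw network output into a probability between 0 and 1.
function sigmoid(x: number): number {
  return 1 / (1 + Math.exp(-x));
}

// Given one raw output per pixel on the X axis, return the pixel index
// the network considers most likely to contain the next gap.
function predictGapX(rawOutputs: number[]): number {
  const probs = rawOutputs.map(sigmoid); // e.g. [0.9090897, 0.8082392, ...]
  let best = 0;
  for (let i = 1; i < probs.length; i++) {
    if (probs[i] > probs[best]) best = i;
  }
  return best;
}

// The agent then steps toward that pixel position.
function moveAgent(agentX: number, targetX: number, speed: number): number {
  if (agentX < targetX) return Math.min(agentX + speed, targetX);
  if (agentX > targetX) return Math.max(agentX - speed, targetX);
  return agentX;
}
```

The 'intelligence' is nothing more than an argmax over a list of numbers, followed by a step in that direction.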

I think it is this space, somewhere in between binary, the domain of human intelligence, that machines are beginning to enter. It is in this space that technology appears to rise out of its binary limitations and becomes god-like in our imagining. Machines are reliably faster (not necessarily better) at making these grey-area predictions about our physical world, so we stand in awe. In the end we are fooled: we have projected intelligence onto a deterministic process.

What we don't see underneath the 'intelligence' of the algorithm is the many millions of wrong guesses the neural net needs to make in order to arrive at a more accurate result. There is a massive amount of wrong guessing and random error, filtered out by a very effective feedback loop that helps the algorithm correct its course on the next prediction.
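That feedback loop can be sketched as a simple select-clone-mutate step (a hedged illustration only; the mutation scheme and fitness measure here are assumptions, not taken from the demo):

```typescript
interface Agent {
  weights: number[]; // the neural network's parameters
  fitness: number;   // e.g. how long this agent survived
}

// Build the next generation from clones of the two best performers,
// with small random mutations so the population keeps exploring.
function nextGeneration(
  agents: Agent[],
  popSize: number,
  mutationRate = 0.1 // assumed value, purely for illustration
): Agent[] {
  const ranked = [...agents].sort((a, b) => b.fitness - a.fitness);
  const parents = ranked.slice(0, 2); // the two top scorers
  const children: Agent[] = [];
  for (let i = 0; i < popSize; i++) {
    const parent = parents[i % 2];
    children.push({
      weights: parent.weights.map(w =>
        Math.random() < mutationRate ? w + (Math.random() - 0.5) : w
      ),
      fitness: 0,
    });
  }
  return children;
}
```

Every wrong guess dies with its agent; only the weights that happened to guess well get copied forward.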

Perhaps technology (the internet specifically) is giving human beings a taste of that corrective feedback loop. Here I am thinking of BLM or the #METOO movement. These can be seen, in some way, as iterations of cultural predictions (convictions?) feeding back into the cultural system and moving us closer to the desired result. Not completely unlike the agents learning from the successes and failures of the other nodes in the network.