Neural networks are complex things. They can seemingly simulate aspects of the human mind, completing tasks once thought to be uniquely human and even outperforming people in those fields. They can be adapted to a seemingly infinite number of scenarios and can become so advanced that their inner workings are opaque even to their creators. Terms such as Q-learning, RNN, and CNN can be complicated and intimidating, but at the end of the day, the broad strokes of how an AI trains itself can usually be understood. In fact, a version of one of these methods is taught to elementary schoolers: the process of evolution.

Now, evolution may seem like a strange way for a program to learn something, but it appears in nature as well. A deer learns to walk not from a lesson by its parents but from the wiring of its brain, shaped by evolution and adaptation. Computers can use the basic principles of Darwinian evolution to solve a problem with a method very similar to guess-and-check. A form of this loop, which shows up throughout machine learning, can be separated into two steps: adjustment and grading. The running example in this article will be an AI that finds the equation of a line from a few given points, a task that is extremely easy for humans. To demonstrate, this AI will be written in JavaScript throughout the article, and to save time, it will be called Bob.
```js
const random = () => (Math.random() - 0.5) * 200; // Generates a value from -100 to 100
const m = random();
const b = random();
const equation = (x) => m * x + b; // The random target line
const samplePoints = new Array(5)
  .fill(0)
  .map((x, i) => [i - 2, equation(i - 2)]); // Generates points with x-values from -2 to 2
console.log(`Sample Equation: y=${m}x + ${b}`);
```

The first step in AI evolution is the creation of many AIs, and at first, these machines are practically useless. They take in some inputs, do completely random things with them, and give a practically useless output. In Bob's case, this means generating completely random values for m and b in the equation y=mx+b. There are differences between these AIs, however, and some do marginally better than others; the only way to tell which is to grade each one's performance.
```js
// Initialize a bunch of stupid AIs
let bots = new Array(25).fill(0).map(() => {
  const m = random();
  const b = random();
  return {
    m,
    b,
    run: (x) => m * x + b, // Returns f(x) = mx + b
  };
});
```

The other necessity for a functioning system is the ability to tell how well each machine does. For an evolutionary AI, this means rating each candidate with a reasonable level of precision. For Bob, we can provide some sample points and rate each bot by the average distance between its output for each sample x-value and the correct y-value, so a lower rating means a better bot.
```js
const rate = (ai) => {
  let sum = 0; // point[0] is x, point[1] is y
  samplePoints.forEach((point) => (sum += Math.abs(point[1] - ai.run(point[0]))));
  return sum / samplePoints.length; // Returns the average deviation from the correct value
}; // The rating is how far off the robot is from success
```

Now that we can rate how each AI is doing, we can rate all of them and find out which one did best. In Bob's case, we map each bot to its rating and find the lowest one.
```js
// Gets the bot that has the lowest (best) rating
const botRatings = bots.map((bot) => rate(bot));
const bestBotAccuracy = Math.min(...botRatings);
const bestBot = bots[botRatings.indexOf(bestBotAccuracy)];
```
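As a stylistic alternative, the same selection can be done in a single pass with reduce, which avoids the intermediate ratings array. The tiny rating function here is a stand-in for the real one, and the bot values are invented for illustration:

```javascript
// Stand-in rating: distance of m from a made-up target slope of 2
const rate = (bot) => Math.abs(bot.m - 2);

const bots = [{ m: 5 }, { m: 1.9 }, { m: -3 }];

// Single pass: keep whichever bot rates lower
const bestBot = bots.reduce((top, bot) => (rate(bot) < rate(top) ? bot : top));
console.log(bestBot.m); // 1.9
```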

Now, we can take the best bot and replace all the other bots with slightly modified copies of it, adding a random number between -0.5 and 0.5 multiplied by a modification factor. Because this method narrows in on a near-perfect bot, the amount each value gets modified by should also gradually shrink, and how fast it shrinks matters. For Bob, we can tie the modification factor to the best bot's rating: twice the rating for m and four times the rating for b. These are just values that worked well in testing; changing them mainly affects how close the AI gets to the target after a set number of cycles.
```js
bots = bots.map(() => {
  // Generates modified versions of the best bot
  const m = bestBot.m + (Math.random() - 0.5) * (bestBotAccuracy * 2);
  const b = bestBot.b + (Math.random() - 0.5) * (bestBotAccuracy * 4);
  return {
    m,
    b,
    run: (x) => m * x + b,
  };
});
bots[0] = bestBot; // Makes sure the program can't go backwards in its learning
```
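One way to convince yourself the step size really shrinks with the rating: every mutated m lands within ±(rating) of the parent, and every mutated b within ±(2 × rating), since (Math.random() - 0.5) is at most 0.5 in magnitude. This standalone check uses a made-up parent and rating:

```javascript
const bestBot = { m: 4, b: -3 }; // Made-up parent values
const bestBotAccuracy = 0.5; // Made-up rating

// Generate many offspring with the same mutation rule as above
const offspring = new Array(1000).fill(0).map(() => ({
  m: bestBot.m + (Math.random() - 0.5) * (bestBotAccuracy * 2),
  b: bestBot.b + (Math.random() - 0.5) * (bestBotAccuracy * 4),
}));

// Every offspring's m stays within ±0.5 of the parent, and b within ±1
const mOk = offspring.every((o) => Math.abs(o.m - bestBot.m) <= 0.5);
const bOk = offspring.every((o) => Math.abs(o.b - bestBot.b) <= 1);
console.log(mOk, bOk); // true true
```

A better-rated parent therefore produces a tighter cloud of offspring, which is what lets the population settle onto a near-perfect answer instead of jittering around it.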

With this much done, we can re-rate and regenerate bots until the desired result is achieved, either by hitting an accuracy threshold or after a set number of cycles. A fixed cycle count is more useful for tuning the bot's parameters, such as how the modification factor shrinks, while a threshold is more useful in actual use. Let's recap Bob's current progress:
```js
const random = () => (Math.random() - 0.5) * 200; // Generates a value from -100 to 100
const m = random();
const b = random();
const equation = (x) => m * x + b; // The random target line
const samplePoints = new Array(5)
  .fill(0)
  .map((x, i) => [i - 2, equation(i - 2)]); // Generates points with x-values from -2 to 2
console.log(`Sample Equation: y=${m}x + ${b}`);

// Initialize a bunch of stupid AIs
let bots = new Array(25).fill(0).map(() => {
  const m = random();
  const b = random();
  return {
    m,
    b,
    run: (x) => m * x + b, // Returns f(x) = mx + b
  };
});

const rate = (ai) => {
  let sum = 0; // point[0] is x, point[1] is y
  samplePoints.forEach((point) => (sum += Math.abs(point[1] - ai.run(point[0]))));
  return sum / samplePoints.length;
};

// Gets the bot that has the lowest rating
const botRatings = bots.map((bot) => rate(bot));
const bestBotAccuracy = Math.min(...botRatings);
const bestBot = bots[botRatings.indexOf(bestBotAccuracy)];

bots = bots.map(() => {
  // Generates modified versions of the best bot
  const m = bestBot.m + (Math.random() - 0.5) * (bestBotAccuracy * 2);
  const b = bestBot.b + (Math.random() - 0.5) * (bestBotAccuracy * 4);
  return {
    m,
    b,
    run: (x) => m * x + b,
  };
});
bots[0] = bestBot; // Makes sure the program can't go backwards in its learning
```
We can now wrap the regeneration of bots into a function named cycle, run it as many times as we want (e.g. 10 times), and log the final bot's equation and accuracy.
```js
const random = () => (Math.random() - 0.5) * 200; // Generates a value from -100 to 100
const m = random();
const b = random();
const equation = (x) => m * x + b; // The random target line
const samplePoints = new Array(5)
  .fill(0)
  .map((x, i) => [i - 2, equation(i - 2)]); // Generates points with x-values from -2 to 2
console.log(`Sample Equation: y=${m}x + ${b}`);

// Initialize a bunch of stupid AIs
let bots = new Array(25).fill(0).map(() => {
  const m = random();
  const b = random();
  return {
    m,
    b,
    run: (x) => m * x + b, // Returns f(x) = mx + b
  };
});

const rate = (ai) => {
  let sum = 0; // point[0] is x, point[1] is y
  samplePoints.forEach((point) => (sum += Math.abs(point[1] - ai.run(point[0]))));
  return sum / samplePoints.length;
};

function cycle() {
  // Gets the bot that has the lowest rating
  const botRatings = bots.map((bot) => rate(bot));
  const bestBotAccuracy = Math.min(...botRatings);
  const bestBot = bots[botRatings.indexOf(bestBotAccuracy)];
  console.log(`Current Accuracy: ${bestBotAccuracy}`);
  bots = bots.map(() => {
    // Generates modified versions of the best bot
    const m = bestBot.m + (Math.random() - 0.5) * (bestBotAccuracy * 2);
    const b = bestBot.b + (Math.random() - 0.5) * (bestBotAccuracy * 4);
    return {
      m,
      b,
      run: (x) => m * x + b,
    };
  });
  bots[0] = bestBot; // Makes sure the program can't go backwards in its learning
}

// Run 10 training cycles
for (let i = 0; i < 10; i++) {
  cycle();
}

const botRatings = bots.map((bot) => rate(bot));
const bestBotAccuracy = Math.min(...botRatings);
const bestBot = bots[botRatings.indexOf(bestBotAccuracy)];
console.log(`Equation: y=${bestBot.m}x + ${bestBot.b}`);
console.log(`Accuracy: ${bestBotAccuracy}`);
```
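The threshold-based stopping mentioned earlier can be sketched as a standalone loop that condenses the same steps; the target line, the 0.001 threshold, and the 10,000-cycle safety cap below are all arbitrary choices for illustration:

```javascript
// Self-contained sketch of threshold-based stopping
const target = (x) => 3 * x - 7; // Made-up target line
const samplePoints = [-2, -1, 0, 1, 2].map((x) => [x, target(x)]);
const rate = (bot) =>
  samplePoints.reduce((sum, [x, y]) => sum + Math.abs(y - bot.run(x)), 0) /
  samplePoints.length;

const makeBot = (m, b) => ({ m, b, run: (x) => m * x + b });
let bots = new Array(25).fill(0).map(() =>
  makeBot((Math.random() - 0.5) * 200, (Math.random() - 0.5) * 200));

let best = bots[0];
let cycles = 0;
// Keep evolving until the rating passes the threshold (with a safety cap)
while (rate(best) > 0.001 && cycles < 10000) {
  best = bots.reduce((top, bot) => (rate(bot) < rate(top) ? bot : top));
  const acc = rate(best);
  bots = bots.map(() =>
    makeBot(
      best.m + (Math.random() - 0.5) * (acc * 2),
      best.b + (Math.random() - 0.5) * (acc * 4),
    ));
  bots[0] = best; // Keep the best bot so learning never goes backwards
  cycles++;
}
console.log(`Converged after ${cycles} cycles: y=${best.m}x + ${best.b}`);
```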
Over the course of 10 cycles, it becomes clear how well this AI works, with the rating consistently ending up under 0.001. Because of the nature of a line, giving only two sample points would make the result far less reliable: many lines can get close to both points as long as they pass between them. One solution is to spread the sample x-values further apart. This is also an example of how giving an AI more data can make it perform far better. The speed of the AI can also be changed by adjusting the amounts m and b get modified by (the *2 and *4), so an even greater challenge would be building an AI that trains these AIs the fastest.
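The effect of spreading the sample x-values can be shown directly: a slightly-wrong candidate line racks up a much larger average penalty on spread-out points than on clustered ones, so the rating separates good candidates from bad ones more sharply. Both lines below are invented for the demonstration:

```javascript
// Target line and a close-but-wrong candidate
const target = (x) => 2 * x + 1;
const candidate = { run: (x) => 2.1 * x + 0.9 };

// Average rating of the candidate over a given set of x-values
const rateOn = (xs) =>
  xs.reduce((sum, x) => sum + Math.abs(target(x) - candidate.run(x)), 0) /
  xs.length;

console.log(rateOn([-2, -1, 0, 1, 2])); // Clustered points: small penalty
console.log(rateOn([-20, -10, 0, 10, 20])); // Spread-out points: much larger penalty
```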