Artificial Neural Networks (ANNs) are inspired by how our brains function. Just as our brains contain roughly 86 billion neurons that connect and send signals to each other, ANNs are built from artificial “neurons” that connect to and signal one another.

Perceptrons

The development of ANNs really took off after advances in microscopy let scientists see individual neurons and the connections between them. Inspired by this picture of the brain, AI researchers in the 1950s (most famously Frank Rosenblatt) came up with a very simple type of ANN called the perceptron. The perceptron was designed to make basic decisions, similar to a drastically simplified version of how our brains process information.

The perceptron takes one or more inputs, usually just 0s and 1s, and produces an output of 0 or 1 as well. The way it decides what the output should be depends on the inputs and some special values called weights. A weight is simply the strength of the connection between neurons, similar to how the brain strengthens connections through learning. When an input is fed into the perceptron, it’s multiplied by its corresponding weight, and then all those products are summed up. After that, a bias is added (a bit like an extra nudge toward a particular output).

The perceptron then turns this total into a 0 or a 1 using something called an activation function. If the total value (inputs times weights, plus the bias) is greater than zero, the output is 1; if not, the output is 0. This is kind of like how a neuron in the brain “fires” when it reaches a certain threshold: if it gets enough input, it sends a signal; otherwise, it stays quiet.
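
To make this concrete, here’s a minimal sketch of the perceptron’s decision step in Python. The weights and bias below are illustrative assumptions, chosen so this particular perceptron behaves like a logical AND gate:

```python
def step_activation(total):
    # The threshold rule described above: "fire" (output 1) only if the
    # weighted sum plus bias is greater than zero.
    return 1 if total > 0 else 0

def perceptron(inputs, weights, bias):
    # Multiply each input by its weight, sum the products, add the bias,
    # then pass the total through the activation function.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return step_activation(total)

# Illustrative weights and bias that make this perceptron act like AND:
# the output is 1 only when both inputs are 1.
weights = [1.0, 1.0]
bias = -1.5
print(perceptron([1, 1], weights, bias))  # 1
print(perceptron([1, 0], weights, bias))  # 0
print(perceptron([0, 0], weights, bias))  # 0
```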

But the simple “yes or no” decision (0 or 1) isn’t always ideal, because a tiny change in the input can flip the output completely, which makes learning unstable. To remedy this, AI researchers introduced the sigmoid function, which gives a smooth output anywhere between 0 and 1 rather than flipping abruptly between exactly 0 and 1. This makes the whole learning process more gradual, like how biological neurons might respond more subtly to different levels of signals.
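
In math terms, the sigmoid is sigmoid(x) = 1 / (1 + e^(-x)). Here’s a small sketch showing how it softens the hard threshold (the sample totals are just illustrative):

```python
import math

def sigmoid(total):
    # Smooth S-shaped curve: very negative totals give values near 0,
    # very positive totals give values near 1, and the transition between
    # them is gradual instead of a sudden jump.
    return 1 / (1 + math.exp(-total))

print(sigmoid(-4))  # ~0.018 -- confidently "no", but not exactly 0
print(sigmoid(0))   # 0.5    -- right on the threshold, maximally uncertain
print(sigmoid(4))   # ~0.982 -- confidently "yes", but not exactly 1
```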

The goal of the perceptron is to learn the correct output for a given input, and it does that by adjusting its weights through training. If the output is correct, nothing needs to change; if not, the weights are nudged in the direction that reduces the error, again and again, until the perceptron gets it right. Over time, as these adjustments accumulate, it makes more accurate predictions for the inputs it receives, similar to how we learn to recognize patterns or solve problems through repetition and experience.
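
One classic way to do this tweaking is the perceptron learning rule, where each weight is adjusted by learning rate × error × input after every example. The sketch below assumes that rule, along with an illustrative learning rate and the AND-gate examples from earlier; it’s one simple way to train a perceptron, not the only one:

```python
def train(samples, learning_rate=0.1, epochs=20):
    # Start with all weights and the bias at zero.
    weights = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Forward pass: weighted sum plus bias, then the step activation.
            total = sum(x * w for x, w in zip(inputs, weights)) + bias
            output = 1 if total > 0 else 0
            # Nudge each weight in proportion to the error (target - output).
            # A correct output gives error 0, so nothing changes.
            error = target - output
            for i, x in enumerate(inputs):
                weights[i] += learning_rate * error * x
            bias += learning_rate * error
    return weights, bias

# Labeled examples for the AND function: (inputs, correct output).
and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(and_samples)
print(weights, bias)  # learned weights and bias that reproduce AND
```

After training, plugging these learned weights back into the decision step from earlier reproduces the AND behavior, because the rule keeps nudging the weights until every example comes out right.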