Artificial neural networks (ANNs) are computer programs that bring about revolutionary changes in how people see technology. In one of our previous articles, we’ve already answered some of the most frequently asked questions about artificial neural networks and explained them in simple terms. In this piece, we would like to dive deeper into the different types of artificial neural networks and help you get a better picture of how they impact modern technology.
In general, each type of neural network has its distinct strengths and use cases. We outlined some of them below.
1. Feedforward Neural Network
A feedforward neural network, or deep feedforward network, is one of the simplest types of artificial neural networks. In this ANN, data enters through input nodes (computational units also known as artificial neurons), passes through any intermediate layers, and arrives at the output nodes.
Simply put, information passes through in a single direction, from an entry point (input node) to an exit point (output node). A feedforward neural network differs from other, more complex ANN types in that it has no feedback connections, in which the output of a layer is fed back into that layer. Its connections never form a cycle or loop.
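The one-way flow described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a trained model: the layer sizes are arbitrary and the weights are random, so the output is meaningless until learning adjusts them.

```python
import numpy as np

def relu(x):
    # A common activation function: passes positives through, zeroes out negatives.
    return np.maximum(0, x)

# Hypothetical layer sizes: 3 inputs -> 4 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # input-to-hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 2))  # hidden-to-output weights
b2 = np.zeros(2)

def feedforward(x):
    # Information flows strictly forward: input -> hidden -> output.
    # No value is ever fed back to an earlier layer.
    hidden = relu(x @ W1 + b1)
    return hidden @ W2 + b2

output = feedforward(np.array([0.5, -1.0, 2.0]))
print(output.shape)  # (2,)
```

Note that the forward pass is a straight pipeline of matrix multiplications; there is no loop over time and no state carried between calls.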
2. Radial Basis Function Neural Network (RBFNN)
This type of ANN only has three layers—the input layer, the hidden layer, and the output layer. It is limited to a single hidden layer, unlike other ANN types. The hidden layer sits between the input and output layers and transforms the data using radial basis functions, which measure how far each input is from a set of learned center points. Compared with ANN types that can have several hidden layers, learning in an RBFNN is typically faster.
To further understand what the different ANN layers are for, imagine that you want to inform your computer that the picture it is shown depicts a car. For the computer to understand, it needs separate tools (or layers). Your car detector could thus have a wheel detector to tell that something has wheels, a vehicle body detector, and a size detector, each helping it differentiate a car from, say, a truck. These detectors are examples of what hidden layers compute in artificial neural networks: they do not represent the entire image, only parts of it.
Applications: RBFNNs can be used in complex power restoration systems. In case of a blackout, they can be used to restore electrical power to normal conditions with minimal losses and less societal impact. They can also be extensively applied for time-series prediction. An example would be in stock trading, where computers predict what stocks are likely to increase or decrease in value, allowing users to invest wisely.
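To make the RBFNN's three-layer structure concrete, here is a minimal forward pass in NumPy. The Gaussian basis function, the 1-D center positions, and the output weights are all hypothetical choices for illustration; in a real RBFNN the centers and weights are learned from data.

```python
import numpy as np

def rbf_forward(x, centers, gamma, weights):
    # Hidden layer: one Gaussian "bump" per center. Each unit's activation
    # depends only on the distance between the input and its center.
    dists = np.linalg.norm(centers - x, axis=1)
    hidden = np.exp(-gamma * dists**2)
    # Output layer: a simple weighted sum of the hidden activations.
    return hidden @ weights

# Hypothetical 1-D example: three centers spread across the input range.
centers = np.array([[0.0], [1.0], [2.0]])
weights = np.array([1.0, -0.5, 2.0])
y = rbf_forward(np.array([1.0]), centers, gamma=1.0, weights=weights)
```

An input sitting exactly on a center activates that hidden unit fully (distance zero, activation 1), while distant centers contribute almost nothing; this locality is what distinguishes the RBF hidden layer from an ordinary weighted-sum layer.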
3. Recurrent Neural Network (RNN)
This type of ANN is similar to a feedforward neural network, but it saves the output of a layer and feeds it back as part of the input to the next step. This lets the network take earlier outputs into account when making its next prediction.
Drilling down to specifics: at each step, the network combines the current input with the hidden state carried over from the previous step, so each node retains a memory of what came before. The system can therefore remember wrong predictions and learn from them to improve the next ones. In short, RNNs learn from each step to predict the outcome of the next.
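The step-by-step memory described above can be sketched as a simple recurrence in NumPy. The weights are random placeholders and the tanh activation is one conventional choice; the point is only to show the hidden state being carried from one step to the next.

```python
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, b_h):
    # The hidden state h is the network's memory: the result of one step
    # is fed back in as part of the computation at the next step.
    h = np.zeros(W_hh.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
        states.append(h)
    return states

rng = np.random.default_rng(1)
W_xh = rng.normal(size=(2, 3))  # input-to-hidden weights
W_hh = rng.normal(size=(3, 3))  # hidden-to-hidden weights (the feedback loop)
b_h = np.zeros(3)

sequence = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
states = rnn_forward(sequence, W_xh, W_hh, b_h)
print(len(states))  # 3 — one hidden state per step
```

Contrast this with the feedforward sketch earlier: the `W_hh` term is exactly the feedback connection that a feedforward network lacks.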
Applications: RNNs are used in text-to-speech systems and in predictive text features that suggest what users may want to type next, depending on the context of their initial input.
4. Multilayer Perceptron (MLP)
This type of ANN has three or more layers and can classify data that is not linearly separable (i.e., data that cannot be divided by a single straight line). It is fully connected, which means that each node in a layer is connected to every node in the next layer.
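The classic example of data that no single straight line can separate is XOR, and a hidden layer is exactly what lets an MLP handle it. The tiny network below uses hand-picked weights purely for illustration (in practice they would be learned); the hidden units act roughly like OR and AND detectors, and the output combines them into XOR.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

def step(z):
    # A hard threshold activation: 1 if the weighted sum is positive.
    return (z > 0).astype(int)

# Hand-picked weights for illustration (normally learned from data).
W1 = np.array([[1, 1],
               [1, 1]])
b1 = np.array([-0.5, -1.5])  # hidden units: OR-like and AND-like
W2 = np.array([1, -1])
b2 = -0.5                    # output: "OR and not AND" = XOR

hidden = step(X @ W1 + b1)
y = step(hidden @ W2 + b2)
print(y)  # [0 1 1 0]
```

A single-layer network with a straight-line decision boundary cannot produce this output pattern, which is why the extra layer matters.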
Applications: MLPs aid in speech recognition and machine translation technologies.
5. Convolutional Neural Network (CNN)
This type of ANN is a variation on the MLP. Instead of fully connecting every layer, its convolutional layers slide small filters across the input, so each unit looks only at a local patch of pixels. The primary purpose of CNNs is to pick out specific features of a given image, such as a face. They identify features from the patterns formed by neighboring pixels, which makes them especially well suited to spatial data like images.
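The filter-sliding idea can be demonstrated with a plain 2D convolution in NumPy. The vertical-edge filter below is a hypothetical, hand-written kernel; in a real CNN the filter values are learned during training.

```python
import numpy as np

def convolve2d(image, kernel):
    # Slide the same small filter across the image. Each output value
    # summarizes one local patch, so only neighboring pixels are compared.
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A tiny image whose right half is bright, and a vertical-edge filter.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)

feature_map = convolve2d(image, edge_kernel)
print(feature_map)
```

The feature map lights up only where dark pixels sit next to bright ones, i.e., exactly at the vertical edge. Because the same few filter weights are reused at every position, a convolutional layer needs far fewer parameters than a fully connected one.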
Applications: CNNs are widely used for accurate face detection, even when the input image is of low resolution. They also help self-driving cars interpret their surroundings, since they are very good at extracting spatial features from images. Other applications include natural language processing (NLP), paraphrase detection, and image classification.
The types of artificial neural networks above use different methods to achieve a desired outcome. However, all of them work in a way that loosely resembles how neurons in our brains work. Like the neurons in the human brain, ANNs learn and improve their performance as they receive more data and are used more often. And just like the brain they mimic, their applications can be virtually limitless.