Artificial neural networks (ANNs) are computer programs, loosely modeled on the human brain, that are reshaping how people see technology. In one of our previous articles, we answered some of the most frequently asked questions about artificial neural networks and explained them in simple terms. In this piece, we dive deeper into the different types of artificial neural networks to give you a better picture of how they impact modern technology.

In general, each type of neural network has distinct strengths and use cases. We outline some of them below.

1. Feedforward Neural Network

A feedforward neural network, or deep feedforward network, is one of the simplest types of artificial neural networks. In this ANN, data enters through a layer of input nodes (computational units also known as artificial neurons) and passes through one or more layers until it arrives at the output nodes.

Simply put, information moves in a single direction, from an entry point (the input nodes) to an exit point (the output nodes). A feedforward network differs from more complex ANN types in that it has no feedback connections: the output of a layer is never fed back into it, so the connections never form a cycle or loop. Learning happens instead by adjusting the weights of those one-way connections, typically via backpropagation.
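
A minimal sketch of that one-way flow, assuming a toy network with hypothetical random weights (not a trained model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights for a tiny 3-input, 4-hidden, 2-output network.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # hidden -> output

def feedforward(x):
    # Information flows one way: input -> hidden -> output, no cycles.
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

print(feedforward(np.array([0.5, -1.0, 2.0])))
```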

Applications: Feedforward neural networks are used in computer vision and facial recognition applications.

2. Radial Basis Function Neural Network (RBFNN)

This type of ANN has only three layers: the input layer, the hidden layer, and the output layer. It is limited to a single hidden layer, in contrast with other ANN types that can have several. The hidden layer sits between the input and output layers, and each of its neurons applies a radial basis function, typically a Gaussian, that measures how close the input is to a learned center point; the output layer then combines these responses in a weighted sum. Because there is only one hidden layer to train, learning in an RBFNN is faster.
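
A minimal sketch of that forward pass, assuming Gaussian basis functions and hypothetical centers, widths, and output weights:

```python
import numpy as np

# Hypothetical centers for the hidden layer and weights for the output layer.
centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # one per hidden neuron
widths = np.array([0.5, 0.5, 0.5])
out_weights = np.array([0.3, -0.7, 1.2])

def rbfnn(x):
    # Hidden layer: each neuron fires according to its distance from a center.
    dists = np.linalg.norm(centers - x, axis=1)
    hidden = np.exp(-(dists ** 2) / (2 * widths ** 2))
    # Output layer: a weighted sum of the hidden responses.
    return out_weights @ hidden

print(rbfnn(np.array([0.2, 0.9])))
```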

To further understand what the different ANN layers are for, imagine that you want your computer to recognize that the picture it is shown depicts a car. For the computer to understand, it needs separate tools (or layers). Your car detector can thus have a wheel detector so it can tell something has wheels, a vehicle body detector that lets it differentiate a car from a truck, and a size detector for the same purpose. These are just some of the elements that make up the hidden layers in artificial neural networks. They do not represent the entire image, only parts of it.

Applications: RBFNNs can be used in complex power restoration systems. In case of a blackout, they help restore electrical power to normal conditions with minimal losses and societal impact. They are also widely applied to time-series prediction. An example is stock trading, where computers predict which stocks are likely to rise or fall in value, helping users invest wisely.

3. Recurrent Neural Network (RNN)

This type of ANN is similar to a feedforward neural network, except that it saves the output of a layer and feeds it back as input. This feedback loop gives the network a form of memory, so each prediction can take earlier inputs in the sequence into account.

Drilling down to specifics: at each step, the network combines the current input with the hidden state carried over from the previous step, so every node retains a memory of what came before. During training, the system compares its predictions against the correct answers and adjusts its weights, remembering wrong predictions and learning from them to improve the next ones. In short, an RNN uses what it learned at each step to predict the outcome of the next step.
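
A minimal sketch of that recurrent step, assuming a toy hidden state and hypothetical weights:

```python
import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(size=(4, 3)) * 0.1    # input -> hidden
W_rec = rng.normal(size=(4, 4)) * 0.1   # hidden -> hidden (the feedback loop)
W_out = rng.normal(size=(2, 4)) * 0.1   # hidden -> output

def rnn(sequence):
    h = np.zeros(4)  # hidden state: the network's memory
    outputs = []
    for x in sequence:
        # The previous hidden state is fed back in alongside the new input.
        h = np.tanh(W_in @ x + W_rec @ h)
        outputs.append(W_out @ h)
    return outputs

seq = [np.array([1.0, 0.0, 0.0]),
       np.array([0.0, 1.0, 0.0]),
       np.array([0.0, 0.0, 1.0])]
print(rnn(seq))
```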

Applications: RNNs are used in text-to-speech systems and in predictive-text applications that suggest what users may want to type next, based on the context of their initial input.

4. Multilayer Perceptron (MLP)

This type of ANN has three or more layers and can classify data that is not linearly separable (i.e., that cannot be split by a single straight line). It is fully connected, which means that each node in a layer is connected to every node in the next layer.
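
A minimal sketch of the idea using scikit-learn, assuming the classic XOR problem, which no single straight line can separate:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# XOR: the two classes cannot be split by one straight line.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# A small fully connected network with one hidden layer of 8 nodes.
mlp = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    max_iter=5000, random_state=0)
mlp.fit(X, y)
print(mlp.predict(X))  # ideally [0 1 1 0]
```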

Applications: MLPs aid in speech recognition and machine translation technologies.

5. Convolutional Neural Network (CNN)

This type of ANN is a variation of the MLP in which not every layer is fully connected. Instead, convolutional layers slide small filters across the input image. The primary purpose of CNNs is to decipher specific features of a given image, such as a face: each filter looks at a small neighborhood of pixels at a time and responds to local patterns such as edges and shapes, and because the same filter is reused across the whole image, a feature can be spotted wherever it appears.
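
A minimal sketch of a single convolution step, assuming a tiny grayscale image and a hand-picked vertical-edge filter:

```python
import numpy as np

# A tiny 5x5 grayscale "image": dark on the left, bright on the right.
image = np.array([[0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1]], dtype=float)

# A 3x3 filter that responds strongly to vertical edges.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

def convolve2d(img, k):
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Slide the same filter across the image: local patches,
            # shared weights.
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

print(convolve2d(image, kernel))  # peaks where the dark-to-bright edge sits
```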

Applications: CNNs are widely used for accurate face detection even if the input image is of low resolution. They are also particularly useful for improving a self-driving car’s estimation of its driving field since they are very good at determining distances. Other applications include natural language processing (NLP), paraphrase detection, and image classification.

6. Long Short-Term Memory (LSTM)

LSTM networks are a type of RNN that can learn long-term dependencies. Since it is an RNN, an LSTM is also a sequential network, which means it can remember information from previous steps and use that knowledge to process the current input.

However, standard RNNs struggle to remember long-term dependencies: as information is passed through many steps, the training signal fades (the so-called vanishing gradient problem), so they cannot produce good results for inputs that depend on information from far back in the sequence. That is where LSTM comes in. Through a set of learned gates, it can choose to retain certain information for use in the future and forget what is irrelevant.
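
A minimal sketch of one LSTM cell step, assuming hypothetical weights (biases omitted for brevity), to show the forget, input, and output gates at work:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n_in, n_hid = 3, 4
# One weight matrix per gate, acting on [input, previous hidden state].
W_f, W_i, W_c, W_o = (rng.normal(size=(n_hid, n_in + n_hid)) * 0.1
                      for _ in range(4))

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    f = sigmoid(W_f @ z)          # forget gate: what to drop from memory
    i = sigmoid(W_i @ z)          # input gate: what new info to store
    c_tilde = np.tanh(W_c @ z)    # candidate memory content
    c = f * c + i * c_tilde       # updated long-term memory (cell state)
    o = sigmoid(W_o @ z)          # output gate: what to reveal
    h = o * np.tanh(c)            # new hidden state
    return h, c

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]:
    h, c = lstm_step(x, h, c)
print(h)
```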

Application: LSTM networks take RNN applications one step further. For instance, RNNs are helpful in grammar learning, speech recognition, and speech synthesis. LSTMs can do all of these but are also capable of semantic parsing, or translating human utterances into a machine-readable form.

7. Sequence-to-Sequence Models

Sequence-to-sequence models use an encoder-decoder architecture: an encoder compresses the input sequence into a fixed-size representation, and a decoder unrolls that representation into an output sequence, which may have a different length from the input. Google introduced the concept in 2014, with its early sequence-to-sequence models built on RNNs. The illustration from Towards Data Science below sums up the concept.

[Image: Sequence-to-Sequence Model]

Source: https://towardsdatascience.com/understanding-encoder-decoder-sequence-to-sequence-model-679e04af4346
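
A minimal sketch of the encoder-decoder idea, assuming toy RNNs with hypothetical weights for both halves; note that the decoder runs for a different number of steps than the encoder consumed:

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hid, n_out = 3, 4, 2
# Hypothetical weights for a toy encoder RNN and decoder RNN.
We_in = rng.normal(size=(n_hid, n_in)) * 0.1
We_rec = rng.normal(size=(n_hid, n_hid)) * 0.1
Wd_rec = rng.normal(size=(n_hid, n_hid)) * 0.1
Wd_out = rng.normal(size=(n_out, n_hid)) * 0.1

def encode(sequence):
    # Compress the whole input sequence into one fixed-size state.
    h = np.zeros(n_hid)
    for x in sequence:
        h = np.tanh(We_in @ x + We_rec @ h)
    return h

def decode(h, out_len):
    # Unroll the context vector into an output of a different length.
    outputs = []
    for _ in range(out_len):
        h = np.tanh(Wd_rec @ h)
        outputs.append(Wd_out @ h)
    return outputs

context = encode([rng.normal(size=n_in) for _ in range(5)])  # 5 input steps
print(decode(context, out_len=3))                            # 3 output steps
```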

Application: Google uses the sequence-to-sequence model in Google Translate. It is also used in speech recognition, online chatbots, and video captioning.

8. Modular Neural Network

A modular neural network consists of independent neural networks that perform certain subtasks as part of the overall task meant for the network. The concept imitates how the human brain can compartmentalize thoughts and processes.

These independent neural networks serve as modules or small units of the whole network. An intermediary takes the outputs of each module and processes them to obtain the network’s overall output.
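
A minimal sketch of the modular idea, assuming two hypothetical expert modules and a simple intermediary that averages their outputs:

```python
import numpy as np

def make_module(n_in, n_hid, n_out, seed):
    # Each module is its own small, independent feedforward network.
    r = np.random.default_rng(seed)
    W1 = r.normal(size=(n_hid, n_in)) * 0.1
    W2 = r.normal(size=(n_out, n_hid)) * 0.1
    return lambda x: W2 @ np.tanh(W1 @ x)

module_a = make_module(3, 5, 2, seed=10)  # hypothetical subtask A
module_b = make_module(3, 4, 2, seed=11)  # hypothetical subtask B

def modular_network(x):
    # The intermediary combines the modules' outputs into one result;
    # here it simply averages them.
    return (module_a(x) + module_b(x)) / 2

print(modular_network(np.array([0.5, -0.2, 1.0])))
```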

Application: Modular neural networks can solve complex artificial intelligence (AI) problems, including stock market forecasting and character recognition or the process of converting written or printed characters into a computer-understandable format.

The types of artificial neural networks above use different methods to achieve a desired outcome, but all of them work in a way that resembles how the neurons in our brains work. Like the neurons in the human brain, ANNs learn and improve every time they receive more data and are used more often. And just like the brain they mimic, their potential applications can seem limitless.
