An artificial intelligence (AI) accelerator is specialized computer hardware designed to handle AI workloads. It speeds up tasks such as artificial neural network (ANN) computations, machine learning (ML), and machine vision.
Back in the 1980s, graphics accelerators made PCs faster and more efficient by taking over all graphics work and freeing up the main processor. Similarly, AI accelerators relieve the main processor of complex, resource-intensive AI workloads.

Attempts to develop a standard AI hardware accelerator started in the late 1980s and early 1990s with neural network accelerators. Let’s take a look at how this aspect of AI has evolved over the years.
Digital Signal Processors
From 1988 to 1993, the Adaptive System Research Department at Bell Labs developed a convolutional neural network that could accurately recognize handwritten digits. Running it on Digital Signal Processors (DSPs) was one of the first attempts at developing an AI accelerator, and footage of the 1993 demo is still available online.
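The source does not describe the original Bell Labs architecture, but the idea of a convolutional digit recognizer is easy to illustrate. Below is a minimal sketch of a comparable classifier in modern PyTorch; the layer sizes and the 28x28 input format are illustrative assumptions, not the 1993 design.

```python
import torch
import torch.nn as nn

class DigitCNN(nn.Module):
    """Small convolutional network for 28x28 grayscale digit images."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 24x24 -> 12x12
            nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 12x12 -> 8x8 -> 4x4
        )
        self.classifier = nn.Linear(16 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)        # convolutions extract local stroke features
        x = x.flatten(start_dim=1)  # flatten feature maps for the linear layer
        return self.classifier(x)   # raw class scores for digits 0-9

# Push a batch of dummy images through the untrained network.
model = DigitCNN()
dummy = torch.randn(4, 1, 28, 28)   # batch of 4 single-channel 28x28 images
print(model(dummy).shape)           # torch.Size([4, 10])
```

The compute-heavy parts of such a network, the convolutions and the final matrix multiply, are exactly the operations that dedicated accelerators were built to speed up.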
Heterogeneous Computing Architecture
Although not initially developed for AI acceleration, heterogeneous computing architectures such as the Cell microprocessor have been used to carry out several AI tasks. For instance, the Cell architecture was notably used to predict successful weight loss in obese patients. The Cell is a multicore microprocessor that pairs a general-purpose PowerPC core (the main core) with several highly specialized coprocessors.
Graphics Processing Units (GPUs)
GPUs have been widely used for AI and machine learning (ML) work, even though they were primarily designed to process images. Because neural networks and image manipulation share a similar mathematical basis, largely dense matrix and vector arithmetic, GPUs are increasingly being used for AI acceleration, as the sketch below illustrates.
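To make the shared mathematical basis concrete, here is a minimal sketch showing that a fully connected layer's forward pass is just a large matrix multiplication, the kind of parallel arithmetic GPUs excel at. The tensor sizes are arbitrary, and the code falls back to the CPU when no CUDA GPU is available.

```python
import torch

# A fully connected layer computes y = x @ W^T + b: one big matrix multiply.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

batch, in_features, out_features = 256, 1024, 512
x = torch.randn(batch, in_features, device=device)          # input activations
w = torch.randn(out_features, in_features, device=device)   # layer weights
b = torch.randn(out_features, device=device)                # bias

y = x @ w.T + b                 # the GPU executes this dense product in parallel
print(y.shape, y.device)        # torch.Size([256, 512]) on cuda:0 or cpu
```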
Field-Programmable Gate Array (FPGA)-Based Accelerators
FPGAs were explored for AI functions as early as the 1990s and are still being adopted to accelerate ML and deep learning. FPGA-based accelerators are generally more power-efficient than GPUs. They are also more flexible because their logic can be reprogrammed after manufacturing.
Application-Specific Integrated Circuits (ASICs)
An ASIC is an integrated circuit (IC) chip designed for a single, specific use, unlike FPGA-based accelerators and GPUs. AI-focused ASICs are tailor-made for AI workloads, so they can outperform FPGA-based accelerators and GPUs in terms of both speed and energy efficiency. However, an ASIC is very expensive to design and manufacture.