When we speak of artificial intelligence (AI), we talk about computers or machines made to mimic human behavior, particularly the way we think and arrive at decisions. Symbolic artificial intelligence is the earliest approach toward this end, and it relies heavily on the following concepts:
- Humans think using symbols.
- Computers operate using symbols.
- Computers can be trained to think.
Symbolic AI rests on the idea that we use symbols to arrive at a particular solution, such as the way we solve a mathematical problem. The “+” symbol represents addition, and the answer comes after the “=” sign. We use symbols to define even the simplest things (house, table, chair, etc.) and to characterize people (man, woman, doctor, lawyer, etc.). We also use symbols to describe actions like running, eating, and typing.
Human communication widely makes use of symbols, which is one of the things that makes us intelligent. As such, symbols have also become the basis of creating AI. Symbolic AI was the frontrunner in the technology’s creation, which is why it is also called “classical AI” or “good old-fashioned AI (GOFAI).”
Symbolic AI: Digging into the Past
Search for “AI” on any search engine, and you’d get results that talk about deep learning, machine learning (ML), and artificial neural networks (ANNs). That wasn’t always the case. Symbolic AI precedes these approaches and was what scientists focused on from the 1950s until the 1980s.
While experts found some flaws regarding symbolic AI (more on this later), to say that it is no longer applied is not necessarily true. Symbolic AI successfully led to natural language processing (NLP). And to this day, it is still used in modern expert systems, such as the ROSS platform, a legal research AI that assists law firms in researching court cases.
Areas where Symbolic AI Succeeded
Quite a few AI mechanisms are based on symbolic AI. Among them are:
1. Constraint Satisfaction
Constraint satisfaction is the process of solving a problem by satisfying certain constraints or conditions. For example, suppose you are asked to color a map using only red, yellow, and green, and you can’t use the same color for two adjacent areas.
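The map-coloring problem above can be sketched as a small backtracking search. This is a minimal illustration, not any particular library’s API; the regions and adjacencies below are hypothetical.

```python
COLORS = ["red", "yellow", "green"]

# Hypothetical map: each region lists its neighbors.
NEIGHBORS = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def color_map(assignment=None):
    """Assign a color to every region so no two neighbors share one."""
    if assignment is None:
        assignment = {}
    # All regions colored: every constraint is satisfied.
    if len(assignment) == len(NEIGHBORS):
        return assignment
    region = next(r for r in NEIGHBORS if r not in assignment)
    for color in COLORS:
        # Constraint: no already-colored neighbor may have this color.
        if all(assignment.get(n) != color for n in NEIGHBORS[region]):
            assignment[region] = color
            result = color_map(assignment)
            if result:
                return result
            del assignment[region]  # backtrack and try the next color
    return None  # no valid coloring exists

solution = color_map()
```

The search assigns colors one region at a time and backtracks whenever a choice violates the adjacency constraint, which is the essence of constraint satisfaction.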
2. Natural Language Processing (NLP)
NLP is a branch of AI that enables machines to analyze human language, allowing people to communicate with them. Typical applications of NLP are smart assistants like Siri and Alexa, predictive text applications, and search engine results.
3. Logical Inferences
Symbolic AI relies heavily on rules, so it only makes sense that it is effectively used in logical inferences. Machines can generate conclusions based on given rules and evidence.
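Rule-based inference of this kind can be sketched as forward chaining: start from known facts, and repeatedly fire if-then rules until no new conclusions emerge. The facts and rules below are illustrative, not drawn from any real expert system.

```python
def forward_chain(facts, rules):
    """Derive all conclusions reachable from the facts via the rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are known
            # and its conclusion is new.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base.
facts = {"socrates is a man"}
rules = [
    ({"socrates is a man"}, "socrates is mortal"),
    ({"socrates is mortal"}, "socrates will die"),
]

derived = forward_chain(facts, rules)
```

Each pass applies every rule whose premises are satisfied, so conclusions chain together ("is a man" leads to "is mortal", which leads to "will die") exactly as a symbolic reasoner would combine rules and evidence.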
Where Symbolic AI Fell Short
In the 1990s, experts were ready to move on from symbolic AI when they saw how it fell short on common-sense knowledge problems. Since symbolic AI relies on explicit representations, developers struggled to encode implicit knowledge, such as “Lemons are sour” or “A father is always older than his children.” Our world has too much implicit knowledge to ignore.
Also, while symbolic AI has excellent reasoning capabilities, developers found it difficult to instill learning capabilities in it. Part of human intelligence, of course, is our ability to learn. Since symbolic AI can’t learn by itself, developers had to feed it data and rules continuously. They also found that the more they fed the machine, the more inaccurate its results became.
As such, they explored AI subsets that focus on teaching machines to learn on their own via deep learning.
Although symbolic AI falls short in some areas, it got the ball rolling toward the development of AI, and it’s still being used today. Experts are also looking into using symbolic AI alongside neural networks to help advance AI in general.