By now, you’re probably no stranger to the fact that artificial intelligence (AI) is all around you. As an invisible force in your everyday life, it influences your decisions and serves up choices that match your interests, wants, and needs. It recommends products to buy, headlines to read, and even friends to catch up with.

Most people would dismiss this kind of algorithmic behavior as nothing more than a quirk. It’s easy to brush off since it doesn’t seem to carry serious consequences in daily life. But what if such behavior contributes to handing down an unjust prison sentence? What if it misidentifies your race or gender? It turns out that’s what happens when intelligent algorithms form and exhibit algorithmic bias.

What is Algorithmic Bias?

Algorithmic bias refers to the manifestation of harmful biases in AI systems. For years, people assumed that AI was entirely neutral and wouldn’t inherit the prejudices of its designers. Unfortunately, several well-known AI systems have shown otherwise (more on that later).

A possible reason for this is that AI learns from enormous volumes of data. For machine learning (ML) and deep learning algorithms to recognize a type of animal or structure, you need to feed them thousands of images. At times, however, trainers provide AI systems with an unbalanced set of examples: too few of one kind, too many of another. For example, if you gave an AI tons of pictures of white Persian cats and little else, it might have trouble identifying an orange tabby.
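To make that mechanism concrete, here is a minimal sketch using scikit-learn on synthetic data. The “cat” framing is purely illustrative and not taken from any real system; the point is that a classifier trained on a 95/5 split learns to favor the overrepresented class.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic "photo" features: 95% of examples are class 0 ("white
# Persian"), only 5% are class 1 ("orange tabby").
X, y = make_classification(
    n_samples=2000, n_features=20, weights=[0.95, 0.05], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Overall accuracy looks high, but recall on the rare class tends to be
# poor: the model has barely seen "orange tabbies" and often misses them.
print(classification_report(y_test, model.predict(X_test)))

Rebalancing the training data (or reweighting the loss) is the usual first remedy, which is exactly why curating diverse, representative datasets matters so much.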

Examples of Algorithmic Bias

Here are some recent examples of algorithmic bias in some well-known AI projects:

1. Recruitment Bias

Amazon discontinued its use of an AI recruitment tool after it demonstrated a preference for male applicants. The AI behaved this way because the majority of the résumés it was trained on came from male candidates.

2. Social Grading

If you think social merits and demerits happen only in a Black Mirror episode, think again. China has already begun implementing a type of social grading system in which people receive financial and social perks based on how they behave. Much of the data comes from facial recognition systems deployed across the country.

3. Offensive Chatbot

In 2016, Microsoft launched its newest experiment, Tay, a Twitter bot that went from posting “humans are super cool” to not-safe-for-work (NSFW) messages within 16 hours. After Internet trolls interacted with it, it tweeted things such as “Hitler was right” and “feminism is cancer.” Microsoft shelved the project.

What Can Be Done?

Among the efforts to stave off algorithmic bias is the introduction of new legislative bills and guidelines for AI development. The Algorithmic Accountability Act, for instance, was introduced in the U.S. Congress in 2019; it would require tech companies to audit their AI algorithms and take appropriate action when biases emerge. The bill also covers the handling of data used to train AI systems.

Thought leaders, meanwhile, encourage tech organizations to practice “algorithmic hygiene” by respecting users’ sensitive data, especially when it indicates membership in a protected class. They suggest that online proxies, such as a person’s zip code or neighborhood, be carefully examined and, where necessary, excluded from training data to prevent AI from perpetuating racial, gender, or socioeconomic biases.
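As a concrete illustration, here is a minimal sketch (using pandas; the dataset and column names are hypothetical) of one such hygiene step: removing known proxy columns before a model ever sees the data.

import pandas as pd

# Hypothetical applicant records; "zip_code" and "neighborhood" can act
# as proxies for race or socioeconomic status.
applicants = pd.DataFrame({
    "years_experience": [3, 7, 1],
    "zip_code": ["10001", "94103", "60601"],
    "neighborhood": ["Chelsea", "SoMa", "Loop"],
    "skills_score": [72, 88, 65],
})

PROXY_COLUMNS = ["zip_code", "neighborhood"]

# Exclude the proxies from the features the model is allowed to see.
training_features = applicants.drop(columns=PROXY_COLUMNS)
print(training_features.columns.tolist())
# ['years_experience', 'skills_score']

Dropping proxies is a starting point rather than a guarantee; other features can still correlate with the removed columns, which is why auditing a model’s outputs remains necessary.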


Every great stride a society makes seems to demand a sacrifice; that has been the case since time immemorial. Given the tremendous relief and assistance that AI-powered devices provide, can we turn a blind eye to their potential negative impact? As lawmakers and industry leaders work on solutions, we leave you with this question: Is algorithmic bias worth it?
