The risks and ethical dilemmas posed by artificial intelligence (AI), as depicted in countless Hollywood films like Transcendence and Ex Machina, seem more fiction than fact to most people. Yet such concerns already affect them: whether they're aware of it or not, some form of machine intelligence is making decisions for them, from the food they eat to their job prospects.
Fortunately, industry experts and watchdogs have begun to reassess the potential dangers of amoral, self-learning AI, spurred by a newfound awareness of the ethical decisions made during development and deployment. Think tanks and ethical review committees soon followed suit, bringing us to today's discourse on AI ethics.
What Is Ethical AI?
Machine morality is a concept in AI and robotics that computer scientists have been exploring since the late 1970s. It aims to address the ethical concerns people have about the design and applications of AI and robots. The field, now commonly called machine ethics, has since expanded to include new theories on AI consciousness and rights. At the heart of it all is the idea that AI should never make rash decisions that compromise human safety or dignity.
Focus on the subject of AI ethics has intensified in the wake of AI-related accidents and similarly troubling news of AI gone wild. These range from chatbots writing politically incorrect tweets to driverless cars running red lights.
While these incidents were not the sole catalysts, researchers and journalists increasingly broached the subject, uncovering the tone-deaf manner in which some developers perfect their products. They have also begun to question what, if anything, motivates vendors to test their AI products for bias.
Why Do We Need It?
People have long assumed that technology is neutral. Sadly, that's not the case. The recent Cambridge Analytica scandal, in which the personal data of millions of Facebook users was harvested, analyzed, and used for political advertising, showed how technology can be manipulated to exploit emotional bias.
Interestingly, AI developers are in for surprises themselves. Amazon, for instance, found that an in-house recruiting tool it was testing to cherry-pick the best applicants favored male candidates.
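To see how this can happen, here is a minimal, purely illustrative sketch in Python. The data is synthetic and the "model" is just a group hire-rate lookup, but it mirrors the shortcut a real classifier can learn when a protected attribute, or a proxy for it, is left in the training data:

```python
# Minimal sketch: how a screening model can inherit historical bias.
# Synthetic data only; "gender" stands in for any proxy feature that
# correlates with past hiring decisions rather than job performance.
import random

random.seed(0)

def make_candidate():
    gender = random.choice(["male", "female"])
    skill = random.random()  # true qualification: same distribution for both groups
    # Biased historical label: men were hired at a lower skill threshold.
    hired = skill > (0.4 if gender == "male" else 0.7)
    return gender, skill, hired

history = [make_candidate() for _ in range(10_000)]

# A naive "model" that scores candidates by the historical hire rate of
# people who look like them -- the same shortcut a real classifier can
# learn when gender (or a proxy for it) is left in the features.
def hire_rate(gender):
    group = [hired for g, _, hired in history if g == gender]
    return sum(group) / len(group)

for g in ("male", "female"):
    print(f"{g}: historical hire rate = {hire_rate(g):.2f}")
# The output favors male applicants even though skill was drawn from
# the same distribution for both groups.
```

The point of the sketch is that the bias lives in the labels, not the algorithm: any model optimized to reproduce past decisions will reproduce past discrimination along with them.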
What about amoral AI? If you find the previous examples unsettling, wait until you see video footage of Boston Dynamics's SpotMini robot in action.
The footage shows how terrifyingly single-minded the robot is in completing its task. It makes one wonder how AI, and the unmanned equipment it powers, would behave in life-threatening circumstances. How much trust should we put in these machines? How can morality be coded into them? These are the questions ethical AI seeks to answer.
Ethical Benchmarks
According to a recent study published in Nature Machine Intelligence and covered by Singularity Hub, while the ethical AI regulatory landscape is still fragmented, five moral themes recur across institutional guidelines, including:
- Transparency: Manufacturers and vendors should always make the decision-making mechanisms of an AI system transparent to users. This approach aims to prevent harm to humans and protect fundamental human rights.
- Nonmaleficence: Often used in medical contexts, the principle of nonmaleficence refers to “doing no harm.” AI algorithm designers should ensure that AI decisions don’t lead to physical or mental harm to users.
- Justice: Justice, or fairness, refers to the practice of monitoring AI to keep it from developing bias, as in Amazon's case (a minimal bias check along these lines is sketched after this list). It also means ensuring that AI systems are accessible to people of all races and genders. The principle further entails taking a more sensitive approach to replacing jobs with AI-powered technologies.
The other two guidelines are responsibility and privacy. The researchers found that, while loosely defined across the various directives, these two principles also matter in forming a fuller picture of what ethical AI should be.
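As a concrete illustration of the justice principle above, here is a hedged sketch of a first-pass bias audit. It implements the widely used "four-fifths" disparate-impact rule; the group labels and model outputs are hypothetical:

```python
# Sketch of a fairness audit, assuming we already have a trained model's
# binary decisions (1 = positive outcome) grouped by a protected attribute.
# Implements the "four-fifths" disparate-impact rule as a first-pass check.
def selection_rate(predictions):
    return sum(predictions) / len(predictions)

def disparate_impact(preds_group_a, preds_group_b):
    """Ratio of the two groups' selection rates; values below 0.8 flag potential bias."""
    rate_a = selection_rate(preds_group_a)
    rate_b = selection_rate(preds_group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model outputs for two demographic groups:
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias: ratio falls below the 0.8 threshold.")
```

A check like this is deliberately crude: it flags unequal outcomes without explaining them, so it works best as a tripwire that triggers deeper review rather than as a final verdict on fairness.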
Final Thoughts
The discussion on ethical AI continues to gain traction as more experts join the conversation. Emerging principles such as transparency, fairness, and nonmaleficence provide a good starting point for shaping future guidelines. For now, more thought must go into current AI applications to steer them in the right ethical direction.
