The Matrix. The Terminator. Avengers: Age of Ultron. I, Robot. These films have one thing in common: robots powered by artificial intelligence (AI) that hunt down and harm human beings. The Matrix, for example, depicts a future in which intelligent machines harvest human beings as a power source. The Terminator, meanwhile, shows an AI network and its army of machines taking over the world in 2029.
There are hundreds of movies of this kind, and all of them depict a future in which robots rule the earth. Humans are forced to fight AI-powered machines, usually with heavy casualties on the human side. After all, human flesh is no match for steel bodies and high-powered firearms.
Most of these movies are set in the middle to late 21st century, and since it's already 2020, we are almost there. What if these scenarios come true? How can we protect ourselves from AI-endowed robots that match our intelligence but lack our emotions? Are Asimov's Three Laws of Robotics the answer to our dilemma?
Three Laws of Robotics According to Isaac Asimov
Isaac Asimov was an award-winning science fiction author whose many books range from nonfiction to fantasy and mystery. The movie I, Robot was based on a collection of stories from Asimov's popular Robot series, which also popularized the Three Laws of Robotics, namely:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.
Later on, Asimov added another law that superseded all three: A robot may not harm humanity, or, by inaction, allow humanity to come to harm. This law became known as the “Zeroth Law.”
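The strict hierarchy among the laws can be made concrete in code. The sketch below is purely illustrative (the `Action` fields and the `choose` function are my own invention, not anything Asimov specified): it ranks candidate actions lexicographically, so that violating a lower law is always preferable to violating a higher one.

```python
# Illustrative sketch only: encoding the laws' priority order as a
# lexicographic comparison. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False  # would violate the Zeroth Law
    harms_human: bool = False     # would violate the First Law
    disobeys_order: bool = False  # would violate the Second Law
    endangers_self: bool = False  # would violate the Third Law

def choose(actions: list[Action]) -> Action:
    # Tuples compare left to right (and False sorts before True), so a
    # higher law always outranks a lower one: disobeying an order
    # (Second Law) is preferable to harming a human (First Law).
    return min(actions, key=lambda a: (a.harms_humanity, a.harms_human,
                                       a.disobeys_order, a.endangers_self))
```

For instance, given an order whose execution would harm a person, `choose` picks the action that disobeys the order over the one that obeys it, exactly as the Second Law's "except where such orders would conflict with the First Law" clause demands.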
These laws are still used as guides in today’s development of robots and intelligent machines and have become a basis for other ethical guidelines. The question, though, is: Can they protect us from robots gone bad?
Can the Three Laws of Robotics Protect Humans?
It's ironic that I, Robot itself seems to show that the Three Laws of Robotics can't protect every human being. If you've watched the movie, you know that VIKI, the central AI computer, concluded that human activities would lead to human extinction, so it plotted to save humanity by enslaving people and sacrificing some of their lives.
Closer to reality, there's Robert Williams, the first human being on record to have been killed by a robot. Williams was an assembly-line worker at Ford Motor Company who died instantly when the arm of a parts-retrieval robot slammed into him while he was gathering parts himself. The robot, built by Litton Industries mainly to fetch motor parts in the company's casting plant in Flat Rock, Michigan, worked a five-story storage structure, and its arm weighed a ton.
If Not the Three Laws, Then What?
The main issue with the Three Laws of Robotics is translating them into a form that robots can actually understand. And even if developers find a way to encode them into intelligent machines, the machines' interpretation of the laws may evolve, just as VIKI's did in I, Robot.
An alternative is the concept of empowerment, introduced by Christoph Salge and his colleagues at the University of Hertfordshire, who believe it can ensure that robots behave safely and ethically. Roughly speaking, an agent's empowerment is the amount of control it has over its environment: how many future options its actions keep open. Given a model of how the real world works, a robot can be driven to maintain not only its own empowerment but, more importantly, that of the human beings around it. Under this principle, VIKI in I, Robot would never conclude that enslaving humans saves them, since enslavement destroys human empowerment.
Salge cites this example: instead of blindly following a rule like "don't push humans," an empowerment-driven robot would normally avoid pushing people, yet would push someone out of the path of a falling object when the need arises. The push might still cause some harm, but far less than the impact would.
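Salge's actual formulation measures empowerment information-theoretically, as a channel capacity between an agent's actions and its future states. As a loose, hypothetical illustration of the falling-object example, the sketch below approximates a human's empowerment as the log of the number of positions they can still reach on a one-dimensional line, where the cell struck by the object is off-limits.

```python
# Toy sketch of the empowerment idea (my own construction, not Salge's
# formulation). A human walks on a 1-D line of cells lo..hi; empowerment
# is approximated as log2 of the number of reachable end positions.
import math
from itertools import product

def empowerment(start: int, steps: int, hazard: int,
                lo: int = 0, hi: int = 9) -> float:
    """log2 of distinct positions reachable in `steps` moves of -1/0/+1,
    without ever entering the hazard cell."""
    if start == hazard:
        return 0.0  # struck by the falling object: no options remain
    ends = set()
    for seq in product((-1, 0, 1), repeat=steps):
        pos, alive = start, True
        for move in seq:
            pos = min(hi, max(lo, pos + move))  # walls clamp movement
            if pos == hazard:
                alive = False
                break
        if alive:
            ends.add(pos)
    return math.log2(len(ends)) if ends else 0.0
```

If the object is about to fall on cell 4, a human left standing there ends with 0 bits of empowerment, while a human pushed one cell over to cell 5 retains 2 bits (cells 5 through 8 remain reachable in three steps). A robot maximizing human empowerment therefore pushes, even though a "don't push humans" rule would forbid it.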
Asimov's Three Laws of Robotics were conceived in the 1940s, so they may need to be reworked to keep up with today's advances in robotics. Even then, the concept of empowerment would be an excellent addition to any guidelines that are developed.