Earlier this year, Google CEO Sundar Pichai wrote an opinion piece for the Financial Times calling on tech companies and policymakers to fast-track regulation of artificial intelligence. In particular, he stressed the need for governments and private entities to ensure that artificial intelligence applications, such as facial recognition, are not put to harmful use.
“Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone,” Pichai said in his article.
Pichai is not alone in this long-simmering discussion about artificial intelligence regulation. Just a few weeks after Pichai’s piece came out, Tesla CEO Elon Musk chimed in on the conversation. “All organizations developing advanced artificial intelligence should be regulated, including Tesla,” he posted on Twitter.
Indeed, the debate on the need for artificial intelligence policies has advanced rapidly over the past few months as think tanks and thought leaders draw more attention to it. So what can we expect as artificial intelligence penetration increases in several markets? What does regulation look like for companies deploying artificial intelligence technologies?
The Ongoing Artificial Intelligence Debate
Is artificial intelligence dangerous? The question has long occupied even the world’s greatest thinkers. The late Stephen Hawking, for instance, warned that artificial intelligence could endanger mankind unless we “regulate its development.” Microsoft founder Bill Gates shared the sentiment, once saying that he is “concerned about super intelligence.”
At first, the belief that artificial intelligence poses risks to humans was largely speculative. But it didn’t take long for engineers to notice that something was off with their creations. Some autonomous robots, for instance, pursue their tasks with single-minded devotion, regardless of the consequences. A few artificial intelligence models have also developed biases inherited from their designers’ worldviews. Left unchecked, this phenomenon could have negative repercussions in the long run.
Unfortunately, not everyone is in the same boat when it comes to regulating artificial intelligence. Opponents of regulation believe it is too early to tell whether artificial intelligence will truly expose companies and users to troublesome situations. A panel of experts from Stanford University likewise argues that regulating artificial intelligence demands careful consideration, given the technology’s ambiguous definition and sprawling applications. For that reason, they deemed current attempts to regulate artificial intelligence “misguided.”
What Can We Expect from Artificial Intelligence Regulations?
Isaac Asimov’s “Three Laws of Robotics” state that a robot may not harm a human being or, through inaction, allow one to come to harm. Regulators would probably model their policies on similar principles, though not in an absolutist manner.
Policymakers will likely go easy on artificial intelligence developers because the technology remains instrumental to progress. Global industries have many moving parts, and a growing number of them depend on artificial intelligence. Big data analytics and self-driving vehicles, for instance, are just two artificial intelligence-powered technologies that play an essential role in manufacturing, mining, and retail. Heavily restricting the use of artificial intelligence could therefore hamper the growth of these sectors and of economies in general.
Artificial intelligence will continue to influence many aspects of society in the years to come. Thought leaders are right that its use requires some form of regulation. However, asserting that artificial intelligence can inflict serious harm on humans may be a bit alarmist at this point. It remains to be seen whether artificial intelligence should be feared at all.