Many argue that the AI boom poses serious challenges to human rights protection. One of them is Mathias Risse, professor of Philosophy and Public Policy at the Harvard Kennedy School of Government, who makes the case in his article “Human Rights and Artificial Intelligence: An Urgently Needed Agenda.” He argues that the human rights project rests on the distinctive moral status of human beings, while AI’s declared goal is to make machines as good as or even better than humans, and therein lies the problem.
This article provides an overview of that paper and of Professor Risse’s most important ideas and findings.
AI-related Fears and Concerns
One common objection to widespread AI use is that intelligence, understood as the ability to predict future outcomes, should remain a human preserve, yet prediction is precisely what AI systems are built for. Today’s advanced machines, including smartphones, self-driving cars, and the like, can already “think” for themselves and even help their owners make decisions.
AI algorithms let machines do anything that can be coded, so long as they have access to the data they need, the required processing speed, and a design frame that dictates task execution. Big data powers the effectiveness of these algorithms. Through so-called machine learning (ML), systems detect patterns in that data and use them to infer or predict what happens next.
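To make the pattern-detection idea concrete, here is a minimal sketch in Python. The event sequence and the frequency-counting model are invented for illustration and are far simpler than anything in production ML, but the principle of predicting the next event from patterns observed in past data is the same.

```python
# A toy illustration of ML-style pattern detection: count which event
# tends to follow which in past data, then predict the next one.
from collections import Counter, defaultdict

# Hypothetical observed sequence of daily events (the "big data").
history = ["wake", "coffee", "commute", "work",
           "wake", "coffee", "commute", "work",
           "wake", "tea", "commute", "work"]

# Learn the pattern: how often does each event follow each other event?
transitions = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    transitions[current][nxt] += 1

def predict_next(event: str) -> str:
    """Predict the most frequent successor seen in the data."""
    return transitions[event].most_common(1)[0][0]

print(predict_next("wake"))  # -> "coffee": the dominant observed pattern
```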
But what does that have to do with human rights? Algorithms sometimes make better judgments than humans, who are prone to bias, which leads organizations to choose them over human workers. And even though superhuman AI remains a myth for now, people fear that AI systems will one day far outstrip humans and take over their jobs.
Another valid concern has to do with AI-enabled human augmentation. Will augmented humans be better than their nonaugmented counterparts? And if so, will they fare better in work and in life generally?
Security, specifically cybersecurity, is also a significant issue. It is common, after all, to hear about organizations getting hacked, with threat actors taking over their connected systems and stealing their data. Some researchers even expect cyberattacks to increase as threat actors use AI to automate their own tasks.
Fears of a robot takeover, however unfounded for now, thus keep surfacing.
What Does the Proliferation of AI Have to Do with Human Rights?
Some experts, including Risse, believe that AI will blur the line between humans and nonhumans. Images of personhood, he notes, may change once it becomes possible to upload and store a digitalized brain on a computer, much as human embryos are stored today. And events like the following do little to ease those worries:
- In 2007, a U.S. colonel called off a landmine-sweeping exercise because the robot being tested kept working even after losing its legs; he considered the operation inhumane.
- “Sophia,” a humanoid robot created by Hanson Robotics that can participate in interviews, became a Saudi citizen in October 2017. Sophia was later named the first-ever nonhuman United Nations Development Programme (UNDP) Innovation Champion.
- Jeff Bezos, Amazon founder and CEO, recently adopted SpotMini, a robotic dog that can open doors, pick itself up, and load the dishwasher.
Events like these do nothing to dispel the doom-and-gloom scenarios depicted in movies like “Captain Marvel,” “Avengers: Age of Ultron,” and “I, Robot.” Several counterarguments have been raised, though, including those discussed in the next sections.
Pure Intelligence and Morality
How are rationality and morality connected? This question emerges in discussions of the morality of pure intelligence. Pure intelligence, according to Risse, points toward the singularity, the moment when machines surpass humans in intelligence. Should that happen, we will have created something smarter than ourselves that also works at far greater speed. Such is the nature of superintelligence. And while realizing these concepts might take decades or even centuries, the recent exponential pace of technological advancement is reason enough to put them on the human rights agenda now.
Some argue that the Three Laws of Robotics and moral or ethical AI are enough to assuage these uncertainties. But David Hume argued that reason alone cannot fix values. On this view, a being endowed with reason, rationality, or intelligence, such as an AI system, may hold any goals and any attitudes, including toward human beings. The problem is, how would people know whether a machine’s values are misguided?
Immanuel Kant, meanwhile, views morality as flowing from rationality. His Categorical Imperative asks all rational beings, humans included, never to use their own abilities or those of others merely to get their way, ruling out, for instance, gratuitous violence and deception. Every rational being’s actions, including those of AI systems, should thus pass a generalization test, which would keep those actions in check. If Kant’s argument is correct, a superintelligent machine could become a role model for ethical behavior, provided its makers adhere to ethical AI.
Risse’s hope is that a machine capable of outperforming human reason would also recognize human life as worthy of respect and therefore grant it some protection. We cannot know that for sure, but neither do we have reason to assume the worst.
Human Rights and Value Alignment
All of the issues discussed so far are speculative; we cannot even be sure this future will materialize. But from a human rights standpoint, the scenarios matter because we would need to learn to share the social world we built with new sentient beings. That is why we have the Universal Declaration of Human Rights (UDHR). Philosophically, it is justifiable to give humans special protection in the form of individual entitlements or rights. But that does not mean we can do anything we want to other beings or the environment.
Things would be very different with intelligent machines, though, according to Risse. While we can control animals, that might not be possible with AI systems. We would need rules for a world where devices can be considered sentient beings. They would need to be configured to respect human rights even if they are intelligent and strong enough to violate them. At the same time, they would also need protection. As a result, it is not impossible for the UDHR to apply to some of them eventually.
To get off to a good start, Risse suggests addressing the problem of value alignment. It is crucial to align AI systems’ values with ours to prevent complications stemming from divergent value commitments. A useful template is the UN Guiding Principles on Business and Human Rights, which integrate human rights into business decisions; the same approach should apply to AI.
Isaac Asimov’s Three Laws of Robotics are a good starting point. All AI systems should abide by these principles:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
While these laws may be too unspecific, they certainly capture the spirit of what AI systems should be. Attempts to make them more specific have also been made over time, for instance at the Beneficial AI conference held in Asilomar in 2017, where principles to guide further AI development were formulated.
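Read as engineering requirements, the Three Laws amount to a strict priority ordering over a robot’s candidate actions. The sketch below is purely illustrative: the `Action` fields (`harms_human`, `obeys_order`, `endangers_self`) are hypothetical labels that no real system could assign this cleanly, which is precisely why the laws are considered too unspecific in practice.

```python
# A minimal sketch of the Three Laws as a lexicographic priority ordering.
# The First Law's inaction clause is omitted for brevity.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # would executing this injure a human? (First Law)
    obeys_order: bool     # does it follow a human order? (Second Law)
    endangers_self: bool  # does it put the robot at risk? (Third Law)

def choose(actions: list[Action]) -> Action:
    # Lexicographic priority: avoiding human harm dominates obedience,
    # which in turn dominates self-preservation (False sorts before True).
    return min(actions, key=lambda a: (a.harms_human,
                                       not a.obeys_order,
                                       a.endangers_self))

options = [
    Action("push bystander aside roughly",
           harms_human=True, obeys_order=True, endangers_self=False),
    Action("shield bystander",
           harms_human=False, obeys_order=False, endangers_self=True),
]
print(choose(options).name)  # -> "shield bystander": the First Law wins
```

The lexicographic key mirrors Asimov’s “except where such orders would conflict” clauses: a lower-priority law never overrides a higher one.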
Of the 23 Asilomar Principles, 13 fall under “Ethics and Values.” These principles insist that wherever AI systems cause harm, their creators should be able to ascertain why. If an AI device is involved in judicial decision making, its reasoning should be verifiable by human auditors. These provisions respond to a concern about ML-based systems: they may operate at such speed, and draw on so much data, that their decisions become increasingly opaque, to the point where human auditors can no longer spot analyses that have gone astray. In terms of value alignment, highly autonomous AI devices should be designed so that their goals and behaviors remain aligned with human values throughout their operation.
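Here is one minimal sketch of what “verifiable by human auditors” could mean in code. The linear scoring model, the feature names, and the weights are all invented for illustration; the point is only that a system can emit a human-readable audit trail alongside every decision.

```python
# A toy auditable decision: a transparent score plus a trail a human
# auditor can verify line by line. All features and weights are made up.

def decide(features: dict[str, float], weights: dict[str, float],
           threshold: float) -> tuple[bool, list[str]]:
    """Return a yes/no decision plus a human-readable audit trail."""
    audit = []
    score = 0.0
    for name, value in features.items():
        contribution = weights.get(name, 0.0) * value
        score += contribution
        audit.append(f"{name}={value} contributed {contribution:+.2f}")
    decision = score >= threshold
    audit.append(f"total score {score:.2f} vs threshold {threshold} -> {decision}")
    return decision, audit

approved, trail = decide(
    {"prior_offenses": 0, "years_employed": 4},
    {"prior_offenses": -1.5, "years_employed": 0.5},
    threshold=1.0,
)
for line in trail:
    print(line)
```

A real judicial or underwriting system would be vastly more complex, which is exactly when such audit trails become hard to produce and all the more important.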
We also need more interaction between the human rights and AI communities so that the future is not built without input from human rights activists. An essential step in this direction, according to Risse, is Amnesty International’s decision to make extensive use of AI for human rights causes. The organization has piloted ML use in human rights investigations. It is also examining ML’s potential for discrimination, particularly in policing, criminal justice, and access to essential economic and social services. More generally, it is concerned about the impact of automation on society, including the right to work and to a livelihood. We need more of this kind of engagement between the human rights movement and the engineers behind AI development.
What Is Artificial Stupidity?
More immediate problems are already visible. One example is the algorithms used in healthcare, insurance underwriting, and parole decision making, which may be threatening anti-discrimination provisions. Freedom of speech and expression are also being undermined by fake news, including counterfeit videos or deepfakes, both products of AI.
The more political decisions depend on the Internet and social media, the more governance is threatened by technological advances, such as sophisticated bots deployed to join online debates or attacks that compromise vote-counting devices.
Sadly, wherever AI goes, artificial stupidity, which Risse defines as “efforts made by adversaries not only to undermine gains made possible by AI but to turn them into their opposite,” follows. One example is the alleged Russian manipulation of the 2016 U.S. elections, the workings of which many senators could scarcely comprehend.
Without sufficient transparency and the possibility of human scrutiny, AI use could threaten judicial rights. While using AI in court proceedings could improve the poor’s access to legal advice, it might also lead to a Kafkaesque situation, according to Risse: if algorithms deliver authoritative advice whose basis cannot be subjected to human scrutiny, how are we to know whether the decisions made are right?
The right to security and privacy is also potentially undermined by increased traceability in a world that electronically records all human activity and presence. Over time, the amount of available data can only grow enormously, especially through biometric sensors that supposedly only monitor users’ health. Threats to civil and political rights could also arise from the sheer existence of the data collected by various entities, including AI manufacturers. One day, the leading AI companies may become even more powerful than the oil companies once were.
If we don’t harness the power of Alphabet, Apple, Facebook, or Tesla for the public good, we might eventually find ourselves in a world they dominate. We learned that the hard way with the Cambridge Analytica scandal, in which Facebook was accused of invading its users’ privacy by sharing their data with business partners without their knowledge or consent. Mark Zuckerberg’s U.S. Senate testimony on 10 April 2018 revealed just how ignorant senior lawmakers could be about how Internet companies work, specifically a business model that depends largely on collecting and sharing user data.
Technology and Inequality
Risse believes this is an excellent time to reflect on the many instances in which technology created the potential for, or inadvertently succeeded in creating, social inequality that impacted human rights. We need to keep in mind that those who manufacture technology or use it expertly can command higher wages, and AI creators will command higher wages still.
While technological change does not benefit everyone equally, it is still good for society and humanity: it can create more jobs than it displaces. But that would require a radical overhaul of the educational system to keep people competitive.
Amid this backdrop, should we worry that AI will widen the technological gap and render millions of workers redundant? In the U.S. alone, automation and robotization are already displacing middle-aged workers from jobs they thought were theirs for life. Today, only a tiny percentage of the population is immune from falling into poverty through bad breaks beyond their control.
Conclusion
Risse’s paper lays out the various human rights challenges arising from increased AI application. Some of these issues we are feeling already; the rest need to be put on the agenda now, even if they have yet to materialize. We may not realize it yet, but rising inequality brought on by AI developments could prove the bane of the UDHR.
Afterword by Mathias Risse:
Professor Risse is now working on new research dedicated to human rights in the age of digitalization, technological advancements, and artificial intelligence. Below are several thoughts he shared with us.
On human rights in digital lifeworlds:
We clearly inhabit digital lifeworlds now and must adjust the human-rights project to protect human life as it unfolds, albeit also with an eye on what the future might bring.
On Life 3.0 (the term proposed by Max Tegmark in his book “Life 3.0: Being Human in the Age of Artificial Intelligence”):
If a full-fledged Life 3.0 emerges, it will do so from within digital lifeworlds. It might be populated by genetically enhanced humans, cyborgs, uploaded brains, as well as advanced algorithms embedded into any manner of physical device.
On human rights in Life 3.0:
In Life 3.0 human rights must be reconsidered. They were meant to protect against threats from other humans when the only other intelligent life around was other animals that had arisen alongside humans in the evolution of organic life. Amazing adaptation to niches notwithstanding, other animals are inferior to humans in general intelligence.
If Life 3.0 does arise, human rights would also need to secure a moral status potentially threatened by synthetic life of a possibly enormously larger intelligence. In the domain of epistemic rights this would involve a right to the exercise of human intelligence.
On a new kind of human right:
The point of a fourth generation of human rights is to protect human life in light of ongoing technological innovation, but then also in the presence of new kinds of intelligence.
In the long run, if indeed we progress into Life 3.0, we need a new kind of human right, to the exercise of genuinely human intelligence. To the extent that we can substantiate the meaning of human life in the godless world science describes, we can also substantiate such a right vis-à-vis artificial intelligences. If it comes to that, we must hope that such arguments can persuade a superior intelligence, and that such intelligence will participate in shared normative practices. But such intelligence, by definition, would be vastly beyond ours, and thus is hard to anticipate for us.
—
About Mathias Risse
Mathias Risse is a German-born professor of Philosophy and Public Policy at the John F. Kennedy School of Government at Harvard University. He is currently working on topics in the human rights domain, especially at its intersection with technological innovation. You can learn more about Professor Risse’s work on his website.