2020 continues to bring us surprises. If one thing aptly defines this year, it is protest, and facial recognition surveillance is being floated as a potential tool to curb the unrest.
The latest wave of Black Lives Matter protests, sparked by the deaths of George Floyd, Breonna Taylor, and many others, grew into the largest movement in U.S. history and quickly went global. People are no longer content with using the hashtag #BlackLivesMatter on social media; the callousness of some government officials toward racism and human rights drove them to take their protests to the streets.
With thousands of incidents happening simultaneously across multiple cities in the U.S. and around the world, some of which have devolved into violence, law enforcement agencies feel "forced" to use technology to identify anonymous protesters.
Will facial recognition surveillance give law enforcers the upper hand?
What Is Facial Recognition Surveillance?
Facial recognition surveillance uses biometric software to determine a person’s facial features through an image or a video for identification purposes. It compares the picture to a database to identify the person in question. It is currently being used by law enforcement agencies to locate and identify suspects and witnesses.
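At its core, the comparison step works by reducing each face image to a numeric "embedding" vector and scoring how close the probe vector is to each entry in a database. The sketch below is a simplified illustration of that matching step only; the embedding values, names, and threshold are made up, and a real system would generate embeddings with a trained deep learning model rather than hand-written numbers.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.9):
    # Return the best-matching identity, or None if nothing clears the threshold.
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical database of enrolled face embeddings (stand-in values).
database = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}

# Embedding extracted from, say, a surveillance frame.
probe = [0.88, 0.12, 0.31]
print(identify(probe, database))  # prints "person_a"
```

Note that the threshold is a policy choice: set it too low and the system produces false matches, which is exactly the accuracy concern raised later in this article.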
Does Facial Recognition Invade Privacy?
Now that many police departments rely on facial recognition surveillance against protesters, critics are warning about its capacity to invade privacy.
In the past, police used facial images captured on their body cams. Over time, they have resorted to more aggressive means. An example is Clearview AI, a facial recognition technology that uses images scraped from the social media pages of persons of interest. The provider partnered with police agencies to catch criminals. But because the software is not perfectly accurate, it can also implicate innocent people, tagging individuals as persons of interest on the strength of an imperfect match. Worse, it can become a weapon against known critics of specific agencies. It would be quite easy for them to build such a database and share it among agencies, leaving persons of interest without much recourse.
One example of this is the deployment of facial recognition in airports. While it helps governments identify travelers who overstay their visas, identification remains inaccurate.
Should Facial Recognition Be Banned?
Whether facial recognition should be banned will be a long and hard-fought battle because it serves an essential purpose: it can help law enforcement agencies curb the rising number of crimes. Although some jurisdictions, such as San Francisco, Oregon, and New Hampshire, have banned the use of facial recognition in police body cameras, no federal regulation exists to balance the use of the technology against citizens' First and Fourth Amendment rights.
It may be high time for the government to look into facial recognition and how it may be invading people’s right to privacy. Regulations must be implemented to ensure that its use is not abused.
Can Facial Recognition Help Governments Curb Protests?
The quick answer is yes. And it is, in fact, already doing so.
As early as February this year, the Minneapolis police had started using Clearview AI. OneZero, Medium's science and tech publication, pointed out several prominent facial recognition applications as well. The publication also noted that many police departments, particularly those in Austin, Seattle, and Dallas, and even the Federal Bureau of Investigation (FBI), requested copies of images taken at demonstrations.
While they claim the footage would be used to curb the growing incidence of violence during protests, the public remains unsure how police departments are actually using facial recognition, especially since most algorithms have documented limitations in identifying people of color.
Can Facial Recognition Technology Work on People Wearing Masks?
Can face coverings provide anonymity? There is no single answer to this question. Several companies that sell facial recognition software, such as China's SenseTime and Russia's NtechLab, claim their solutions can recognize masked people.
“Our algorithm can recognize faces with a significant overlap of up to 40%. This includes faces partially covered with medical masks, head kerchiefs, motorcycle helmets and elaborate headgear as well as people turned to the camera in profile,” a spokesperson for NtechLab told TASS, the largest Russian news agency.
Such claims can’t be easily verified, though, since they usually come from internal data, and there’s no third-party validation.
Moreover, a new study by the U.S. National Institute of Standards and Technology (NIST) found that wearing face masks actually makes it harder for facial recognition algorithms to identify individuals. Error rates jumped to anywhere between 5 and 50 percent, depending on an algorithm's capabilities. However, the study only covered algorithms developed before the pandemic.
“We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind,” said Mei Ngan, a NIST computer scientist and an author of the report.
While facial recognition may help contain violent protests, it can also be weaponized by those in power. For that reason, it may not be the solution, as it can incite fear and chaos.
Companies developing the technology, such as Amazon and IBM, recognize facial recognition software's potential for abuse. As a result, IBM announced that it is no longer offering general-purpose facial recognition technology, while Amazon announced in a blog post a one-year moratorium on police use of its facial recognition software.
Instead of curtailing people's rights, facial recognition may find better use in fighting abuse of power, inequality, and racism. It would also help if lawmakers studied the technology and came up with regulations for it.