When the Capitol Hill riot occurred in early January, many who entered the building posted photos and videos of themselves on social media.
Like clockwork, hackers leveraged a flaw in Parler to download the entire social media platform's contents. Much to the hackers' surprise, a lot of that content included geolocation metadata that placed the right-wing posters at the Capitol Hill riot just days earlier.
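The geolocation clue here is typically EXIF metadata, which cameras embed in image files. EXIF stores GPS coordinates as degree/minute/second values plus hemisphere letters, while mapping tools expect signed decimal degrees. The following is a minimal, hypothetical sketch of that conversion step (the function name and example coordinates are illustrative, not taken from the Parler data):

```python
# Hypothetical sketch: converting EXIF-style GPS metadata (degrees,
# minutes, seconds, hemisphere letter) into the signed decimal degrees
# that mapping software expects.

def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert a degrees/minutes/seconds triple to signed decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# Illustrative example: coordinates near the U.S. Capitol,
# roughly 38°53'23"N, 77°00'32"W.
lat = dms_to_decimal(38, 53, 23, "N")
lon = dms_to_decimal(77, 0, 32, "W")
print(round(lat, 4), round(lon, 4))
```

Once converted, a batch of such coordinates can simply be plotted on a map, which is how posters were tied to a specific place and time.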
Later, a website called Faces of the Riot published an enormous array of more than 6,000 images of faces from the scene, which had been identified via face recognition software.
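Under the hood, face recognition systems like the ones described here generally work by mapping each face image to a numeric embedding and declaring a match when two embeddings fall within a tuned distance threshold. The sketch below illustrates only that final matching step with toy vectors; the embedding model itself is not shown, and the 0.6 cutoff is an assumption borrowed from common dlib-based tooling, not a detail from this story:

```python
import math

# Toy illustration of the matching step in face recognition:
# an upstream model (not shown) turns each face image into an
# embedding vector, and two faces count as the same person when
# their embeddings are closer than a threshold.

def euclidean_distance(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_same_person(emb_a: list[float], emb_b: list[float], threshold: float = 0.6) -> bool:
    # 0.6 is a common cutoff in dlib-based libraries; the exact
    # value is model-specific.
    return euclidean_distance(emb_a, emb_b) < threshold

# Toy embeddings (real systems use 128+ dimensions).
riot_face = [0.11, 0.52, 0.33]
known_face = [0.12, 0.50, 0.35]
stranger = [0.90, 0.10, 0.70]

print(is_same_person(riot_face, known_face))  # close vectors: a match
print(is_same_person(riot_face, stranger))    # distant vectors: no match
```

The controversy in the sections that follow is less about this arithmetic and more about who gets to run it, and against whose faces.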
Whatever the political dimension, new attitudes toward and applications of face recognition software are shedding light on the potential dangers of pitting security against privacy, and on where the technology will take us in a future that is becoming increasingly hard to imagine.
A world where anyone can use facial recognition
The person who ran the Faces of the Riot website told Wired that he and a co-creator were working to scrub "non-rioter" faces from the database, including police and press who were on location. The website also carries a disclaimer at the top, warning users not to engage in vigilante investigations and encouraging them to report people they know to the FBI (with a link included).
"If you go on the website and you see someone you know, you might learn something about a relative," said the site's creator to Wired. "Or you might be like, oh, I know this person, and then further that information to the authorities."
Nevertheless, this puts ordinary citizens in a position to police and report one another to federal or local authorities with potentially incriminating location details, without the consent of the people behind the faces. It isn't hard to imagine situations besides a Capitol riot where the non-official use of facial recognition tech can challenge conventional ideas of privacy when it comes to location, identity, and other digitized forms of personal information. But this isn't always bad.
In the city of Portland, a data scientist and protestor named Christopher Howell is involved in the development of facial recognition systems to use on Portland police officers who aren’t identified to the public, according to a blog post on MIT’s official website.
This is significant because it puts facial recognition software — a powerful technology conventionally under the exclusive purview of government and private officials — in the hands of citizens in a context where police are often alleged to commit crimes.
Canadian government investigates police use of facial recognition for mass surveillance
While citizens use facial recognition to police the police for alleged criminal behavior during protests, governments are already taking action against police departments for their use of the technology. Canada’s privacy commissioners recently declared that Clearview AI’s facial recognition is essentially mass surveillance, and urged for the company to delete Canadian faces from its database.
Clearview AI scrapes photos from social media and other public sites for use by law enforcement, according to the Canadian Commissioner, Daniel Therrien — which is "illegal" and engenders a system that "inflicts broad based harm on all members of society, who find themselves continually in a police lineup," he added.
The commissioners also released a report following a year of multi-agency investigation into Clearview’s practices — which found the company had collected highly-sensitive biometric data without permission. The report also said the company “used and disclosed Canadians’ personal information for inappropriate purposes.”
Sweden declares police use of facial recognition ‘unlawful’
In Sweden, the local data protection authority called IMY recently fined the police authority more than $300,000 for the unlawful use of Clearview’s software — which violated the country’s Criminal Data Act.
“IMY concludes that the Police has not fulfilled its obligations as a data controller on a number of accounts with regards to the use of Clearview AI,” read a press release, according to a TechCrunch report. “The Police has failed to implement sufficient organizational measures to ensure and be able to demonstrate that the processing of personal data in this case has been carried out in compliance with the Criminal Data Act.”
“When using Clearview AI the Police has unlawfully processed biometric data for facial recognition as well as having failed to conduct a data protection impact assessment which this case of processing would require,” added the Swedish data protection authority, in the press release.
Minneapolis bans police use of facial recognition software, including Clearview AI
Also this month, the city of Minneapolis voted to ban the use of facial recognition software for its police department, adding itself to a growing number of cities enacting local restrictions on this controversial surveillance technology.
This means the Minneapolis Police Department is banned from using facial recognition technology — including software from Clearview AI, which has cultivated relationships with federal law enforcement agencies, private companies, and several police departments in the U.S. — namely, the one in Minneapolis.
Privacy advocates have raised concern about AI-powered face recognition systems, and how they not only disproportionately target specific disadvantaged communities, but act as a means of continually identifying mass populations whether or not they want it.
Face recognition tracks at-risk people amid COVID-19
During the COVID-19 crisis, global companies have implemented face recognition software on unprecedented scales to identify at-risk people in busy centers of the U.S., China, the U.K., Russia, and more. And face recognition software has evolved to identify people wearing medical masks.
An April 2020 survey of 1,255 U.S. citizens showed that 89% of adults were in support of personal privacy rights, with 65% in strong support. This contrasted sharply with the 52% of adults who, before the coronavirus crisis, believed personal privacy trumped the added "security" of face recognition software.
The notion of visibility layers is relevant to the creation of new and increasingly paradoxical applications of face recognition technology. For a long time, one-way surveillance of the people by governments and companies was the norm, but now “everyday people” are using advanced AI technology for their own interests.
Face recognition transforms the notions of privacy and accountability in paradoxical ways. As the COVID-19 crisis recedes in the next few years, the dangers facial recognition may pose to privacy and human rights will likely multiply. But hopefully, states of emergency will cease to be a constant feature of daily life, and allow a more nuanced rollout of AI-assisted face recognition software.