Support the news
With Meghna Chakrabarti
Facial recognition technology is rapidly developing — but so are the methods for tricking and subverting it. We take a closer look.
From The Reading List
The Economist: "As face-recognition technology spreads, so do ideas for subverting it" — "Powered by advances in artificial intelligence (AI), face-recognition systems are spreading like knotweed. Facebook, a social network, uses the technology to label people in uploaded photographs. Modern smartphones can be unlocked with it. Some banks employ it to verify transactions. Supermarkets watch for under-age drinkers. Advertising billboards assess consumers’ reactions to their contents. America’s Department of Homeland Security reckons face recognition will scrutinise 97% of outbound airline passengers by 2023. Networks of face-recognition cameras are part of the police state China has built in Xinjiang, in the country’s far west. And a number of British police forces have tested the technology as a tool of mass surveillance in trials designed to spot criminals on the street.
"A backlash, though, is brewing. The authorities in several American cities, including San Francisco and Oakland, have forbidden agencies such as the police from using the technology. In Britain, members of parliament have called, so far without success, for a ban on police tests. Refuseniks can also take matters into their own hands by trying to hide their faces from the cameras or, as has happened recently during protests in Hong Kong, by pointing hand-held lasers at CCTV cameras to dazzle them. Meanwhile, a small but growing group of privacy campaigners and academics are looking at ways to subvert the underlying technology directly.
"Face recognition relies on machine learning, a subfield of AI in which computers teach themselves to do tasks that their programmers are unable to explain to them explicitly. First, a system is trained on thousands of examples of human faces. By rewarding it when it correctly identifies a face, and penalising it when it does not, it can be taught to distinguish images that contain faces from those that do not. Once it has an idea what a face looks like, the system can then begin to distinguish one face from another. The specifics vary, depending on the algorithm, but usually involve a mathematical representation of a number of crucial anatomical points, such as the location of the nose relative to other facial features, or the distance between the eyes.
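The comparison step described above can be sketched in a few lines. This is a deliberately toy illustration, not any vendor's actual algorithm: it assumes a face has already been reduced to a short vector of measurements (such as inter-eye distance or nose position relative to other features), and treats two faces as the same person when those vectors lie close together. All the numbers and the threshold are made up for illustration.

```python
import math

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(probe, reference, threshold=0.5):
    # A below-threshold distance is treated as a match.
    return euclidean(probe, reference) < threshold

enrolled = [0.62, 0.31, 0.48]   # hypothetical stored template for a known face
candidate = [0.60, 0.33, 0.47]  # measurements from a new photo of that person
stranger = [0.95, 0.10, 0.80]   # measurements from someone else

print(same_person(candidate, enrolled))  # True
print(same_person(stranger, enrolled))   # False
```

Real systems use far longer vectors learned by the network itself rather than hand-picked measurements, which is precisely why, as the article notes, their "visual systems" are bespoke and unlike human vision.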
"In laboratory tests, such systems can be extremely accurate. One survey by NIST, an American standards-setting body, found that, between 2014 and 2018, the ability of face-recognition software to match an image of a known person with the image of that person held in a database improved from 96% to 99.8%. But because the machines have taught themselves, the visual systems they have come up with are bespoke. Computer vision, in other words, is nothing like the human sort. And that can provide plenty of chinks in an algorithm’s armour."
The New York Times Magazine: "How to Thwart Facial Recognition" — "'Why not give the camera what it wants, which is a face?' says Leonardo Selvaggio, an interdisciplinary artist. Just don’t give it your face. To enable people to obfuscate facial-recognition software programs, Selvaggio, who is 34 and white, made available 3-D, photo-realistic prosthetic masks of his own face to anyone who wants one. He tested the masks by asking people connected to him on Facebook to upload pictures of themselves in the prosthetic: It didn’t matter if they were skinny women or barrel-chested men; short or tall; black, brown, Asian or white — the social network’s facial-recognition software recognized them as Selvaggio. 'There’s nothing more invisible to surveillance and security technology than a white man,' he says.
"Selvaggio thought up the project, which he calls URME Surveillance, when he was living in Chicago, where law-enforcement officials have access to more than 30,000 interlinked video cameras across the city. He wanted to start conversations about surveillance and what technology does with our identity. He knew that researchers have found that facial-recognition software exhibits racial biases. The programs are often best at identifying white and male faces, because they have been trained on data sets that include disproportionate numbers of them, and particularly bad at identifying black faces. In law-enforcement contexts, these errors can potentially implicate people in crimes they didn’t commit."
Fortune: "Here’s a New Way to Trick Facial Recognition" — "The shape of your face is as distinct as your fingerprint. That’s why a growing number of organizations — from police forces to schools to Wal-Mart — are using facial recognition software to identify you in online photos and in real world locations.
"But facial recognition technology is beginning to pose a major privacy threat, which has led researchers to explore ways to counteract it. One of them is Joey Bose, a computer engineering student at the University of Toronto.
"Bose claims he has developed a tool to 'break' facial recognition systems by adding extra elements to photos before they are uploaded to the Internet. The photos don’t look any different to the naked eye, but the hidden features thwart detection systems."
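The general idea behind such "hidden features" can be sketched with a toy model. This is not Bose's actual method (which uses adversarial machine learning against real detectors); it stands in a face detector with a simple weighted sum of pixel values, then nudges each pixel by an amount far too small to see, in the direction that pushes the detector's score below its decision threshold. All weights, pixel values, and the threshold here are hypothetical.

```python
def detector_score(pixels, weights):
    # Stand-in for a face detector: a weighted sum of pixel intensities.
    return sum(p * w for p, w in zip(pixels, weights))

def perturb(pixels, weights, epsilon=0.01):
    # Move each pixel a tiny step *against* the detector's weights
    # (the sign of the gradient for this linear toy model).
    return [p - epsilon * (1 if w > 0 else -1) for p, w in zip(pixels, weights)]

weights = [0.4, -0.2, 0.3, 0.5]   # hypothetical detector weights
image = [0.5, 0.6, 0.55, 0.52]    # hypothetical pixel intensities
threshold = 0.5

adversarial = perturb(image, weights)
print(detector_score(image, weights) > threshold)        # True: face detected
print(detector_score(adversarial, weights) > threshold)  # False: detection thwarted
```

Against a real deep network the same principle applies, but the perturbation is computed from the network's gradients across thousands of pixels, which is why the altered photo looks unchanged to the naked eye yet defeats the detector.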
Stefano Kotsonis produced this segment for broadcast.
This segment aired on August 19, 2019.
- San Francisco Bans Facial Recognition Tech Over Surveillance, Bias Concerns
- FaceApp: Age Your Photos — And Compromise Your Privacy?
- Users Can Sue Facebook Over Facial Recognition Software, Court Rules
- The Debate Over Facial Recognition Technology's Role In Law Enforcement
- Somerville Bans Government Use Of Facial Recognition Tech