The limits of the surveillance state

The New York Police Department says it utilizes the largest networks of cameras, license plate readers and radiological sensors in the world.
So, how did the UnitedHealthcare CEO’s alleged killer manage to escape the city before his arrest in Pennsylvania?
Guests
Faiza Patel, senior director of the Liberty and National Security Program at the Brennan Center for Justice.
Transcript
Part I
MEGHNA CHAKRABARTI: When UnitedHealthcare CEO Brian Thompson was gunned down in New York City on December 4th, security cameras caught the entire murderous act as it happened. That footage was subsequently released by the New York Police Department. The shooter, wearing a hoodie, mask, and backpack, steps out from behind a parked car.
Calmly points his gun at Thompson's back and shoots twice. Since then, law enforcement officials have identified 26-year-old Luigi Mangione as the alleged killer and arrested him in Pennsylvania. But that initial security camera footage isn't the only surveillance NYPD has of the shooter.
Late last week, police released new information on how they believe he got out of New York City.
CBS NEWS: Investigators believe Luigi Mangione rode a bike through Central Park after allegedly killing Brian Thompson, then took a cab to a major bus terminal in Upper Manhattan just to go back downtown, possibly taking the subway to New York's Penn Station.
From there, investigators question whether Mangione fled to Pennsylvania by train.
CHAKRABARTI: That report from CBS News is from Jarred Hill, and what's interesting about it is that each stop Hill identifies, the bike, the taxicab, Penn Station, is accompanied by surveillance images of Mangione. Which makes perfect sense, because New York City is one of the most surveilled cities on the planet.
In fact, here's how NYPD describes a part of their surveillance system themselves. Quote:
The New York City Police Department has a tool, developed with Microsoft, that utilizes the largest networks of cameras, license plate readers, and radiological sensors in the world.
End quote. And yet somehow, even with that tech dragnet, always on and always watching, Mangione managed to make it to Altoona, Pennsylvania, and wasn't arrested until a sharp-eyed McDonald's customer saw him and told a worker, and that worker called 911.
All of which happened five days after Thompson's murder. So what is the purpose of all that mass surveillance in New York? What are its advantages? When does it work? What are its limitations? As shown by that five-day manhunt for Luigi Mangione. Faiza Patel joins us now. She's senior director of the Liberty and National Security Program at the Brennan Center for Justice.
Faiza Patel, welcome to On Point.
FAIZA PATEL: Hi, Meghna. Thanks for having me.
CHAKRABARTI: So first of all, can you bring me up to speed insofar as what you think is pertinent or most salient in terms of the information NYPD says it gathered on Mangione through surveillance over the past week or so?
PATEL: Sure. I think what's really salient here is for people to understand that technology is not magic.
Sometimes it works, sometimes it doesn't. And there is a very large human element in how technology is used and how well it operates in particular circumstances. So let's take the case of the images of Luigi Mangione that have been circulating across the media. You have a photograph from Starbucks, you have a photograph from when he's getting in and out of a taxicab, you have a photograph from a hostel, and then you have a number of these kind of blurred images, which seem to be taken from street cameras, likely NYPD cameras.
So you have a great deal of variation in the kinds of photographs that are available of Mangione. And if you compare those photos to, for example, how you take a photograph of an individual, right? If you're taking a picture, you take a picture head on, right? You're not, sometimes you take it from the side, but normally you're trying to get a person's sort of full face in it, and these images are not that, right?
And that, I think, brings us to one of the first limitations of facial recognition technology, which is the way in which the police will identify an unknown suspect, right? So here they would take the photographs, and they would run them through the NYPD's database, which has, I think, several million images in it.
And those are arrest photos and parole photos. Now you've got sort of two fault points here, right? So one, is the image quality good enough to be able to match with the database or the library. The second qualifier is, does the library have this guy's photo in it? And if he hasn't been arrested in New York, and he hasn't been on parole, it's not going to have that photo, so they're not going to get a match.
That's not the only option, though, that the police have. They have two other options. One is that, at least in around 2018, 2019, that timeframe, the NYPD was testing a contract with Clearview AI, which you may remember is a well-known company that has come under fire, been fined, and even been banned in several countries, and which scrapes images from the web.
Just anybody's images, yours, mine, people who are completely private figures, and has created a database of what it says are 50 billion images. That's a huge number of images, right? So at least at one point, NYPD could have run his image through Clearview AI's database. Another option that the NYPD has is that it can make a request to the FBI. The FBI has a very large database of images taken from driver's licenses and all kinds of other sources. And as of 2019, which was the last number I've seen, they had something like 640 million images. Still not Clearview AI's scale, but it is a pretty big database.
So those are like the three options that the NYPD has.
CHAKRABARTI: Yeah. Okay, can we just jump in here for a second? Because I want to be able to very surgically go through each of these options as you lay them out for us. But before I do that, I want to be clear to listeners that we did reach out to the New York City Police Department to see if someone could speak with us, or if they had a comment or a statement to make, regarding the issues that we're raising in this hour.
NYPD did not respond to our requests. We wish they had, but they did not. So first of all, let me just back up here for a second, because the very first thing that you said is quite important: that technology does not equal magic, right? And the reason why I think this is so fascinating to learn about in concrete detail is that I think the general public has been habituated into thinking that technology, especially when law enforcement uses it, is a kind of magic, right?
Because I was literally recalling an episode of Law and Order [SVU] from 15 years ago, where there was a terrible crime committed, and they all gathered around a computer. They had one license plate and a face, and through some magic system, they were able to trace a credit card. And then that credit card was connected to a MetroCard, and then that MetroCard was pinging in different places, and then of course, the credit card was also linked to a tolling mechanism in the suspect's car, and within half a day, they caught the person.
Now that's Hollywood, but I think a lot of people do feel like it should be that fast. Which is why this five-day lag in arresting Mangione, and that only coming after an actual human recognized him, seems to call into question: what good is all of this surveillance, if ultimately, at the end, you rely on an upstanding citizen to say, I think that guy is the guy I saw on TV?
PATEL: I think that's the sort of third piece of this too, right? You have to identify someone, but then you also have to find them, right? And if somebody is clever and doesn't use their credit card and doesn't drive a car, whose license plate you could recognize, but rather, uses public transport, or in this case, I guess a cab and some kind of public transport, and you can still pay in cash, in most of these places.
So there are ways to avoid being found. So I think that's also a piece of it, right? There's the image, there's whether or not you have a digital image that matches that image. And then even if you identify the individual, to actually locate the individual.
So those are like three separate phases, and each of those has fault points. And I think we've seen those, at least it seems from the outside, that we've seen those kind of play out in this story as well.
CHAKRABARTI: Do you think that those fault points, now laid bare, as you're saying, with the Mangione case, call into question how much money is going into this kind of very high-tech law enforcement surveillance?
PATEL: I think the efficacy of the systems that the NYPD and other police departments have in place has never been properly tested, right? You do hear, and I feel like this has to be true, that facial recognition has helped the NYPD. And other police departments and the FBI investigate crimes and even apprehend suspects.
At the same time, the question is, what is the cost of this technology? And to tell you the truth, we don't know how much the NYPD spends on these technologies, right? The NYPD's budget is notoriously opaque. So you can't just drill in and say, in 2022, the NYPD spent X hundred million dollars on surveillance technology.
You don't have that kind of granular information available from the NYPD. You also have only very limited transparency from the police department about its surveillance technology. Now, in 2020, the New York City Council passed a law called the Post Act, which requires the NYPD to make annual disclosures about its surveillance technologies, and to also include impact statements about who this technology is most affecting.
Because one of the big concerns around many surveillance technologies, including facial recognition, is this issue of bias, right? It has been documented that facial recognition algorithms tend to work better on white or Caucasian faces, and on men, than they do on people of color, and particularly Black people.
If you look at the wrongful arrests that have been made, based on facial recognition technology over the last couple of years, which have received a lot of media coverage, there have been six arrests that have been reported and all six of them are Black.
So you also have this racial dimension playing into it.
Part II
CHAKRABARTI: We're talking about the vast surveillance system or systems that are in place in New York City, run by the New York City Police Department, and their efficacy.
And their limits, as shown by the five-day-long manhunt for Luigi Mangione. By the way, here's a little summation of the route that alleged killer Mangione took on the morning of December 4th, when UnitedHealthcare CEO Brian Thompson was murdered. This is the route as compiled from security images obtained by CNN.
CNN: At 6:17 a.m., police say a camera at a nearby Starbucks shows the suspect buying a bottle of water and two energy bars. Two minutes after that, at 6:19, a surveillance camera near a deli on West 55th Street appears to show the suspect walking, and briefly stopping at a pile of trash.
11 minutes later, 6:30 a.m. surveillance cameras pick up what appears to be the gunman on the phone. You can see a potential witness walking right behind him. 6:44 a.m. The tragic moment. UnitedHealthcare CEO Brian Thompson leaves his hotel and crosses the street. He walks towards the Hilton Midtown. You can see the suspect wearing a backpack walk up right behind him.
Police say the gunman shot Thompson in the back and leg. Then seconds later, the suspect crossed the street and went through an alleyway between 54th and 55th streets. Police say he then got an electric bike and headed north on Sixth Avenue. Four minutes later, police say a camera spots a person believed to be the suspect riding an electric bike in Central Park.
12 minutes later, at 7 a.m., about 30 blocks away, from a Nest camera: more video of what appears to show the suspect riding on West 85th Street, but now without the backpack.
CHAKRABARTI: Once again, that's from CNN from the security camera footage on the day of the murder. Okay, so let's go back in time, though, because Faiza, as you said, there's been quite a bit of development and installation of lots of different kinds of mass surveillance systems in New York City.
There's one that you had specifically mentioned back in 2012. So here's a bit of tape from it. This is August 8th, 2012. Then Mayor Michael Bloomberg announces the launch of a new, quote, real time crime prevention and counterterrorism technology solution.
MAYOR BLOOMBERG: Today we're announcing the full launch of the Domain Awareness System.
This new system capitalizes on new powerful policing software that allows police officers and other personnel to more quickly access relevant information gathered from existing technology and help them respond even more effectively. In other words, we're finding new ways to leverage already existing cameras, crime data and other tools to support the work of our investigators, making it easier for them to determine if a crime is part of an ongoing pattern, and it will allow the NYPD to better deploy its officers.
CHAKRABARTI: Then Mayor Michael Bloomberg in 2012 touting the Domain Awareness System, which I understand at that time cost NYPD $40 million. So Faiza, you said a little bit about it. Tell us more. What exactly is the Domain Awareness System? Who did NYPD contract with or partner with to install it? What was its intended use case?
PATEL: So the Domain Awareness System was developed in conjunction with Microsoft, I believe. And what it does is that it pulls in different kinds of information, right? So it's gonna pull in arrest data, it's gonna pull in summons data, it's gonna pull in warrants, outstanding warrants. And then also if somebody calls in, say, to 911, those reports will be pulled in.
It will pull in the location of those calls sometimes. And then for individuals, it will also pull in any license plate information, right? So if you're caught going through one of the bridges or tunnels where you have to pay, it will pull that in, along with any associated address information, phone number, date of birth, and whether or not you have a gun permit.
So all of this kind of information is pulled together, along with the camera feeds, of course, which are critical to this, right? This all started in downtown Manhattan after the 9/11 attacks, when a network of cameras, mainly at private businesses, et cetera, was fed into the NYPD's Domain Awareness System. So you have all of this information coming in. And the theory is that when you pull it all together, it can be used to do two things. So one is that it can be used to prevent terrorism.
Presumably, if an NYPD officer is monitoring the system and he or she sees something that causes alarm bells to go off, they can then pull in additional data, identify the individual who's doing something of concern, and quickly find out who they are and what they've been up to, if they're in the system, right?
The second thing that it's supposed to do is to help solve crimes. So the idea being that, if you have a potential suspect, say you have a photograph, for example, right? You could then come up with that person's identity, or correlate them with crime complaints, et cetera.
So the idea is that all of this data is going to help you solve crimes, when you don't know who's carried out the crime, or the suspected crime. So that's the theory of this.
CHAKRABARTI: ... Can we just pause there? Oh, maybe you are about to answer the question I'm going to ask.
Because, okay, so critically you're saying, in theory, that is how the Domain Awareness System is supposed to work, right? Just to underscore, and to be sure I heard you correctly, to help solve crimes when you're not exactly sure who did it. Is that right?
PATEL: Yeah.
CHAKRABARTI: Okay. Good.
PATEL: That's the theory. I haven't seen it in operation.
So it's actually unfortunate that the NYPD didn't send somebody, because they could probably explain it better than I could, because I'm looking at documents and reports.
CHAKRABARTI: One hundred percent. Once again, for total transparency to listeners, NYPD did not respond to our requests.
But in this case, though, I guess where I was going is, you had said earlier that the efficacy of these systems has never even been properly tested.
PATEL: Efficacy is, however, a very complicated issue, right? How do you test efficacy? The place where system efficacy has been most tested is facial recognition software, right?
The National Institute of Standards and Technology, which sits in the Commerce Department and is the premier testing body in the United States, has conducted a number of tests of facial recognition technology. I think they did a big report in 2013, another one in 2019, and then some follow-ups in 2022 and 2024.
And what that has shown systematically, particularly between 2013 and 2019, is that facial recognition technology has improved dramatically, right? The error rates for the best algorithms, for the best tools on the market, are below 1% or 2%. So it has become more and more accurate over time.
At the same time, you have to acknowledge the limits of the kind of testing that NIST performs, right? For one thing, it's voluntary testing. You decide as a company whether you want your algorithm to be tested. And the system that the NYPD reportedly uses, which is something called DataWorks, I believe, was not amongst the list of tested systems.
So we cannot tell you, sitting here, whether the NYPD has a state-of-the-art facial recognition system or not. There's huge variation in accuracy between vendors. And so I think that's one thing.
CHAKRABARTI: On that point, one of the reasons why we don't know, and this is where I want to talk with you about the Post Act, right?
Is, according to the Surveillance Technology Oversight Project, which looks very closely at surveillance, electronic surveillance specifically, in New York, they're New York-based, and they gathered a lot of FOIA data, essentially. They say that NYPD, up until 2020, purchased nearly $3 billion in surveillance equipment that had previously been hidden from the public, because NYPD was filing those purchases under a quote, special expenses program, end quote, which avoided scrutiny.
And that was one of the things changed by the Public Oversight of Surveillance Technology Act.
PATEL: No, the Post Act does not look at financial disclosures. The Post Act is a kind of basic transparency measure that says, tell us what technology you're using.
Tell us what your standards are. Tell us what rules you have in place to make sure the technology isn't abused. It also actually does require efficacy reporting, but I don't think the NYPD has ever really provided answers on that. And impact, right? Who is it impacting?
Because racial bias is a huge concern when it comes to surveillance technology, just as it is with policing generally.
CHAKRABARTI: Okay so then, in that case, the Post Act is asking for a certain amount of oversight, but to be clear, you're saying that NYPD is simply not complying with that?
PATEL: I think two things are happening, right?
So the NYPD is putting out its Post Act-required statements, et cetera, but the statements themselves are often inadequate, right? And they're not sufficient to allow any kind of real oversight of this technology. So the NYPD inspector general, for example, I think this was last year, did an audit, and they found that the NYPD was not providing the kind of data they needed in order to evaluate its use of technology.
So you have a huge transparency gap here. Now, when the Post Act was being passed, I remember someone from the NYPD, I don't think it was the police commissioner, I believe it was their chief of counterterrorism, went on MSNBC and said, with this Post Act, you're just going to give terrorists the tools they need to avoid surveillance. Which is a joke, because the amount of transparency we're getting out of the NYPD under the Post Act is quite limited.
We have a rough overview of the kinds of technology they use. And there are, I have to say, some important things in there. But with technology, the devil is in the details, as we were talking about with facial recognition, right? It all sounds so easy when you see it on TV, but the reality is much messier.
And so without a better understanding of exactly what technology the NYPD uses in a particular scenario, it is difficult to evaluate, one, its efficacy; two, whether the department has enough safeguards in place to prevent its abuse; and three, what its racial impact is.
So all of these things really do come down to digging into the details of the technology.
CHAKRABARTI: Now, Faiza, you've actually pointed out something which is extremely important, and that is, New York City is one of the biggest, most vibrant, most diverse cities in the world, right?
And it's also, it was also the target of the worst terrorist act on U.S. soil in American history. So counterterrorism is a major part also of what the NYPD has to work on to prevent terrorist attacks. And to that effect, I understand that a lot of this technology that's being used was.
For example, some of it was developed for Iraq and Afghanistan. And so I wonder if there's an argument to be made that these technologies have been tested for efficacy, just not on U.S. soil.
PATEL: So I would actually make the opposite point, which is that technology that may be appropriate for a battlefield is not appropriate for an American city, and that's because the standards on a battlefield are generally much lower. Take, for example, use of force, right? Civilian police departments have a much higher threshold, at least in theory, for using force than the military does.
So you don't have the same constraints. On a battlefield, you're not operating within a framework of a constitution, right? And people's privacy rights, people's right to First Amendment rights to gather and not be picked up and targeted, on the basis of participating in protests, you have particular laws that apply, right?
Civil rights laws, for example. So you have a framework within the United States that is not the framework that applies in the context of a war. You have the laws of war that apply in that context, but they are not nearly as constraining as the legal framework that's applicable on domestic soil.
So I would say that. And the second point, which kind of relates to your efficacy question, is that you're looking for efficacy in a particular context, right? You're not looking for efficacy writ large. And I think that is, in fact, one of the big concerns even about the NIST studies on facial recognition technology.
And there was a report earlier this year by the U.S. Civil Rights Commission in which they pointed out that NIST had done these trials, but that NIST did not in any way replicate the real-world conditions in which facial recognition technology is deployed, right? Which are very diverse. If you are testing a facial recognition algorithm using mug shots, straight-on, full-face shots, like the one you saw in the newspaper of Luigi Mangione, that's one thing. The algorithm is going to be much better at those kinds of shots. Similarly, when you try to open your phone using facial recognition technology, that's going to be pretty accurate.
It's just trying to do a one-to-one match. But what police departments are usually doing is taking photographs that are not particularly good, whether they're taken from an ATM, or Starbucks or a hotel camera, right? They're not the kind of full-frontal photographs that you're seeing. So efficacy has to be tested in the context in which the algorithm is going to be used, for it to be truly useful in assessing the technology.
CHAKRABARTI: Okay. As we head towards the next break, there's one more moment from back under the Bloomberg administration in New York where he's talking about the Domain Awareness System.
This is former Mayor Michael Bloomberg.
BLOOMBERG: Those systems include a network of cameras, many provided by private businesses in finance, banking, telecommunications, and other industries, that are programmed to sound an alarm if they spot anything suspicious, such as an unattended package at the entrance of a building.
And most of those cameras are in Lower Manhattan and Midtown Manhattan. The center also includes 2,600 radiation detectors that have been distributed to NYPD officers on patrol, as well as more than a hundred license plate readers that are in place at bridges, tunnels, and streets. And several dozen mobile license plate readers are also deployed on the city's police cars, allowing suspected automobiles to be tracked in real time.
Part III
CHAKRABARTI: We're talking about how the manhunt and arrest of Luigi Mangione, how that case shows the limitations of the mass surveillance systems that are run in New York City by the New York City Police Department.
And to repeat once again, we did reach out to the NYPD to see if someone from the department could join us, or if they would answer questions that we had or even provide a statement. We did not hear back from the NYPD. Faiza, I want to talk a little bit more. We talked about the Domain Awareness System from back in 2012, but moving forward in time, you had mentioned Clearview, right?
Because facial recognition comes up a lot in this conversation, and I want to spend a minute talking about Clearview in more detail. So remind us: Clearview AI is essentially a company whose technology was pretty widely embraced by law enforcement agencies in multiple places. And as you said, they were scraping people's images from just about anywhere on the internet.
PATEL: Yeah, pretty much. Venmo, I didn't even know Venmo had pictures, but Facebook, Instagram, all the social media platforms. They were basically scraping images without the consent of the individuals whose images were put into their database, and their database has grown very dramatically over the last few years, right?
I believe there was an article in the New York Times that first talked about it, maybe three, four years ago, and at that point they had three billion images. Then the next number I saw was 30 billion images. And now the most recent number I've seen is that they have 50 billion images in their database.
And according to Clearview, I think some 3,000 police departments use its technology. That's about one in six out of all police departments in the United States. So it has a very large presence in this country.
CHAKRABARTI: And the promise of the technology, or the alleged promise, is that if you are looking for a particular person who, let's say, a surveillance camera caught at the scene of a crime, but you don't have that person's face on file already, and the FBI and their comparatively meager 640 million images don't have it either.
You could do a quick search with Clearview. And voila! Identify the person.
PATEL: Yep. That's the promise.
CHAKRABARTI: Okay. Just to describe to listeners how problematic this was, there was a huge case against Clearview, right? And I believe just this past summer, a proposed settlement was made public in June that would pay damages to members of a class in this class action lawsuit, who said their privacy had essentially been violated. But just a couple of days ago, I'm seeing here, Reuters reported on December 13th that 22 U.S. states and the District of Columbia are telling a judge that they oppose the settlement and do not think the privacy issues have been resolved by it.
So that's still going on with Clearview AI. Do we know if it's still in use by NYPD?
PATEL: So NYPD has said on the record that it does not use Clearview AI. On the other hand, there were FOIA documents released that show the NYPD certainly trialed Clearview AI back in 2018, 2019. Officers even had it on their phones and could just run someone's face through it. And I think they conducted some 5,000 searches. So as far as we know, based on their public statements, they don't have Clearview AI. But the FBI has access to Clearview AI, so they could, in theory, go through the FBI and get access to that database as well.
I think there's a lot of ways to get around the fact that they don't have access to Clearview AI. But I think one thing that's interesting to also, for maybe your viewers to understand, is that when you run someone's face against a database it's not going to give you the answer.
It's going to give you a list of options. Up to, I think the FBI one, for example, generates 50 options. So then you have an officer who has to actually look through those and decide, which one is the right one. So again, it's not magic. It always has this human component. But we do know that the NYPD has used, like other police departments, has also used facial recognition technology and even drones to monitor protests and the like.
So that, I think, is something that's really worth thinking about. We've spent all this time talking about the limitations of facial recognition technology, which I think the UnitedHealthcare case illustrates very well. But then there's the whole, in some ways even scarier, issue of: what if facial recognition technology works really well, all the time, and we have it everywhere? I think that piece of it is also really important for us to think about.
CHAKRABARTI: But again, we don't know, because there's still limited transparency in terms of how, why, when, and to what effect NYPD is using these technologies.
PATEL: True, but we do know that facial recognition technology is getting better and better, right? We know it's becoming more and more ubiquitous. More and more police departments are using it.
We also know that it is not regulated in the United States. There are a couple of jurisdictions that have banned it, but there is no federal law that regulates how, when, and where facial recognition can be used. There's certainly no law in New York that would constrain the police department. So basically, you have a kind of Wild West of facial recognition technology, where you have an incredibly potent and powerful technology which can be used to solve crimes, but which can also be used in ways that are really antithetical to a democratic society.
CHAKRABARTI: Can I go back to what could be argued is a Catch-22 that police departments find themselves in, maintaining our focus on the NYPD because of the counterterrorism part we touched on earlier: the argument that NYPD makes that if we talk too much about how well this stuff works, it's going to give potential terrorists insight into how to skirt the system.
I don't actually think that is an overblown concern, because this is one of those situations in which a single failure is a catastrophic one, as we saw on 9/11. And so it seems actually quite understandable that law enforcement would be reluctant to give too much information to lawmakers or to the public about how these technologies work and when and how they're utilized.

Is there some justification to that argument?
PATEL: ... Certainly, you don't want operational details from the NYPD, right? You don't necessarily need to know how they're conducting their operations. But you do, I think, in a democratic society, need to understand the capabilities, and you also really importantly need to understand the safeguards, right?
When we talk about facial recognition technology, I mentioned that it's not just a question of you put something in the computer and you get an answer; you get a series of options that an officer then has to evaluate, right? So it's important, for example, that an officer has special training on how to actually utilize facial recognition results, and training on how to avoid the well-known phenomenon of automation bias, where you think, oh, the computer said it, it must be right. So all of these kinds of things are really important. And the NYPD says that it does have specially trained folks to look at ... facial recognition results. But the Clearview AI documents show that officers were just using it, and it wasn't just the specially trained officers; people just had it on their phones.
So you do need to have some understanding of how it's being used. And I think it's really important, when we think about the justifications for the technology, to also spend some time thinking about the use cases of the technology which are clearly abusive and that we as a society don't want to see take place.

And how do we prevent them? I mentioned protests before, right? The use of drones and facial recognition technology to identify people at protests. This is something that has been done by the NYPD, by other police departments; at least six federal agencies have done it.
And what law enforcement will say is that we've done it because sometimes at protests there's criminal activity taking place, right? January 6th is another example. And so we're using that to identify suspects, which seems reasonable. At the same time, there's literally nothing on the books that's going to prevent those same agencies from using that technology simply to identify people who are at a protest and then surveil them or harass them thereafter. And you can easily imagine that kind of situation happening, right? Similarly, we know that China, for example, uses facial recognition technology that purports to be able to identify individuals who are Uyghurs.

And we all know about the way that the Chinese government treats the Uyghurs. Imagine some tech CEO says, wow, this administration is going to do a mass deportation effort. Why don't I sell them a piece of software that can identify individuals who appear Hispanic, because they might be illegal immigrants?
You've got to think about: what are the rules around this technology? How would we prevent that from happening? And right now, we don't have any of those rules.
CHAKRABARTI: The scenarios that you just laid out, let's add a little thought experiment to this, right? Let's presume for a moment that these technologies are actually really excellent at what you said, right?
They're identifying people at a protest, or maintaining databases on folks like that, et cetera. But at least here we have an example of a very high-profile crime in which these surveillance systems were, as far as we know, not good at helping law enforcement very quickly track down the perpetrator of a high-profile murder, and I'm saying this sitting here at our home studio in Boston, Massachusetts.
And back in 2013, there was that terrible Boston Marathon bombing. Okay, that was more than a decade ago now, so the technology has gotten a lot better since then. But very quickly thereafter, there was surveillance footage of the bombers at the location where the bombs went off. Their identities were released to the public. And it still took a full week before they were found, and they were only found after they shot up another town, okay? And then law enforcement and state officials still took it upon themselves to lock down a million people in eastern Massachusetts in the name of finding these guys.

And they ended up finding them after the lockdown was lifted, when a homeowner in the same town where the gunfight took place early that morning found Dzhokhar Tsarnaev bleeding in his boat. So I only raise that as an example, because it's a very visceral one to me, having lived right through it: the entire apparatus of law enforcement and the surveillance systems that were available at the time were totally ineffective for the highest-profile case in this region.
It does call into question how good even 11 years later all of these things are at quickly tracking down a criminal or alleged criminal, let alone preventing crime.
PATEL: I think those are valid questions, and it's something that really does need to be studied. Unfortunately, what happens here is that law enforcement tends to be quite resistant to having evaluations of its systems done in any sort of independent manner.

That's why I think, for example, the [OIG-NYPD] is such a valuable part of our oversight system. Because they at least, again, in theory, do have the ability to dig into these systems and to make public the good and the bad and all of that. But I also think we have to ask ourselves, to some extent, how much surveillance we want, right?
For example, you talked about the Boston Marathon bombers, right? In one sense, and I think this is true also in the case of Mangione, some of the technology worked well, right? We had a picture of this guy up pretty quickly, and so we had the photo, and it was everywhere.
You couldn't miss it. We have lots of photos. So that's one piece of it. The second thing is to be able to track somebody on an ongoing basis. If you're a police officer and you get a call, right, about Mangione, for example, that there's been a shooting in Midtown outside the Hilton.
There's this guy. This is the description. He's wearing a backpack, et cetera, et cetera. Now you're looking at all of the surveillance cameras in the area and you're trying to identify, pick out this one individual.
CHAKRABARTI: Faiza, hang on for just a second. I just have to quickly say, I'm Meghna Chakrabarti.
This is On Point. But go ahead. So you've got to pick out someone wearing a backpack in New York City, right?
PATEL: And the technology helps you do that, right? But it isn't going to do it all for you. So I think there has to be some perspective here, and I'm someone who spends her life being critical of surveillance technologies and law enforcement uses of them.

But I think expectations are also sometimes a little higher than they need to be, given that, again, it's not magic. It requires a lot of footwork and a lot of grunt work as well. So I think it's important to keep that in mind.
CHAKRABARTI: So we have about a minute left, and I'm wondering, given that AI itself is an area of technology advancing in enormous leaps and bounds virtually every minute, how much more, I don't know what the right word is, comprehensive, even invasive, do you think these kinds of law enforcement technologies could get?
PATEL: I think the scary piece, particularly with facial recognition, is real-time facial recognition, right? Right now, they can basically track you through the city, like they did with Luigi Mangione, but they don't have real-time facial recognition. It's not like China, where the camera sees you, the camera recognizes you, and the camera knows who you are in the moment.

Now, that might be really helpful in a case like this, right? But at the same time, that, I think, is a really dystopian future, which puts way too much power in the hands of the government to track our everyday movements, to see whether we go to the mosque or the synagogue, if we go to an abortion provider, if we go to a gun store. Everybody's civil liberties are at stake when the government is tracking you consistently and persistently.
This program aired on December 16, 2024.

