
Why America isn't ready for the AI revolution

Pages from the Anthropic website and the company's logo are displayed on a computer screen in New York on Feb. 26, 2026. (AP Photo/Patrick Sison, File)

Dean Ball was a top adviser on AI for the Trump White House. He authored its AI policy. But now he says the way the Trump administration is strong-arming tech companies is a foundational threat to the nation.

Guest

Dean Ball, senior fellow at the Foundation for American Innovation. He writes the Substack Hyperdimensional.


The version of our broadcast available at the top of this page and via podcast apps is a condensed version of the full show.


Transcript

Part I

MEGHNA CHAKRABARTI: Until August of last year, Dean Ball was a senior advisor at the White House Office of Science and Technology Policy. There he was the primary author of the Trump Administration's AI Action Plan, which was released in July of 2025, a month before Ball left the administration.

Dean Ball has a long resume when it comes to thinking about AI and government, and more specifically, how we govern. His past posts include the Hoover Institution's State and Local Governance Initiative at Stanford University and the Artificial Intelligence and Progress Project at George Mason University.

He's also been at the conservative Manhattan Institute and is a former director of the Adam Smith Society. He's now at the Foundation for American Innovation and writes the AI-focused newsletter Hyperdimensional. So with a background like that, you can understand why Ball thinks about AI not just as a tool, but as a means to understand what he sees as the accelerating breakdown of the American political system.

He recently wrote, quote: "Open societies have drifted from the principles they once embodied, and those principles must be reimagined for modern ears if we are to embody them in the future." And he joins us now from Washington. Dean Ball, welcome to On Point.

DEAN BALL: Meghna, thank you so much for having me today.

CHAKRABARTI: Let's start off with some definitions so that we all approach this from a common point of understanding. When you're talking about the American Republic or the American political system as you see it, what specifically are you talking about?

BALL: A great question. So I think what I'm really referring to is, first of all, yes, the American Republic is typified by documents like the Constitution and whatnot, but I'm actually more specifically referring to the kind of post-FDR, post-New Deal American Republic, with this sort of large technocratic nation state.

As those institutions currently exist, large regulatory state, that kind of institutional complex that basically everyone alive today was raised in. I'm really referring specifically to that. That's what I think is most centrally challenged by AI, though the broader structure too of the Constitution, I think, will collide with AI in some interesting ways.

CHAKRABARTI: Okay. We'll definitely get to that, but what do you see as the most vulnerable parts of the modern American nation state, what parts are most vulnerable to AI?

BALL: I think, first of all, the regulatory state as it exists is predicated on this kind of notion that there are coordination costs that the government needs to solve, right? And zoning, for example, just to take one crazy example, is a big chunk of that. Zoning in cities is based on the idea that of course every property owner can't negotiate with every other property owner. But AI changes that in some important ways.

Maybe not exactly in zoning, but zoning might be a good example: now it's possible to imagine everyone having agents that negotiate with other agents, and maybe we don't need quite so much state intervention to manage the process of zoning in cities. And then at the federal level, you might think of certain kinds of informational services that the federal government provides.

Just again, as an example, we have this large bureaucracy that does weather forecasting, meteorology, in the form of NOAA. And it's a very cool bureaucracy that I've been a fan of for a long time, actually. But one wonders if all of that kind of gets disintermediated by basically deep learning models of the climate in the fullness of time.

And so there's lots of things like that.

CHAKRABARTI: So, it's interesting, Dean, because you've actually picked two examples which I think, to a lot of ears listening, would say: Hey, this isn't necessarily a bad thing. If our understanding, or our ability to predict the weather, became more accurate and more consistent through the use of AI, then bring it on.

And zoning is one of my personal favorites, actually. Anything that can actually improve the process of zoning, perhaps reduce the friction in it, is also a major improvement in governance. So these things don't necessarily spell out to me the end of the American Republic.

BALL: I agree with you. I wanted to pick two positive examples to start with. Let me give you another one that maybe is more centrally challenging. So take the Food and Drug Administration. It is predicated on a mid-century conception of disease, which is the notion that, like, Parkinson's is one specific thing or cancer is one thing. But our increasing understanding of disease is that it's really much more complex than that.

And everyone's instantiation of a particular disease is different from other people's, right? So your Parkinson's is not quite the same as my Parkinson's, right? Or the cancer that I might get, even if it's technically the same, if we both get brain cancer, say, we might actually have very different things, even if it has the same name.

And so diseases aren't these discrete things, which means that the whole notion of the clinical trial starts to become outdated over time. What do you do with this huge regulatory structure, and this industry that has adapted around this regulatory structure? How do you achieve an institutional transformation to something like, say, completely personalized medicine, designed all by AI?

And obviously I'm talking in extremes here, just to give your audience a sense of where things might be going. But it's really this question of: there's just a lot of turbulence here. There's just a big transformation that has to happen, and it's not obvious to me that we have the kind of flex in the joints as a republic to continue doing that.

CHAKRABARTI: Okay. Let me take this even a little bit deeper, because my conception of an ideal democracy, and let's stick with regulation, is that it's not just the technological aspects of how regulation is created and how, as you said, government as the mediating force successfully oversees regulation.

To me, it's all predicated upon a representational democracy that ideally is supposed to, through representation, create sets of regulation that are for the broad public good, for the common good. And I wonder about that basic assumption of what government is supposed to do. Now, I know that there is a lot of disagreement over that, but I'm going to just offer that as a basis here.

I wonder if AI threatens that.

BALL: I think in some ways, yes. So one thing that's important to disentangle here is when I have written in the past about kind of the death of the Republic, I'm not quite saying that AI is the thing that is like uniformly responsible for that.

What I'm more saying is that our Republic has gone through multiple phases of life over time. There's, of course, the common argument that America has had multiple foundings in its history, and the New Deal is often described as a new founding. And I'm saying that, for a wide variety of reasons, some of which are technological, but some of which are due to many other factors, including culture and the breakdown of politics and all sorts of things.

That era of the American founding seems to me to be coming to an end, and we're either coming to a new founding of the Republic, which is what I hope for, or we're coming to something more troubling, which would be something more like the end of the republican form as we know it, and the entrance into something new, something perhaps more of an imperial presidency, one might say.

CHAKRABARTI: Okay. So how, do you think, is AI really accelerating this process? You've written about your pessimism regarding American governance for quite some time, right? I'm just reading a quote here from something that you wrote a while ago. Where is it here? Okay. Sorry, I'm all here, Dean, for you. I'm just taking some time to catch up with myself. You write that the prototype of the American political elite, for both parties, for much of your life now, is, quote, the same as before, but now notably worse.

And that's been the theme of American politics for at least 20 years. But consider the almost unimaginable advances AI has made in just the past five years. If we already have an American political elite that isn't necessarily doing the best job at governance as it is, and then this technology that barely anybody understands comes along and transforms entire sectors,

it doesn't give one much confidence that the benefits of those tools will be maximized with a governing system like the one we have.

BALL: Yes, I agree. I share that intuition. And I think the biggest problem is that, in many ways,

the great uses of AI in government are the opposite side of the coin from the terrifying uses of AI in government. To give you an example, one of the things that came up in the dispute between Anthropic and the Department of War, it was a contractual dispute that ended up becoming something much larger.

The article you just quoted from is about that contractual dispute. One of the things that came up is this notion of domestic mass surveillance, and AI systems being used by the government to engage in domestic mass surveillance. And the basic reality is that under current American law, domestic mass surveillance isn't quite illegal, exactly. It is, but it's not. It's complicated, is what you would say. And for all practical purposes, I think it is legal in a lot of ways for the government to do. The problem has been that, as an economic reality, it doesn't really pencil out.

The math doesn't pencil out to have a human intelligence analyst tracking me or you, you know, what we are doing. The thing that's different about AI --

CHAKRABARTI: Economic, I would say economic rationalism isn't necessarily always a driving force behind political decision making.

BALL: Yes, but it is a driving force behind budgets.

And so, like, the intelligence community is allocating limited numbers of human analysts, who are paid lots of money, and they're going to be tracking terrorists and people like that. But all of a sudden AI replicates many of the functions of a human intelligence analyst, or maybe even all of them, when AI more or less totally automates that role.

Because what does a human intelligence analyst do? They basically sit at a computer, they look at information that comes in, they analyze that information, and they output text that is analysis of that information, right? If you can automate that function, then suddenly it becomes possible to imagine every single American being surveilled in this way.

Part II

CHAKRABARTI: Let's dig more deeply into the Anthropic/Department of Defense battle that happened, and just to give the very broad-brush strokes of the background here.

In the Biden administration, the government struck a deal, wrote up a contract with Anthropic to use Claude, their AI platform, in various classified contexts. The Trump administration, I believe, reviewed that and approved it. But later on, Secretary Pete Hegseth objected to certain things that Anthropic had insisted upon, which included, as you were talking about, not using Claude for the mass surveillance of Americans by the DOD, or allowing autonomous decision-making by Claude for the use of lethal weapons.

Just quickly remind us how you see the flare-up having happened.

BALL: Yeah. The timeline is a little bit interesting, and different parties have different stories on this. But as far as I can tell, basically in the fall of 2025, officials at the Department of War, starting with the Undersecretary of War for Research and Engineering, a man named Emil Michael,

came to the conclusion that the contractual limitations that Anthropic placed on the Department of War's use were just unacceptable. They were, as you say, on autonomous lethal weapons and domestic mass surveillance. And they sought to renegotiate the contract. It seems Anthropic engaged in those negotiations.

They did not reject the renegotiation out of hand. That went on for several months, and then around January, February of this year, things really came to a head and things started leaking to the press. It's unclear which side was leaking, but someone was leaking, probably both. And it got increasingly hostile.

And then eventually Secretary of War Pete Hegseth said that unless Anthropic agreed to allow all lawful use of their models by the Department of War, no restrictions, in other words, they would declare Anthropic a supply chain risk. And a supply chain risk is a regulatory designation, typically intended for foreign adversary technology.

And it would prohibit the use of Claude, not just by the Department of War, but also by any Department of War contractor.

CHAKRABARTI: Which is a lot of companies.

BALL: That is essentially most of the Fortune 500, if not literally all of it. Yes.

CHAKRABARTI: And you were one of many people who saw that as the government using its power to try to essentially destroy a private sector company, to destroy Anthropic.

BALL: In procurement law circles, this designation is referred to as the death penalty.

CHAKRABARTI: Okay. So then let me ask you: when the government decides that it does not like contractual terms, and once it has tried to renegotiate them and done so unsuccessfully, what are the arguments against the government using its power to get the terms that it wants?

BALL: First of all, I think the question is: should the government be able to abuse statutes which are intended to be very harsh punishments, primarily for Chinese companies, really, companies like Huawei? Buying telecommunications equipment from Huawei, for example, and putting it on a military base would seem to be a very bad idea. That's a Chinese military-linked company, there could be spyware in it, et cetera, et cetera. So I think it's more about the abuse of the power.

I think the government is free to say, we're not gonna give you contracts, and we're also gonna explain to the American public why we're not giving you contracts and why we think this is wrong. I think that's all fine. That's all well and good. It's the abuse of power in this unilateral way that I think is most problematic.

CHAKRABARTI: And as you wrote in a widely distributed Substack post of yours, titled Claude, there's no law prohibiting a company from saying, you cannot use our technology to do X. But the Department of Defense was saying Anthropic's limitations go beyond that. They're not just functional limitations.

They were saying Anthropic is preventing us from doing what the DOD wants to do, that it was essentially a policy limitation. And you disagree with that as well?

BALL: So I think that contractual vehicles are probably not the best place to have policy disagreements. So I'm sympathetic, in other words, to the argument that the Department of War makes, which is basically that this San Francisco AI company does not have the right to determine when autonomous lethal weapons are ready for prime time. That is very much a decision that the American people have vested in the President and that the President has delegated to officials at the Department of War.

I'm in complete concurrence with the Department of War about that. And so I think that Anthropic is free to have whatever terms it wants. But if I were personally advising Anthropic, if I had been advising them on that contract, I probably would've said to them, like: Hey, this feels to me like it's more of a policy issue.

And so it sounds like the kind of thing you should be talking to Congress about and not so much trying to accomplish through the means of a contract.

CHAKRABARTI: But going back to the abuse of power issue, right? Which I think is the one that connects to your bigger concern about watching the slow degradation of our current American Republic. On the other side, the Pentagon had the option, in an ostensibly free market system, to not contract with Anthropic. Instead of threatening to kill Anthropic, they eventually did move to OpenAI. So the issue is: why exert that abuse of power when there were other options already available?

BALL: Yes. Yes. And actually, one other thing about the contract that is interesting, and that connects to the broader theme of the problems with our republic, is that our republic has a lot of different pressure valves, right? Political impulses are supposed to be exercised through a wide variety of different mechanisms, one of which is Congress passing laws.

And when that pressure, when that outlet is gone, you start to see policy try to creep in through weird other channels. The political impulses act through different pressure valves, and all of a sudden, things that were not intended to accommodate big policy decisions are being used in that way.

And so I think the contract is actually an interesting example of that, and of the broader issue of making public policy through procurement law. This has been going on for decades, to be clear, this has been going on for a very long time. But there's this issue of using these little bureaucratic tools of procurement to create massive public policy effects.

That's not healthy. It's a bad outlet for doing that. We have outlets that are designed for having those broader policy conversations, and legislation is one of them. And so the fact that Congress is broken in that way, this is an epiphenomenon of it that I think is interesting.

CHAKRABARTI: Let me read directly from what you wrote in this Substack post, because these, I think, are the lines that really drew a lot of people's attention. You wrote, quote: Essentially, the United States Secretary of War announced his intention to commit corporate murder. And it does not change the message sent to every investor and corporation in America: do business on our terms or we will end your business.

And then you go on and you say: this strikes at a core principle of the American Republic, one that has traditionally been especially dear to conservatives, and that is private property. And let me skip ahead a little bit more. You say: This threat will now hover over anyone who does business with the government, not just in the sense that you may be deemed a supply chain risk, but also in the sense that any piece of technology you use could be as well.

So talk to me about this from, as you said, from the conservative point of view of why this was so threatening.

BALL: Yes. So I think there's this basic idea that Anthropic has this intellectual property of theirs, right? And it's Claude, and it's all the know-how that goes into training Claude and all.

And part of that know-how, by the way, is the principles that they want to bake into Claude, which are very important to Anthropic in particular, though the principles baked into the AI systems of the major AI companies are important to everyone in the industry, right?

But Anthropic in particular is known for how much it cares about this thing called alignment, which is: is the AI system aligned to a coherent set of values? And they put a lot of thought into that. This strikes me as just a quite core aspect of really both speech and property, right?

You can really look at it both ways, I think. And this idea that the government is going to come in and say, no, these are the values that you must use as a private actor is a little bit like the government coming in and saying, I don't think that book you wrote is well aligned with our values. And so you need to rewrite your book. And of course, if the government did that, a lot of people would scream bloody murder. And, by the way, if a Democratic government had done any of what we're talking about right now, Republicans would be completely up in arms.

I think the problem is not just that Republicans are doing it. It's this kind of yo-yo effect where the Republicans have now broken this norm, the Democrats will come in and they'll do it maybe even worse in some ways, and then we'll just go back and forth.

And in the meantime, you have all these private actors that are trying to decide how to do business in this country and they don't know what to do.

CHAKRABARTI: You even talk about, though, how this was a moment where threats to civil liberties came into sharp relief in terms of the advances in AI. Because I do believe you said that how government uses AI will of course be of paramount, that was your word, paramount importance. Because we don't necessarily want AI to be used for mass surveillance of Americans or for autonomous control of lethal weapons. We should try to build in the safeguards so that those things don't happen.

BALL: And to be clear, I'm actually perfectly fine with autonomous lethal weapons.

I think that's an inevitable part of the future. I think we want to make sure they work really well and they're extremely reliable. I don't think we're quite there yet, but I think that's an inevitable part of the near future. The domestic mass surveillance thing is really interesting though, because there are many things that you do want the government to do.

That I could frame to you as domestic mass surveillance, right? So say, for example, something AI-enabled. No one likes litter, so we'll take the example of litter. What if there were drones flying over a city, owned by the city government, and they could spot whenever anyone littered anything, and they could immediately fly over and grab that piece of litter, and there was constant environmental remediation going on?

That sounds like a cool public service. Maybe you don't like it, maybe you do. I don't know. But the point is that it sounds pro-social, and yet that is domestic mass surveillance by the government. It's the same thing. So there are good uses of AI that involve the government having, let's say, situational awareness over the turf that it controls. You might also say Golden Dome, defending our airspace. That is a kind of domestic mass surveillance too. There are all sorts of things that you'll be able to infer using the sensors that Golden Dome is using to defend us from airborne threats. So it's not so much that we don't want the government to have these abilities, it's that we want the government to use the abilities for good and not evil. And that's the hard part.

CHAKRABARTI: Dean, I'm really so glad that we're able to have this conversation, because how you're speaking now seems to me more nuanced, or at least less dimensionally impassioned, than the Claude post led me to believe. Because you wrote some things where I really was like, wow.

One of the nation's leading thinkers on AI believes the following. For example, you say with each passing presidential administration, American policymaking becomes yet more unpredictable, thuggish, arbitrary and capricious. A gradual descent into madness. It is hard to know at what point ordered liberty itself simply evaporates and we fall into a purely tribal world.

BALL: And I do believe that.

CHAKRABARTI: Yeah. And you tie that back into what you think we should learn from the Pentagon Anthropic fight. Tell me more.

BALL: Yes. So I think, again, one of the points that I really stressed in the Claude piece, and unfortunately, I think it's impossible to make a point that strongly and not have it be associated with criticizing this administration in particular.

But I really didn't mean it just as a criticism of the Trump administration. This is a trend that has been going on my whole life. In fact, I would make the case that it picked up steam significantly with President Obama, especially in his second term, when he started to use the power of the pen, right? I'm going to creatively interpret existing laws and try to jam a bunch of my policy priorities through, because I can't work with Congress. Now, I think a Democrat could respond and say, that's because the Republicans in Congress wouldn't let Obama do anything.

And okay, fine. But that's a sign of a breakdown right there. This has been going on for a long time, and what it ultimately amounts to is a quite thuggish and unpredictable state. We have in this country examples of laws, of regulations, that change from president to president,

that criminalize things and decriminalize things. So it's possible to commit a crime in this country under one president, and then do the exact same thing and not be committing a crime. And Congress has not changed the law at all. Your representatives have not changed the law; arbitrary bureaucratic power changed the law.

That is not how a republic should work at a very core level.

CHAKRABARTI: Okay. Let me ask you something. Why did you leave the Trump administration?

BALL: It's a good question. It wasn't because of any animosity, to be very clear. That's one thing I should say up front.

I ultimately came to the conclusion that I would be better able to contribute elsewhere. I'm not a public administrator by training. A lot of my background is in state and local policy. I led the drafting of this AI strategy for the country, and I think I was well suited to do that task.

But the actual pulling of all the levers of power and working things through the bureaucracy is not my specialty. And I think you probably want someone with more experience in the federal government doing that. I came to the conclusion that I'm better off as a contributor to the public conversation.

Part III

CHAKRABARTI: Dean, I'd actually just like to take a step back a little bit and hear from you about what your assessment is right now, not just of the administration, but of Congress as well, in terms of their capacity to really understand the revolution that's coming with AI, and therefore government's capacity to do the right thing for the American people.

BALL: I'll step back even further and say that I am regarded as an expert on this topic, and I don't assess my own capacity on those things very highly. So what I mean is that I think we're standing at the beginning of a path that is like the industrial revolution, or something even bigger than that.

And so it's like being in the presence of early steam engines or the early locomotives, and then trying to imagine the automobile, and the suburbanization that would happen downstream of the automobile, and electrification, and all these other things, right? It's just very hard to know.

So I think all of us are operating without a net. And I think it is an unfortunate reality that Congress and the government lack a lot of the technical chops that you really need to do a great job in the governance of this technology. I do think that there's more expertise and interest in this than you might think in the government.

And also, I think the American people themselves are wiser than people in D.C. tend to give them credit for. And so I think if you look at the polling on AI, the American public is equal parts excited, anxious, and confused. I think that's about right. I actually think that's about right.

CHAKRABARTI: Yeah. It's interesting because you mentioned polling, I was thinking about confidence in government itself right now. And I think I saw a recent poll that said there are more people in America who believe in the flat earth theory than have confidence in Congress, for example.

BALL: And so you have this problem in America where the American public very clearly wants AI to be governed and regulated in various ways, and yet they actually don't trust the government to do it. That's the part, I'm regarded as something of an AI regulation skeptic, but all the pro-AI-regulation people will say, the American people want this. The part they don't tell you is that the American people do not trust the government to do it. So what do you do? How do you solve that problem? This is actually where my technical work and my policy work come in, as opposed to my public intellectual work.

And I am a believer that we are going to need to create new types of private, publicly overseen, but fundamentally private institutions that can assist in the governance of AI, especially the technical-expertise-heavy parts of it.

CHAKRABARTI: So let me ask you, though. We have a specific example we can lean on in terms of the Trump administration's AI Action Plan, right?

Which was released in July of last year, just before you left the administration. There were some deadlines, if I could put it that way, or benchmarks that the action plan had laid out, that the administration said it was going to achieve. And I'm seeing reporting here that that hasn't happened.

Do you want to talk a little bit about that?

BALL: Yeah, so the action plan is a sort of 90-point to-do list, essentially, is the way to think about it. It's also a strategy, a strategy for the whole country. And then in addition to that, it is a 90-point to-do list for the federal bureaucracy. The implementation timeline, we were thinking about it as something that would be executed over a kind of 18-to-36-month timeline.

So we're still pretty early. As far as I can tell, actually, big chunks of the action plan have been implemented quite well. There are some things that the administration has changed strategically, for good reason. There are some things where they've changed strategically and I don't necessarily agree, but, whatever, these things happen in statecraft. And then there are also new things that the administration has done, such as the feud with Anthropic, which I think is really quite inconsistent with the spirit of the action plan.

CHAKRABARTI: Yeah. Okay. I see here that, and I'm sticking with the action plan because, again, it's something that you know forward and backwards, and it's a good sort of concrete way for us to measure these bigger questions about the government's ability to operate, or to best use AI, in this new world that we're living in.

So here are three, at least three, provisions that apparently were due on March 11th. And this is in line with the action plan. And I should say the date comes from President Trump's executive order that he also issued last year. So the FTC was supposed to issue guidance on how consumer protection laws apply to AI models.

Commerce was supposed to review and publish an evaluation of state AI laws. Interesting conflicts there or disagreements there. And the FCC was tasked with considering whether to create a national AI reporting and transparency standard. But it seems like none of those things have been done yet, or at least not made public.

BALL: Yeah, so those things, I should say, those all do come from an executive order, which happened in December. So I was not involved in the drafting of that executive order. And I think, I would say conceptually that executive order is broadly consistent with the action plan, which talks about the problem of a state-by-state AI regulation patchwork.

But as to those specific things, whether they've been completed and just not made public, it's hard for me to say.

CHAKRABARTI: Okay. The reason why I bring it up is that, yes, I understand that when we're dealing with revolutionary change, we can't expect new regulations, or even better, laws coming from Congress, to appear overnight.

I understand that. But again, just getting back to what we mutually agree on, the American public's distrust in government's ability to do things. It seems like there's nothing to dissuade people from that distrust when even these early benchmarks, laid out by President Trump to his own executive branch, aren't being met. It's not even that he's telling Congress, I'd like you to do X, Y, and Z. It's his own executive branch, and they can't meet these initial deadlines. I guess what I'm saying is, people just don't have faith that the government knows what it's doing when it comes to AI.

BALL: Yeah, no, I think that's very true, that people don't have faith.

CHAKRABARTI: Are they right to not have faith? I wonder how you, as a former member of the administration, just looking back at what the administration has done over the past almost year.

How do you feel about it?

BALL: Administrations are not monoliths. So what I would say is that there are some parts of the federal government that are getting up to speed in a way that I think is quite good, and are making good judgments. And there are other parts of the federal government where, at a basic level, there's just a strategic incoherence.

And I've been critical of the areas where I think that's true, and I've praised them in the areas where I think there is good technical and policy judgment being exercised. So it's uneven. But broadly speaking, I think the American people are not wrong. And the American public writ large is not looking just at executive order deadlines.

The American public is looking at the texture of governance in this country and saying, man, do I really trust these people to do this whole thing, to govern an industrial revolution? And I think they're basically right to be quite skeptical of that.

CHAKRABARTI: Okay. Dean, I'm surprised that you're sounding more sanguine in this conversation than any of your writing has led me to believe.

I'll be frank about that. Because you write about how there's a revolution coming, and you talked about the founding of a new republic. You mentioned how AI may collide with the fundamental ideas of what this nation is, as laid out in the Constitution, the idea that our democratic, small-d governance system relies on virtuous people making virtuous decisions, and that AI could threaten all of that. And yet I hear you now saying, government's complicated, some people are doing a good job, some people aren't. The American people are desperate for someone in the know, like you, to give it to them straight.

What makes you so worried about what could happen with this new founding of the republic? What is really making you worried?

BALL: Two things can be true at once. It can be true that the government is uneven and complicated, and also that I'm quite worried about the long-term institutional trajectory of AI.

So what I would say is that we have very real challenges that we are going to face, and I think they might very dramatically shake the institutional foundations of the country. On one level, it's not even really that conceptual of a conflict. It's much more just that AI will be a kind of everything-everywhere-all-at-once sort of phenomenon. There will be cyber crime massively accelerated, and it will cause geopolitical turmoil, and it might cause economic turmoil. And the government will have to be managing all of these things, each of which we might question whether they can do, and they'll be having to manage all of them at the same time.

And then there are also more fundamental questions, which get to, as an example, this notion. It's not so much the Constitution. I want to be clear, the Constitution can be tweaked in various ways, but very broadly, I think we got that order correct. The bigger issue is that in all the laws we've passed in the 250 intervening years, we have given the executive branch, and the president in particular, an enormous amount of power. And the thing that made that all okay, and people like me have been complaining about it for as long as this country has existed, right? The thing that made it okay is that enforcing those laws, exercising that power, cost money. It cost people's time and attention, which costs money.

The thing that AI makes different is that the sort of conscientious attention that we would typically associate with a salaried human bureaucrat is now approaching a cost of zero. And so the problem is that you will be able to enforce all manner of laws, right? I don't just mean draconian surveillance of the American people. What if the EPA enforced every aspect of environmental law, completely uniformly, all the time? I guarantee you that would be very negative for many American businesses, even though it's just enforcing laws on the books, because we've written very broad laws. And so that is the problem that we are encountering.

That's one of the problems we're encountering.

CHAKRABARTI: So what you're saying is that the hurdles, the friction, the slowing down of process that comes with people having to be the ones who enforce laws, as that approaches zero, it opens the opportunity either for a highly functioning and efficient government or for totalitarianism.

BALL: Yeah, because no law was written with universal enforcement in mind. An assumption that's baked into the drafting of every law is that we won't enforce it perfectly. And sometimes enforcement is actually intended by the drafter to be very imperfect, right?

That's very common in lawmaking. And so when you can do perfect enforcement, that is the problem. The recipe for totalitarianism already exists in America, and it has for a hundred years. The only thing that kept totalitarianism at bay was the economic and social reality of having to employ people to do it. AI changes that profoundly.

CHAKRABARTI: Yeah, I think it's very human. This is the dark side of my belief in human nature, that when power is so easily accrued, and AI can do that, as you're saying, it's very hard, almost impossible, for individuals to say, no, I don't want that power. Amplify that to the presidency of the United States, whomever is in office, and you get, as you said, effectively a modern-day emperor. So what are the restraining mechanisms that we have in place, or need to put in place, around that, when this is a technological refounding, as you keep saying, of the republic?

BALL: Yes. I think there are a number.

This is where, as you mentioned, it seems like there's a discontinuity in how I sometimes talk. I wear a public intellectual hat and I wear a technocrat hat, and sometimes I do both at the same time. So I think basically we have to take small steps from within our current framework and paradigm.

Because that's all we know. That's all we have. We can't reinvent everything from the top down all at once; that's not wise. So we have to take small steps from within what we know and work to transition toward some sort of new configuration of the republic. One of the things that new configuration could look like: AI means organizations are going to be able to do lots more stuff, because they'll be able to scale their workforces in dramatic ways.

That goes both ways in the context of the separation of powers. And it has, I think, really interesting implications for Congress specifically. Because right now, Congress is so staff-constrained. The congressional offices are so small, and they're overseeing this massive government.

What if we had a Congress that was way more capable of oversight of the government, not just because they've adopted AI, but because we actually institute technological mechanisms by which Congress's AI, in essence, can oversee the government's AI? We might bring back things like the legislative veto, where, for example, if the executive declares an emergency to try to exercise some abuse of power, Congress could say, no, we are undeclaring that emergency. Right now, the Supreme Court has deemed that unconstitutional, but we could amend the Constitution to allow Congress to do that, such that Congress can exercise oversight. What you want to do is create channels for political impulses to flow. And so I think there are all kinds of interesting mechanisms we can design there. And what we need to do right now, I think, is approach this problem in the spirit of experimentation.

Because there's much more that we don't know than what we know.

The first draft of this transcript was created by Descript, an AI transcription tool. An On Point producer then thoroughly reviewed, corrected, and reformatted the transcript before publication. The use of this AI tool creates the capacity to provide these transcripts.

This program aired on April 27, 2026.

Willis Ryder Arnold is a producer at On Point.

Meghna Chakrabarti is the host of On Point.
