Who is Sam Altman?

Sam Altman, CEO of OpenAI, at Station F, during an event on the sidelines of the Artificial Intelligence Action Summit in Paris, Tuesday, Feb. 11, 2025. (AP Photo/Aurelien Morissard, Pool)

Sam Altman has been called the face of Artificial Intelligence. To many, he remains an enigma. Wall Street Journal reporter Keach Hagey has the inside story on the rise of Sam Altman and his impact on our future.

Guests

Keach Hagey, reporter covering the intersection of media and technology at The Wall Street Journal. Author of "The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future."

Transcript

Part I

DEBORAH BECKER: There's a lot of interest in the executives leading technology companies. Some call them tech bros. Others describe them as a new tech aristocracy. Among them is the CEO of OpenAI, Sam Altman, known primarily for the chatbot ChatGPT. Altman has not only worked to develop and improve artificial intelligence, he's also spent a lot of time thinking about the risks involved and how AI might be harnessed. Here's Altman describing where he thinks AI is headed.

SAM ALTMAN: The world that, you know, kids that are about to be born, will, the only world they will know is a world with AI in it.

And that'll be natural. And of course it's smarter than us. Of course it can do things we can't, but also who really cares? So I think it's only weird for us in this one transition time.

BECKER: ChatGPT was developed by Altman's company OpenAI, and just five days after its public release in 2022, ChatGPT garnered 1 million users, making it one of the most successful product launches of our lifetime. As of March of this year, about half a billion people use ChatGPT every week, with about 20 million paid subscribers, and that's according to Forbes.

Just three years ago, not many people knew who Sam Altman was. Now many of us have at least heard the name of this 40-year-old executive. Altman's meteoric rise to the top of the tech mountain has been unusual and at times controversial.

He's had nasty public arguments with some of the other tech titans, and he's ruffled enough feathers to get fired as CEO of his own company and then quickly reinstated. Our guest today knows a lot about Sam Altman and the AI ecosystem. Keach Hagey is a reporter covering the intersection of media and technology at the Wall Street Journal.

Her new book is titled The Optimist: Sam Altman, OpenAI, and The Race To Invent The Future. Keach, welcome to On Point.

KEACH HAGEY: Great to be here, Deborah.

BECKER: So let's start with the title of your book and the subject of your book. Who, if you could give us sort of the headline, who is Sam Altman? How would you describe him and why optimist?

HAGEY: So Sam Altman is the essential Silicon Valley figure. Before he was the CEO of OpenAI, he was the president of Y Combinator, which is a startup accelerator that is really at the very heart of Silicon Valley culture. And he proved himself to be a very skilled fundraiser, an incredible salesman, a great storyteller, and a futurist who could convince other people that he could see the future.

And it was out of Y Combinator that OpenAI grew, and he was able to bring in investment and kind of create a company that challenged the big tech companies, like Microsoft, which eventually became its partner, and Google, in this AI race.

BECKER: And really this was a network of folks that Altman got involved with.

And your book really talks about how important that network is, that network of Silicon Valley entrepreneurs.

HAGEY: Yes, Sam Altman has an incredible Rolodex of contacts that he's been building since he was really a teenager at his first startup. Even when he was working at his first startup, he was taking meetings with other startup founders.

This all really happened through the Y Combinator network. He would give advice and do favors for people and built this sort of enormous mass of power that he was then able to leverage and start OpenAI.

BECKER: Now you did say at first, Sam Altman, I just wanna get this on the table before we really dig in here.

At first, Sam Altman did not wanna talk with you for your book, but eventually he did agree. What were first his reasons for not wanting to speak with you, and why did he change his mind?

HAGEY: Yeah, Sam was not thrilled about the idea of this book project at all. Initially, he said that it was too soon.

That was one of the reasons. He's still a young man and he has big ambitions for OpenAI, as well as other investments, such as in things like nuclear fusion. And I think he really wanted to see those things come to pass so one could write about them. So he feels like the story's still very much in the middle, and he had trouble with the idea of it being about him.

Maybe a book about the company OpenAI he would've been okay with, but he did not like the idea of a traditional biography, which this mostly is.

BECKER: And so then what made him come around?

HAGEY: As I wrote in the book, this is my second book, and my first book was about Sumner Redstone, the media mogul, who was at the very end of his life, and he was in something close to a vegetative state the entire time I was writing the book.

So I'm pretty used to writing about figures without their consent.

BECKER: Basically. I don't need you, Sam. I'm gonna write it anyway.

HAGEY: Yeah. So I just kept making calls, for many months. And I think he finally came around.

BECKER: And were there certain things, did you have an agreement?

Were there certain things that were off limits or could you write whatever you chose?

HAGEY: No, this is an independent work of journalism. He knew that from the outset. I didn't ask for his permission, and I just tried to be an objective journalist.

BECKER: So let's talk about some of the things you write about.

Let's start at the beginning. He grew up in the Midwest. He got his first computer at eight. Eight years old, went to an exclusive high school. What do you think are some of the things from his childhood that really shaped the person that he's become?

HAGEY: So Sam grew up pretty comfortable. He grew up in this very comfortable suburb of St. Louis called Clayton. And as you said, went to a prestigious private school and I think that gave him an enormous amount of confidence. He was always the smartest person in his class and very charismatic. Even in high school, he was drawn to tech. But one of his teachers that I spoke with said that, oh, Sam, don't go into tech.

You're too personable, because he was also interested in all these other things. So he always had a bit of a politician's ability with him.

BECKER: Which has served him well.

HAGEY: Absolutely. And one really key moment from his high school experience was he was gay, and he came out as a teenager, which was not easy in conservative Missouri in the late '90s.

And there was this moment where he had to stand before the student body after a sort of, what he perceived as like a homophobic event had happened, and kind of stood his ground and said, we will not tolerate this. And others perceived him as being very brave and a leader for doing that.

And I do think it galvanized his leadership abilities.

BECKER: And so where'd that confidence come from, do you think?

HAGEY:  I searched and searched for it, and some of it is just innate. His mom said that he was just born an adult, and that she could have dropped him in New York City at the age of 10 and he would've figured out how the city worked.

He was able to use the VCR at a very young age. These difficult things. So some of it is just innate. I do think a lot of it comes from, he was one of four siblings. He grew up in a loving family, in a safe neighborhood. And I do think that gave him a lot of confidence.

But some of it is just he has a very unique quality where he is less afraid of things than other people.

BECKER: Does that make him reckless?

HAGEY: Some say yes. He's talked about how he doesn't have the ability to feel fear in the same way that others do. And his critics would certainly say that it does.

BECKER: He says he doesn't feel fear, but he certainly has expressed some pretty fearful concerns about the potential of AI if it's not properly harnessed. So there is some fear there.

HAGEY: Yes, he does have like a philosophical appreciation for the potential downsides of AI.

And early on, when OpenAI was created, and even after ChatGPT came out, he went around warning people that if this technology goes wrong, it could go very wrong. But when I'm talking about fear, I'm really talking about the investor's fear, the poker player's fear. He's able to take very big, risky bets that other people would be scared by.

One of his associates told me that Sam's the only person who, if there's even just a 1% chance that something is gonna work, but the upside is enormous, will take that bet. And other people, I think, wouldn't.

BECKER: Let's talk a little bit more about his background here.

Went to Stanford, but he didn't finish because he got some offers from tech companies, and he began his career instead. But Stanford was really important for him. Can you talk a little bit about that start for him?

HAGEY: Sam dreamed of studying computer science at Stanford his whole life.

This was his absolute goal. But he only got through two years of it. While he was at Stanford, though, he met his co-founders for this startup idea called Loopt, which was like a friend finder mapping app for the flip phone age. And it was based on this idea that cell phones were going to become location aware, through GPS technology and other things very soon.

And he just wanted to figure out something they could do to use this new technological ability. And the idea was good enough that they pitched it at a local business competition for the students. And someone from NEA, one of the biggest and most venerable venture capital firms, heard it and thought, this is a real business.

You should really do this. He ended up inviting him in and introducing him to folks at Sprint, which was kind of one of the first meetings he had that made him realize, oh, this is really going to be a business.

He was able to acquire venture capital backing as a sophomore in college and went to Y Combinator over that summer after sophomore year, which is a startup accelerator.

And came out the beginning of junior year and realized, I'm going to drop out of school, I'm going to do this thing for real.

BECKER: And that was really the beginning of him getting into this world where he could network and learn about other opportunities and really find mentors. Who do you think from this time in his life was his most significant mentor?

And what did he get from that experience?

HAGEY: Unquestionably, it was the Y Combinator founder Paul Graham, who was a sort of guru, a philosopher-king of the tech world at that time. He'd been writing these essays online that served as a beacon for young, mostly guys who were interested in technology.

And he had this idea that, hey, what if we just got a bunch of kids together, gave them a little bit of money, not too much, for a summer, and instead of a summer internship we'll invest like venture capitalists. And at the end, kind of send them off into the world to try to raise real money. And Sam was in the inaugural class of Y Combinator, the inaugural batch, as they call it. And there he met his people, and Paul Graham was really his ultimate person. Paul Graham describes the first meeting, where they had a brief 25-minute interview, and he thought, okay, this is what a young Bill Gates was like.

BECKER: But Loopt ultimately didn't succeed. Was there a lesson also there for Sam Altman?

HAGEY: Absolutely. So in the end, Loopt was solving a problem that people didn't really have. And one aspect of it that many people working on it saw in retrospect was people thought it was creepy.

They didn't really want to be tracked and have people see their location as a blinking dot all the time. Maybe they wanted to know where their friends were, but they didn't really necessarily want their friends to know where they were. So I think there was sort of a basic misunderstanding of human behavior.

They were so enthralled by what the technology could do. They didn't stop to think about what humans really needed. So I think that's one key lesson.

Part II

BECKER: Now, Altman, we should say, had co-founded OpenAI with Elon Musk, initially as a nonprofit research lab. And the aim here was to create artificial general intelligence, or AGI, and that apparently is AI that outperforms humans. Now, this is something that Sam Altman has been interested in for quite some time here.

He is speaking to Bloomberg in 2018.

ALTMAN: What does it mean to build something that is more capable than ourselves? What does that say about our humanity? What's that world going to look like? What's our place in that world? How is that going to be equitably shared? How do we make sure that it's not like a handful of people in San Francisco making decisions and reaping all the benefits?

Like I think we have an opportunity that comes along only every couple of centuries to redo the socioeconomic contract and how we include everybody in that and make everybody a winner. And how we don't destroy ourselves in the process is a huge question.

BECKER: And there are numerous concerns about the power of AI.

Gary Marcus is a cognitive scientist and professor emeritus at New York University. Here he is at a Senate hearing in 2023 talking about artificial intelligence, and Sam Altman at the time was sitting right next to him.

GARY MARCUS: Fundamentally, these new systems are going to be destabilizing. They can and will create persuasive lies at a scale humanity has never seen before.

Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened. Chatbots will also clandestinely shape our opinions, potentially exceeding what social media can do. Choices about data sets that AI companies use will have enormous, unseen influence.

Those who choose the data will make the rules shaping society in subtle but powerful ways.

BECKER: Keach Hagey, in your book about Sam Altman, and you've taken up some of these questions, this potential destabilizing force of artificial intelligence and also Altman's vision of this is going to be inclusive and improve the world.

How does Altman propose balancing these things?

HAGEY: Earlier on, he was really interested in universal basic income and actually funded some studies into how well it works. And in some essays, he suggested that this might be one way that we can make the world more inclusive after AGI destabilizes everything.

In more recent years, we haven't heard as much about that. And now Sam is more likely to say that the way to broadly distribute the benefits of AI is by just making ChatGPT relatively low cost, at $20 a month, or sometimes free. Which is a very different kind of equitable distribution.

As far as the more terrifying prospect of, will the robots take over and kill us? They have an AI safety division; it's still there, and it's still a core part of what OpenAI does. But after his blip moment, when he was nearly fired, you've seen some of those forces in retreat at the company.

BECKER: Let's talk about that blip moment. Why don't you remind us of that blip moment at OpenAI, and then we will also get into the fight with Elon Musk, a very public, ugly fight with Elon Musk. But first, Altman was almost ousted by his board at OpenAI. Tell us what happened.

HAGEY: So it was one of the craziest business stories I've ever covered in my career. Especially because I was writing a book about Sam Altman. I looked down at my phone and there's a headline that he'd been fired. And OpenAI has a very unique structure in that it is a nonprofit company that sits atop a for-profit company.

And the for-profit company has taken a huge investment from Microsoft, with its valuation ballooning every few months. But who's really in charge is this nonprofit board, and it was that nonprofit board that suddenly, stealthily fired him one day in November of 2023. And it was really a result of a confluence of factors.

There had been a long struggle at the board over who was going to be on the board, in sort of different factions. There was Ilya Sutskever, who was also a co-founder of the company and its chief scientist, who was on the board. He had lost faith in Sam over some sort of management issues. And there were folks on the board also who believed they had watched Sam be deceptive to them about some safety issues.

Some of them were small. They felt that Sam was being deliberately deceptive and trying to lie about whether something had gone through safety review. And while the stakes were small then, you could see, as ChatGPT was getting smarter and smarter through GPT-3.5 and GPT-4, that the stakes were about to become very grave.

And through a number of meetings, they all got together and were fed some new information from Mira Murati, who was the CTO at the time, about Sam's management failings. And they met, and while they were talking, they caught Sam in a lie about one of the board members.

Helen Toner had written a paper that appeared to criticize OpenAI's safety record compared to other companies. And Sam basically tried to get her kicked off the board for it. They came to an agreement. But in the course of all this, there was a little game of high school lunchroom telephone, where, you know, one of the board members said Sam said that this other person said that Helen should be off the board.

And that person hadn't said that. And they realized that he was lying.

And he has since apologized for this lie and said it's true that he did not tell the truth about this. But because they caught him in a lie in real time, while they were talking about whether he was really the right person to lead the company, they decided to fire him.

BECKER: But — but then they didn't.

HAGEY: They did. They did fire him.

So they did fire him.

BECKER: For five days, right?

HAGEY: Yeah, five days. Yeah.

BECKER: Okay. Yeah. But then he came back. Tell us how he came back.

HAGEY: Yeah, so the thing is that he still had a lot of power, even if not on paper, and all of the investors and the employees and Microsoft, they all had a lot at stake in him running this company.

Not least because there was a tender offer, which is a way for the employees to basically liquidate some of their shares and get money, that Sam was putting together. Because he's the master fundraiser. And that was clearly going to die if Sam was out. And after a few days, the employees of the company basically mutinied and threatened to quit and go to Microsoft unless they brought Sam back. And so the board buckled, and Sam came back.

BECKER: Why do you think these employees weren't as concerned about these deceptions? Were they too small for them to worry about and they needed to focus on the bigger picture of the fundraising and the other things that Altman brought to the company?

Or what do you think was their thinking? Clearly, there were people having issues with him. So why was there that reversal, do you think?

HAGEY: They had no idea. At the time, I think a big part of the reason why the firing was reversed was that the board, when they fired him, just said Sam was not consistently candid with the board, which is a polite way of saying he lied to the board.

But they never said about what. And so for a really long time, no one really knew about what. Part of what I tried to uncover in this book was: What did he lie about? The people in the company really didn't know what the board's reasons were. They just knew that the board wasn't saying, and it wouldn't answer their questions about why.

And that was a huge part of the pushback.

BECKER: Another person who questioned Sam Altman's credibility is Elon Musk. And there's some big public nasty questioning that's gone on now. These two co-founded OpenAI together. Musk gave a lot of money for this company to go forward.

In 2023, we heard Elon Musk talking about his new feelings about Sam Altman. And he was particularly concerned about what was happening at OpenAI as it attempted to become for-profit, or partly for-profit and partly non-profit. This is what he said to CNBC in 2023.

ELON MUSK: It does seem weird that something can be a non-profit, open source and somehow transform itself into a for-profit, closed source.

This would be like, let's say you funded an organization to save the Amazon Rainforest and instead they became a lumber company and chopped down the forest and sold it for money.

BECKER: So I wonder, Keach, what do you think? Can you tell us a little bit about this relationship with Elon Musk and then the falling out?

HAGEY: Yeah. Sam and Elon got together over a series of dinners in 2015, the year OpenAI was created. And it was really the fear of Google that brought them together. The fear that Google was developing AI, and that AI would be locked inside this for-profit corporation that was already too powerful.

And that humanity would have no control really of how AI would go, and the profit motive would be driving it. And in fact, their first collaboration was on a letter to the Obama administration saying, please regulate AI, which is a funny place to start. But in the course of these talks, that you can see now in the emails that have come out in the lawsuit, that Sam suggests, Hey, why don't we start our own lab?

It could be non-profit. I'll throw in some Y Combinator stuff. You can bring some stuff. And we'll be a counterweight to Google. And this nonprofit-ness was to try to push back against their fear of the profit motive. And that was all fine until they made some technological breakthroughs that made it clear they were going to need a lot more money than they thought.

Because what really worked, the kind of AI that ended up really working required huge amounts of data and huge amounts of computing power, and they really weren't going to be able to just raise that from donations from people. And there was a big power struggle at the company around this time.

This is like 2017, 2018. And at one point, even Elon Musk suggested, okay, maybe it needs to be for-profit and become part of Tesla. And he wanted to basically control it. He wanted to be the biggest shareholder. He wanted to be the leader of it. And the other co-founders didn't want that.

Not just Sam, but also Ilya Sutskever, and Greg Brockman, another key co-founder from the beginning. And Sam won the power struggle, and Elon basically took his ball and went home. And that was in 2018. He left the board. He had promised $1 billion, but over the years he actually gave about $50 million.

He stopped funding them, and it was pretty quiet for many years, until after ChatGPT came out. And then the success of ChatGPT prompted this public questioning of, what are you talking about? Is it really okay that you turned this thing into essentially a for-profit enterprise, against our initial mission?

And around that same time, Elon Musk, of course, founded his own AI company, X.AI. That's a competitor to OpenAI. And it was in the cauldron of all that then that the lawsuit started to fly.

BECKER: So what would Altman say, though? I get it. He needed more money than he thought he would, so perhaps he did have to partner with a for-profit entity to be able to go forward. But what about this idea: Is there some sort of inherent conflict in going with a for-profit source for what was supposed to be this research lab, developing a technology that's going to be world changing and very lucrative, right?

HAGEY: Yeah, Sam acknowledges that it is an awkward transition, and their explanation is exactly what you said. We didn't foresee that we'd need this much money. The only way to get this much money is to go for-profit. So this is just what we had to do in order to actually bring forth AGI, which is our goal.

There have been some accusations in the lawsuit that the nonprofit idea was some kind of a ruse. And just from my reporting for the book, I don't think it was a ruse. I think it was something that, in retrospect, they regret. They haven't been able to get out of it; they've been trying to basically wiggle out of it and have been blocked so far by the attorneys general of California and Delaware, among other entities.

I think it was just a weird idea that in retrospect, they regret.

BECKER: I wonder, Keach, could you say a little bit more about this idea of having a lab? It sounds like an interesting idea to explore where AI is going. Is there anyone else doing this type of thing?

This sort of really exploring all of the issues with AI and using resources devoted to that? Or was that something that was pretty exclusive to OpenAI?

HAGEY: It was happening inside Google DeepMind. DeepMind was an AI startup that Google purchased shortly after it had this extraordinary demo of playing an old Atari game.

You could watch the bot teach itself how to play Atari games without being shown how. So it was the DeepMind knowledge inside Google that OpenAI was founded to counter, and that's still going on. Google's still a major player. Of course you have Elon's X.AI, and there was a faction of folks inside OpenAI who broke away to create Anthropic, which actually has a very similar business model to OpenAI.

Although they've branded themselves as safer and more careful. So there are quite a few players in this space.

BECKER: A lot of competition, a lot of money. Really world changing technology here. And as you mentioned, Sam Altman has had some difficulties in terms of deceptions, perhaps some recklessness.

Do you think he can be trusted?

HAGEY: I think that there are people who have dealt with him who feel betrayed by him, and these are personal relationships. In my personal experience, in my interactions with him, I feel like he's been a trustworthy person.

The question, I think, really is: Can capitalism be trusted with this? Right now, OpenAI remains a private company, but down the road, they've raised so much money, there has to be some exit, right? If you raise money from investors, you basically have to go public at some point. And I think it's a huge question.

What would a public company look like with so much power over the future of AI, if OpenAI does indeed remain at the forefront as it is today? Because those quarterly earnings calls and incentives are pretty challenging for this very, very long-term technology.

BECKER: How would you describe his overall vision, Sam Altman's overall vision right now?

HAGEY: He has a very broad vision that extends beyond just developing AI. He's described a world in which AI flows like electricity or like water, and a lot of his emphasis right now is about building out the infrastructure to make this possible because he's been quite candid that really AI is not going to work or take over until it's very cheap.

And right now, it's not cheap. Like none of these AI companies make money, right? They're all losing tons of money because it's extremely expensive to do what they're doing. And he's been really working on the world stage with President Trump, and he's been doing a deal in Abu Dhabi.

To invest in huge data centers and chips to basically drive down the cost of AI so that it can be basically like water or electricity.

Part III

BECKER: What do you think is his reputation in the tech world? How do other key leaders view him?

HAGEY: They see him as a fixer, as someone you can reach out to, who will solve your problem or do a favor for you or help you get an investor. Everyone has their little story of how Sam has helped them out, or in some cases gone to war against them and pulled strings and made things difficult.

He often was in that role as the president of Y Combinator; he had to be a cop, as he put it, in the world between the venture capitalists and the startup founders. So he has personal dealings with many people. He responds to texts almost immediately. He's very accessible; many people have his cell phone number. He has many relationships across the industry, and people do fear him a little bit, or sometimes a lot, because he has this ability to make their lives difficult if he wanted to.

BECKER: Does he believe that the potential of AI can be very frightening if it's not regulated properly? How does he describe that? There's been a lot of talk about that.

HAGEY: Yes. He has even testified before Congress and said that if this technology goes wrong, it could go very wrong. Although we've seen him pivot on that a little bit in recent years.

So back then, right after ChatGPT came out, when I think the world was experiencing this sort of broad, collective belief that maybe AGI could actually be something that is imminent. He very much was asking for there to be more regulation, asking for there to be something like an IAEA of AI, some global entity that would police and patrol and make sure that it doesn't do unsafe things.

More recently, you've seen him say that we don't really need regulation, that the industry can self-regulate. And now that there's more money at stake, it's understandable, maybe why he would say that. So his message has changed a little bit.

BECKER: Yeah. We have a clip of him speaking in May to a Senate Commerce Committee hearing about AI.

And it's, I guess, I would call it moderate. Here's what he said. Let's listen.

ALTMAN: To continue that leadership position and the influence that comes with that, all of the incredible benefits of the world using American technology products and services. The things that my colleagues have spoken about here, the need to win in infrastructure, sensible regulation that does not slow us down.

The sort of spirit of innovation and entrepreneurship that I think is a uniquely American thing in the world. None of this is rocket science. We just need to keep doing the things that have worked for so long and not make a silly mistake.

BECKER: Not make a silly mistake. And none of this is rocket science.

It doesn't sound as if he's very worried. Would you agree?

HAGEY: Yeah, I do think that after his firing, the talk about how scary AI might be one day really went away.

And in part, that's because the faction that came after him was tied to this philosophy called effective altruism, which has made the fear of existential risk from AI the centerpiece of its work, or one of them. And I think he agreed with them many years ago. He was friendly with them. He read books that overlapped with their ideas in the early years of OpenAI. And after he got fired, you just don't hear that much about that anymore. And a lot of the EA-related people left the company.

BECKER: So the idea of it, because we've heard about effective altruism from others in the tech world, the crypto world, this has been a philosophy in this world. Just explain it a little bit more so people understand what it is.

HAGEY: Yeah, so effective altruism is basically a data-driven cousin of utilitarianism that tries to use rationality, data, and logic to decide how to do the most good and be the most ethical, but be dispassionate about what that is.

Very famously, they have this whole idea of earning to give, meaning that it makes more sense for a young, idealistic person to go join a hedge fund and donate money to charities rather than go be a doctor in Africa and try to save starving people. Because you could just save more lives, in a spreadsheet, basically, with your hedge fund money.

BECKER: And of course, can I just say, is that a justification for being greedy?

Sorry.

HAGEY: Some would say that, and of course the most famous person who did this was Sam Bankman-Fried. And the whole FTX company was built by these EAs for this purpose. And that all crumbled in spectacular fashion and I think really tarnished the brand of EA.

So it's fascinating. You'll rarely find someone who will say, oh yes, I'm an EA. They'll say, oh, I believe in EA ideas, or, yes, they have a lot of good ideas, but I'm not one. I think that our friend SBF has a lot to do with that. But they're still incredibly influential, and there is a ton of money behind it.

All these nonprofit organizations and just tens of millions of dollars, hundreds of millions of dollars that are funding a whole network of entities that sort of push forward this idea. And one of the ideas in EA is that it is our moral responsibility to save lives of people who have not yet been born by making sure that AI does not wipe us out.

BECKER: And this was a philosophy of Sam Altman, but he seems to have toned it down after some bitter experiences with the folks that he worked with.

HAGEY: Yeah. I would never describe him as an EA, but you could hear him doing some of the EA dog whistles in his blog posts from 10 years ago.

Because interestingly, the people who were most interested in AI were often the people who were most concerned about its existential risks in the early days. So there was this book by a Swedish philosopher, Nick Bostrom, called Superintelligence, published in 2014, that both Elon Musk and Sam Altman read and posted about.

And this was the intellectual basis that OpenAI grew out of. And in the book, he talks about the possibility of AGI, but also the real risks that it poses to humanity. So both the enormous upside and the enormous downside.

BECKER: So let's talk about political power here. It's my understanding that Sam Altman was at President Trump's inauguration back in January.

Not right up front with some of the other tech leaders, but there, nonetheless. You mentioned some of the deals he's making with President Trump. Of course, he's had a past relationship with Elon Musk. Does he have political ambitions and how does the tech power translate into political power for Sam Altman?

HAGEY: Yes, he does have political ambitions, and he's had them his whole life. He explored running for Governor of California and he even talked to some friends about wanting to run for president around that time. And he's always been very politically engaged. He has his own political platform that he put online when he was still at Y Combinator.

And I think it's interesting to see how OpenAI has actually gotten him into the room where it happened in a way that his more direct political efforts did not. After ChatGPT was launched, he went on this world tour shaking hands and taking photos with presidents and prime ministers all around the world.

And he could not have been more in the room where it happened. And today we're seeing him through these deals with President Trump, even though, historically, he has not been a pro-Trump person at all. He was very critical of Trump around the time of his first election. But on the first full day of the Trump presidency, there was Sam Altman at the podium with President Trump, announcing the humongous $500 billion Stargate AI infrastructure plan that would, again, bring this idea of AI running like electricity or water to fruition.

BECKER: And so then how are the conversations about regulating AI affected here and what's happening in terms of Sam Altman's role in that area?

HAGEY: It's been really fascinating to see this, OpenAI just recently did a deal to basically bring AI infrastructure to Abu Dhabi. And during the Biden administration, that was something that Sam had been trying to push for, and the Biden administration just thought it wasn't safe to bring the most advanced kind of chips to the Middle East because of the historic relationships with China.

There was basically a fear that the Chinese would get these chips if they were brought there, and the Trump administration just greenlit it all. So we see a more relaxed approach to chips. We also see, in the Big Beautiful Bill, a provision from the House side to put a 10-year moratorium on states doing AI regulation, which is an extraordinary giveaway to the tech industry.

I've really never quite seen anything like it. We'll see if it actually ends up making it through the Senate. But that's something that would've been unthinkable in the previous administration.

BECKER: I hadn't heard that before. Didn't realize it was in the bill. So I guess what do you think is next for him?

Do you think that he'll continue along this same path, working to make sure that OpenAI gets as big a slice of the AI pie as it can, and that he'll really be a big force?

HAGEY: Yes, I do think that we will see more and more connection between OpenAI and government.

They just announced their first Defense Department contract, which is a pretty wild thing for so young a Silicon Valley company to do. And more and more, this is gonna be a government story, because this is really about infrastructure. And it's so expensive that at some level there needs to be government-level policy behind it.

So I think he'll be spending more and more time doing that kind of work, and he's already seen extraordinary success in matching his deal making skills with the sort of deal making ethos of President Trump.

BECKER: And of course, this also goes back to his background in terms of a public private partnership, right?

This is what his father did, and government needs to be involved in things that are going to be this transformative, right?

HAGEY: Yeah. I really do see the shadow of his late father in this work. His late father was ultimately a real estate developer, but worked in affordable housing for many years and was constantly pioneering these ways for the private and public sectors to work together.

Both, as an ingenious banking mind and as a really idealistic person. So I do, I feel that has imprinted on Sam and he's always trying to cook up some new way for the public and private to work together. And he has wanted from almost the beginning of OpenAI for the government to be the backer.

They went to the government in the early years, when they were casting about for more backers, and asked the government to back them. At that point, the government said no; they didn't really have technology to show for it yet. So he doesn't blame the government for it, but I do think that's the long-term vision here.

BECKER: And what's the lesson, do you think? The lesson of Sam Altman, the lesson of OpenAI? What does this say about where we are in terms of a tech world, a tech aristocracy, as some folks say? Because reading your book, you gave us a look inside this world of very privileged, smart, young, ambitious men who are really running things. And it's a pretty exclusive club.

HAGEY: Yeah. I think one lesson I took away from this book is that the AI era, yes, will be defined by brilliant AI scientists. But Sam is not an AI scientist. He is a money guy. He's a fundraiser.

He's an investor, he's a venture capitalist. He's a storyteller and a salesman. And the form of AI that has proven to be the breakthrough of our time is one that requires enormous piles of money, and he is the man for that moment.

BECKER: But does it run the risk of really benefiting a few and not benefiting all, as Sam Altman has said it should? What can be done to make sure that it's used properly?

HAGEY: I think it's an excellent question. And I am personally troubled by how the talk of sharing it equitably has fallen away as the truth of the technology has emerged. I think it really does threaten to concentrate wealth even more than it already has. And right now, even as everyone from the Catholic Church to governments has wrung their hands about the labor implications, I don't see any signs of us having the tools to stop it from having really disruptive effects on the labor market.

BECKER: Even though we have all these deals with government and we're working with government, it just seems like it would be very difficult to try to rein this in.

HAGEY: Absolutely. I think that is part of the logic of this. OpenAI was founded with the idea that they didn't want to create an arms race, an AI arms race, but when they released ChatGPT, that's exactly what they did. They forced all the other companies that were holding back to step forward and release their own.

Google had this technology, but it did not think it was a good idea to release it. And now we are in a world where they're all furiously competing with each other. We just saw Meta try to poach all the other companies' AI talent recently.

And the numbers there are just extraordinary, right? There's an absolute arms race, both for talent and for chips and all of these things. Those pressures, I feel, are more overwhelming than any kind of brakes, over labor concerns or environmental concerns, that could be placed on it.

BECKER: Do you know if Sam Altman's read your book?

HAGEY: He told me he wasn't gonna read it. And I understand that, right? It's hard to watch clips of TV things that I've been on. So yeah, I expect he won't.

BECKER: So no reaction from the Sam Altman camp about your book just yet?

HAGEY: He did tweet early on, publicly telling the world that he participated in this book, and that it was one of two books he participated in. He gave it his blessing in that way.

The first draft of this transcript was created by Descript, an AI transcription tool. An On Point producer then thoroughly reviewed, corrected, and reformatted the transcript before publication. The use of this AI tool creates the capacity to provide these transcripts.

This program aired on June 25, 2025.

Jonathan Chang Producer/Director, On Point

Jonathan was a producer/director at On Point.

Deborah Becker Host/Reporter

Deborah Becker is a senior correspondent and host at WBUR. Her reporting focuses on mental health, criminal justice and education.
