Who are the Zizians?

A cult-like group referred to as the Zizians is linked to a string of violent deaths across the U.S. It has its roots in Silicon Valley – and a movement called Rationalism. Who are these groups and what do they believe?
Guests
Max Read, author of the Substack newsletter “Read Max.”
Also Featured
Sonia Joseph, AI researcher who spent time with Rationalists in Silicon Valley and Cambridge, Massachusetts.
Transcript
Part I
DEBORAH BECKER: A cult-like group known as the Zizians is believed to be behind a string of violent deaths across the United States. On New Year's Eve in 2022, Rita and Richard Zajko were shot and killed in their home in suburban Philadelphia.
According to court documents, a Ring camera captured audio of what sounded like shouting of "Mom!" and then, "Oh my God." The couple's daughter Michelle was questioned in the homicide, but never charged.
BECKER: Michelle is linked to the Zizians. She was later arrested with the founder of the group. Earlier in November of 2022 in California, a landlord named Curtis Lind was stabbed in the chest with a sword.
His friend, Patrick McMillan, told authorities that Lind was stabbed in a dispute over unpaid rent.
NEWS BRIEF: McMillan says his 80-year-old landlord had been brutally attacked by other tenants of the Vallejo property, who Lind was in the process of evicting because they hadn't paid rent in years.
BECKER: Now, Lind survived that attack, but he was stabbed again in January of this year, this time to death.
Also, in January, a border patrol agent in Vermont was killed in a shootout during a traffic stop involving two people.
NEWS BRIEF: Authorities say the agent was killed in the line of duty yesterday on Interstate 91 in Coventry. As you see here, it's about 20 miles south of the Canadian border. The FBI says one suspect in the shooting was also killed, and a second suspect, a U.S. citizen, was injured and taken into custody.
BECKER: The people allegedly involved in all of this violence have ties to a transgender woman blogger who goes by the name of Ziz. The group appears to have been nicknamed the Zizians by an anonymous blogger. The Zizians have roots in Silicon Valley AI research and a community known as the Rationalists. So who are the Zizians?
What do they believe, and how are they connected to rationalists in the world of AI research? Max Read joins us now to answer some of those questions. He's a journalist and author of the Substack newsletter Read Max. Max, welcome back to On Point.
MAX READ: Hi, Deborah. How are you?
BECKER: I'm okay. Let's start with the Zizians, the group, and we should say they may not call themselves that, and the group's founder, known as Ziz.
Tell us a little more.
READ: Ziz comes from Alaska. She went to school at Fairbanks and moved to the Bay Area in 2016. She wanted to find a job in the tech industry and to become involved in what's known as AI safety or AI alignment. We've probably all heard these terms more recently because of how big the AI industry has become, but at the time it was a much more niche concern.
The main idea of which was that as we're creating more and more powerful AI systems, we might one day create a system so powerful, so intelligent, so conscious that we have an obligation to ensure that it's aligned with human values, so to speak. And this was at the time, and still is, a main concern of what's known as the rationalist movement, or what tends to call itself the rationalist movement.
And she became involved with a bunch of people who were interested in similar ideas. She would go to lectures, attend workshops, talk to people. At some point, she splits away from the rationalists. Essentially, she starts to believe that they're not taking seriously enough the sort of AI alignment problem, that there are PR issues involved with some of the people leading the movement.
And she first gets on a lot of people's radar when she protests a gathering of rationalists with a bunch of her friends who we would call the Zizians, though I don't think they would call themselves the Zizians. They're all wearing Guy Fawkes masks in the manner of Anonymous and handing out flyers instructing people to spurn the rationalist leadership.
They get arrested. They post bail. This is just before the pandemic. So the courts are moving extremely slowly and they disappear, so to speak. There's not a lot of records about what's happening for the next few years. Then we get to this altercation in Vallejo where the 80-year-old landlord who owned a kind of a piece of property where he was renting out space for people to park RVs.
A lot of artists and programmers. He had finally confronted a group of people who were living on this property who hadn't paid rent in years, and they end up stabbing him, apparently from behind, with a samurai sword. He shoots one of the people, a woman named Emma Borhanian, who dies. Another, the person who goes by the name Somni, who actually stabbed Lind, survived. As it turns out, Ziz, who nobody had really heard of for a while and whose lawyer had previously said she might in fact be dead, was still alive and present at the scene. The cops, for whatever reason, don't actually arrest Ziz.
They drop her off at the hospital, because she says she's in pain, and she disappears again, only to pop up a few weeks later in Pennsylvania, where she is arrested for obstruction of justice and disorderly conduct in connection with the homicides of the Zajkos, whose daughter Michelle, who went by Plum online, is a friend of Ziz.
Ziz gets released again and is largely unheard of again. Then, as we've been hearing, in January, Curtis Lind is stabbed and killed, allegedly by a data scientist named Maximilian Snyder, who, as it turns out, had previously applied for a marriage license with a woman named Teresa Youngblut, who happened to be one of the people involved in this shootout in Vermont.
So at this point, there's enough sort of evidence of Ziz's involvement in a number of these crimes for her to be arrested, at which point she is and remains in jail as of this moment.
BECKER: You need a chart here to try to keep track.
READ: It's very much a cork board and red thread case.
BECKER: Trying to keep track of everybody. But I guess, what is really the big or main theme here connected to these killings? Why rationalists? If they are, in fact, supposed to use scientific thought to improve the world and to do good, how do we go from that to Ziz and a group that is clearly using violence? And how is that all interconnected?
Do you know?
READ: Yeah, this is a good and difficult question. It seems worth noting that the rationalist movement, so to speak, what is called rationalism in this context, in the Bay Area and around Silicon Valley, is maybe not identical to the rationalism of, say, René Descartes, or the philosophers who might have called themselves rationalists.
It's in some ways a sort of self-help movement, and in some ways a kind of gathering place for people with particular interest in AI and other kinds of big, long-term thinking. And a characteristic feature isn't simply using reason to live a better life or to figure out what a good politics might be, but to unburden yourself of the conventional wisdom or morality that is irrational and therefore holding you back from understanding the true best way to live. And I think that means, in practice, pursuing these very abstract philosophical games and experiments, and then trying to live your life in concert with the conclusions that you draw from those.
So maybe a less difficult, or less fraught, version of this is what eventually came to be called effective altruism, which is the philanthropic idea that the main goal, if you're giving money away, if you're donating to charity, is to save as many lives as possible. And if you crunch the numbers and look at all the different ways that you could spend it, it turns out that buying mosquito netting to prevent malaria in Sub-Saharan Africa is in fact, dollar for dollar, the best way to save lives.
BECKER: But I'm still, and I don't mean to interrupt you because we will get into this, but I still am trying to figure out what kind of extremes the Zizians, this sort of subsect of rationalism, have gone to first.
And why, what would've led them to do that?
READ: The answer to this is essentially veganism. There are three or four different ways we can approach it.
BECKER: Veganism?
READ: Yeah, I mean, think about how strongly a person might come to believe in the crime that is factory farming, or killing animals, if you believe that they are sentient and can feel pain.
And if you take this idea seriously, if you take it so seriously that you rearrange your life around it, you might begin to believe that you're justified in killing people in order to stop it, or that any lives you take in pursuit of your goal are collateral damage at best, in what is ultimately a righteous cause.
BECKER: I think it's a big jump from vegan to killing people. I don't know.
READ: I mean, I wouldn't want to defend the Zizians here.
BECKER: Okay. Okay. So as you said, Ziz took these ideas to the extreme, or at least that's the way she defended it among members of the Zizian group. But I wonder, were there other things that suggest perhaps that this went way beyond AI, that these ideas and the philosophies that she was espousing really were about more than artificial intelligence or the fate of humanity? It was really about the nature of human beings, right?
READ: Yeah. So Ziz had this idea that, and we want to preface all this: this is all happening on blogs. Ziz is writing these blogs that espouse these long philosophies that I think you or I, if we read them, would say, this sounds crazy.
And I'm about to say it, and I suspect you will think to yourself, this sounds crazy, but Ziz's idea is that everybody has two separate hemispheres in their brain, and these hemispheres contain different persons or personalities, almost like dissociative identity disorder. And these hemispheres can be separated and given independent awareness, so that they can each have their own sort of identity and personage. And you could do this by taking hallucinogenic drugs, or by experimenting with sleep deprivation.
And Ziz has this kind of moral hierarchy of people, where some people are good only in one brain, and by good, she means recognize the personhood of animals. Some people are in fact good in both brains, and it's extremely rare to be good in both brains. And Ziz is one of very few who is good in both brains.
BECKER: Of course. So yeah, as you said, Ziz is in prison. Is it still a group? Is it still a force? Are the Zizians still with us?
READ: As far as we know, not really. This was always a relatively small group of people, and many of them are now, frankly, dead or in prison. One of the interesting things about trying to track this story is that, for a group most of whose activity was online, the members, and Ziz herself, have managed to keep quite low profiles. So it's actually a little bit hard to know exactly where every single member is at any given moment, what they're up to, what they're doing. But it doesn't seem like there has been a replacement leader, let's say, or that other members of the Zizian group have decided to carry on with more criminal activity in the wake of Ziz's imprisonment.
Part II
BECKER: So we wanna take a minute now to take a closer look at who the rationalists are. One of the leading thinkers in rationalism is Eliezer Yudkowsky, who's known for his warnings about the dangers of AI. On the Robinson Erhardt podcast last month, Yudkowsky explained that he's worried that AI will kill all humans.
ELIEZER YUDKOWSKY: AI companies keep pushing and pushing on their AIs to get smarter and smarter. They get to something eventually that is smarter than us, that can kill us, that is motivated to kill us, not because it inherently wants us dead, but because in its best universe, where it gets the most of what it wants, all the atoms are being used for things that are not running humans.
BECKER: In fact, Yudkowsky is now so concerned about super intelligent AI making humans extinct, he says AI shouldn't be built at all. He recommends authorities take steps to restrict what are known as GPUs, graphics processing units, a technology that's used in training AI.
YUDKOWSKY: Have an international clamp down on the GPUs, not in any one country.
This is everyone's problem. The basic description I would give of the current scenario is: if anyone builds it, everyone dies. You need to not build it; you're not gonna solve the alignment problem in the next couple years.
BECKER: So Max Read, before the break, you were talking about rationalists and how the Zizians were really a subsect of rationalists.
Can you explain how this idea of an almost murderous artificial intelligence fits in with rationalist thought?
READ: Yeah, I think this is the most prominent of the philosophical experiments that rationalists like to run with themselves, and the experiment goes something like, AI is progressing.
It will eventually progress into a super intelligence. If that super intelligence doesn't properly share human values, it could accidentally or on purpose kill us all and destroy the entire world. And I recognize personally that there are a number of leaps of logic in that step-by-step description of what's happening.
And I think that rationalists would insist that I'm oversimplifying it, and they're right to some extent. But that is the basic direction there. And because the scale of the Armageddon being imagined is so huge, the quest to align AI, to ensure that it's safe, or, as Yudkowsky now believes, to not build AI at all, crowds out any other concern or any other idea of what needs to be done in the world.
BECKER: So how many rationalists are there? I've seen some suggestions that there are hundreds of thousands of people who are involved in this kind of thinking and considering the implications, and we should say not just of artificial intelligence, but other problems of the world as well.
READ: Yeah, hundreds of thousands is not a crazy number to put on it if you include rationalists in the broadest sense.
And I think there's a lot of people who are strongly influenced by rationalist thinking or by the rationalist movement, who absolutely don't agree with the kind of AI apocalypse scenario that Yudkowsky likes to put forward. And rationalism is not, it's not a church, it's not a membership organization.
It's a few websites and a few nonprofits that people gather around and there's meetups in cities all over the place. I think the sort of real hardcore of rationalism, which is mostly concentrated in the Bay Area, that attends the workshops and goes to the seminars and is deeply involved in debates on these websites, is much smaller than hundreds of thousands.
I would cap it at a couple thousand people at absolute most, and probably fewer.
BECKER: And yet we do know that big names in the AI community, Peter Thiel, Elon Musk, right? They've hired from within the rationalist community. They've spoken at rationalist events and done things like this.
Is rationalism almost a look at the psyche of Silicon Valley, or was it, could you call it that?
READ: Yeah, I think so. Maybe the most prominent example of the influence of rationalism is that OpenAI was started as a nonprofit intended to build an artificial intelligence that was aligned with human values, not as an explicitly rationalist project, but certainly as one in line with rationalist values.
And I think the story of OpenAI as it's progressed from this nonprofit to Sam Altman now trying to turn it into a profit making company much more similar to an Apple or a Facebook is a little bit reflective of the ambivalent attitude I think that many Silicon Valley titans have towards rationalism and towards AI.
You say, is it a look at the psyche? You know this sci-fi story that Yudkowsky is trying to tell about AI conquering the world. If you switched out a few proper nouns, there is also a sci-fi story about software conquering the world. There's also a sci-fi story about capitalism conquering the world, and to me, I think there's a kind of projection going on, that people are attracted to these stories because they can see a version of it happening somewhere, but have trouble facing the way that it's happening already.
BECKER: But these are very prominent people, though, that we're talking about, that are familiar with, we can at least say, and perhaps adherents to, these beliefs.
READ: Yeah. Again, this is an interesting kind of shift that's been going on in Silicon Valley for the last few years, where there are some very prominent people, especially AI researchers, who are fully bought-in rationalists, who really, truly believe in the idea of an oncoming AI singularity that could in fact be disastrous to the human race.
And for a long time, I think that has been a useful fiction for people like Altman, or Microsoft CEO Satya Nadella, or Elon Musk for that matter. It's almost like a marketing tool: We're so close to creating a God in our computers. It feels science fictional and futuristic and crazy. Now that we're at a point where maybe we can start making money from AI, and you have all these guys flapping their gums about how it might actually kill us all, it's become very inconvenient for rationalism to have as prominent a place as it once did. So you see corporate drama like at OpenAI 18 months ago, when Sam Altman was briefly forced out over precisely these kinds of debates and discussions. Rationalism is still enormously influential in Silicon Valley, but I think its influence is maybe waning a bit, or shifting a little bit, as artificial intelligence becomes more prominent as a profit center for the industry.
BECKER: Now, you mentioned the role of effective altruism before the break. So I'm wondering if you could tie that in a little bit for us and explain: is that basically the money backer of a lot of rationalist thought?
READ: To some extent. Effective altruism, as I think I was saying before the break, has branches that are much more focused on things like mosquito nets to prevent malaria, really specifically trying to save the most lives possible per dollar. But there is a heavily abstract version of it, maybe most famously associated with the philosopher Will MacAskill and his idea of what's called longtermism, which is effectively that you should be putting your money and your charitable and philanthropic efforts towards not just people who are alive today, but people who will be alive in the future.
That those people deserve our consideration, perhaps more than the people who are alive today. And again, this is a similar sort of abstract, philosophical game that ends up with real world consequences. Sam Bankman-Fried famously believed in what's called earn to give, which is that you should make as much money as possible in order to donate as much money as possible and channel it towards your preferred effective altruistic charities.
And it's not that much of a leap to take that seriously and then decide, what's a little bit of fraud if it means I'm making more money in order to donate it to even better charities, to further these goals? As in any form of utilitarianism, you run up against the question of whether or not the ends justify the means very quickly.
And for a lot of people like Sam Bankman-Fried, the ends very clearly justified the means.
BECKER: And so when we were talking about this group, the Zizians, explain to me then the split there. Was it because even in those years when the Zizians started gathering some momentum, that it was already clear that rationalism was waning or not necessarily.
READ: No. So, around the time that Ziz really broke off, I would say, as a kind of hedge to all this, that the really hardcore group of rationalists is a tightly knit and, I would say, emotionally intense group of people. And I think that to some extent splits like this happen just because personalities simply don't align.
And I think, given what we know about Ziz, it's maybe not surprising that she didn't align with a lot of other people, that she had trouble meshing and gelling. But really specifically, her initial complaint, around which she protested with her friends and followers and handed out pamphlets, was the sense that there was a sort of rot at the heart of rationalism, which was connected to what were at the time quite serious rumors about sexual assault, rape, even pedophilia, at the upper echelons of the organization.
And interestingly, Ziz was not quite saying, this is all bunk because these people are creeps and predators and awful. She was saying the fact that these prominent people are being accused of being creeps and predators is going to undermine our movement.
This is a complicated issue. But it has come out since then, through a lot of meticulous reporting, that there does seem to be a kind of endemic sexual harassment problem in rationalism, that there are a lot of people who are being preyed on in one way or another. And I think that has maybe contributed a little bit to the fraying that has occurred and the waning of the influence.
BECKER: And so have there been other split off groups besides Zizians that have also said there's a problem here and I would like to take some of these tenets and continue to work on them, but in a different way with different leadership perhaps?
READ: Yeah, I think definitely there are lots of breakaway groups, or people who maybe would still call themselves rationalists but, as I said, don't buy Yudkowsky's whole AI thing. There are also a number of groups that could be credibly accused of being cults or cult-like, in the same way the Zizians are. There have been a few different essays written. In 2021, a woman named Zoe Curzi wrote a post about a nonprofit called Leverage Research, which was a sort of rationalism-adjacent group that featured what they called debugging sessions, where they would articulate demons inside their psyches and then flush them out of their systems using so-called debugging tools. They were expecting to take over the U.S. government. That sounds very much like a cult to me. A woman named Jessica Taylor, who has also written a lot about the Zizians and who was involved in two of the most prominent rationalist groups, the Machine Intelligence Research Institute and the Center for Applied Rationality, wrote about her experiences with those two groups, again involving these sort of debugging, deprogramming ideas, feeling like she was being isolated from her friends and family because she was working on these cosmically high-level problems. There's a person named Michael Vassar who has been accused of cultivating a cult-like atmosphere around himself.
You don't want to over-rely on the idea that these are really specific sects or cults that are creating armies and have bunk beds in some basement somewhere. But there's obviously some dynamic at the very heart of the rationalist movement that gives rise to these kinds of cult-like formations.
BECKER: And what dynamic is it? Would you say it's a psychological dynamic, almost, of the members, who think of leaving conventional societal thought and come up with their own ideas? And there may be some weaknesses there that are exploited or manipulated.
How do you describe what happens here?
READ: Yeah, I think that rationalism itself, the tenets of rationalism, tends to encourage a susceptibility to cults. If you want to be a rationalist, you need to be curious. You need to want to improve yourself. You need to be insecure about your own sort of epistemological, ontological frameworks.
You need to believe that there maybe is a higher, different, better rational truth that you don't have access to yet. You need to be able to pursue that, and then you also need to be able to follow the conclusions that you reach to their fullest extent, so to speak.
And again, these are qualities that I think, in the abstract or in the individual, we might say are good. It's good to be curious. It's good to be epistemically humble and to say, I don't really know everything. But you take it all together and you put it in a place, around a bunch of people who feel like they've really figured it out, who feel like they understand what's coming.
And you've got the breeding grounds for this kind of cult, in that you've got a bunch of people who can be talked into things. And I think there's a sort of historical context here, which is that California, and especially the Bay Area, has for a long time been a breeding ground for seekers and searchers who come from all over the country and arrive in these sort of vaguely self-help, vaguely political, vaguely whatever groups that turn out to be indistinguishable from cults, more or less, and sometimes quite violent cults. So I think that's part of what's going on. I don't think it's an accident, for example, that Ziz's group seems to be composed of a majority of trans people.
Not to say that this is like inherent in any trans person. But if you run away from home, maybe because your parents don't really accept who you are and you arrive in a new place, you are struggling with questions of identity. You are struggling with questions of belonging and family.
You're very intelligent. You're a programmer, you're computer-interested, you're interested in philosophy. You can see how, finding yourself in the wrong place at the wrong time, you might end up with a bunch of people who are gonna take advantage of you.
BECKER: So you think that's a common characteristic here?
It's a characteristic of the group members more than anything else?
READ: Yeah, I think of maybe the people who are susceptible to this. I think the other version of that is people who probably always would've been cult leaders of some kind. The movement is also very friendly to domineering, charismatic, I-know-what's-right kinds of people who can walk in and command attention in that way.
I think it's the combination of the psychological profile with the particular tenets of belief that makes this a difficult brew to be a part of without finding yourself falling prey to cult-like ideas.
BECKER: But typically, doesn't there also have to be a fear, right? A fear of something happening if you don't go along with this kind of thinking. And so is the fear, the destruction of the world by AI and is that a concept that everyone believes and really wants to work hard to avoid?
READ: I think that's the main one.
Yeah. I think with Ziz, the fear is also about animal genocide, effectively. There's a fear that we're all participating in some unbelievable crime against sentience that we need to push ourselves out of. But some version of the AI fear, I think, is the dominant fear around the campuses, around the sort of seminars and groups that we're talking about. And I think that fear also has a sort of positive side, which is the belief that what you are doing is so important that you can put everything else aside.
You can put your family and friends and previous connections and job and everything else aside in order to focus on this thing that's going to destroy us all.
Part III
BECKER: We're talking this hour about the debate over AI research and rationalism. It's a movement focused on protecting society from various things, including runaway artificial intelligence. Now, this idea of rationalism was introduced to Sonia Joseph when she was just 14 years old.
That's when she stumbled on an online post by Eliezer Yudkowsky called Harry Potter and the Methods of Rationality.
SONIA JOSEPH: So it's the same setup as the original Harry Potter, except in this version Harry is supposed to be like a hyper genius. He's like a child prodigy, so he uses his intellect, but also these principles of rationalism to make his way about the wizarding world.
And like it's very much like a fiction that values reason, agency, intellect like thinking from first principles. So you'll have these experiments where like Harry and Draco will try to run these scientific studies as to whether muggle borns are actually inferior at magic and like they discovered that they're not.
BECKER: Sonia loved the fan fiction and she wanted to learn more. When she got to college around 2013, she found a rationalist community that met online and in person to talk about AI research and other ideas. So she started going to rationalist get-togethers. After she graduated, Sonia moved to San Francisco for a job in AI research.
A lot of her roommates were rationalists.
JOSEPH: One of the houses I stayed at, it was a Victorian mansion near Alamo Square. You enter and there are all these rooms and there's often like a common area and people will gather on the couches in the common area and all talk about AI killing us. A lot of these, a lot of the roommates would work at the two major AI labs there. These houses would often become like professional networking grounds for breaking into AI. We'd invite speakers to come over and give talks. We would often like host parties. There would often be like drug use at these houses, but it's always framed under we're gonna use LSD or ayahuasca to explore consciousness.
Like it is not framed in like a party animal rave kind of way. It's framed in like an intellectual, cerebral way.
BECKER: And this world, Sonia says, can become all consuming.
JOSEPH: So if you're living in a house with a bunch of other AI researchers who all believe that we're going to die in two years, and that's like the only thing you're surrounded by, it's 24/7.
There's no escape. And like work and life are very blurred. And of course you have like billions of dollars flowing into this ecosystem. So like power dynamics and like relational dynamics all get blurred up in a way that I think is unhealthy at best and at worst, it can actually lead to these strong cult-like dynamics.
BECKER: Even though she spent a lot of time with rationalists and takes AI safety seriously, Sonia says she considered herself outside of the rationalist community. She says she didn't like the way she felt and the way she and other women were treated by the community. For example, she met one rationalist who told her that he was the inspiration for the Voldemort character in the rationalist Harry Potter fan fiction.
JOSEPH: He would go on to say some pretty concerning things to me when we got dinner. Like you need to think from first principles. If you think from first principles, a lot of human society doesn't make sense. Things like the age of consent is actually like way too high. Like relationships with young girls, like 12-year-old girls are actually normal.
It's like a normal transfer of knowledge. Stuff that, like, I found very morally concerning. But I think there's a stereotype among, like, certain libertarian spheres, which overlap with rationalist spheres, that you can derive morality from first principles, and a lot of things that we take for granted, like human rights, become open for debate again.
BECKER: Eventually Sonia decided she had to get away from Silicon Valley. As she tells it, she wanted some distance from some of the culty behavior. So she moved to Montreal, where she's now a visiting AI researcher at Meta and working on a PhD. She says many rationalist ideas have become well known in the past few years, but for her, they don't have the same appeal.
JOSEPH: Because 10 years ago, it was so niche. It's mom, I'm talking to this like fringe cult on the internet, but all these ideas have become so mainstream just because AI has become mainstream. And in my opinion, this is like largely a good thing because it's harder to be culty when you're mainstream, but I often do miss the esoteric aspect of it. It was like being part of a secret society or something.
BECKER: That was Sonia Joseph, she's an AI researcher who waded into the rationalist community. Max Read is with us today, he's a journalist and author of the Substack newsletter, Read Max. And Max, I wonder, do you think that's a typical experience that we just heard from Sonia there about getting involved in this community and perhaps becoming disillusioned by it?
READ: Yeah, I think it is. As I said, especially around the pandemic, there were a number of people who came and wrote pieces about their experiences. I think especially young women who became involved, and found themselves subject to exactly the kinds of stories that you just heard, recognized that maybe there wasn't quite the sort of pure pursuit of rational ideas at the heart of what was going on, but the same kinds of messed-up social dynamics that they were experiencing elsewhere.
BECKER: I know we spoke a little bit about this before the break, but what about the psychology here? This idea of perhaps feeling special, in a way. It was in the Harry Potter fanfic.
It was this idea that you're special, you're chosen to help save the world from this awful thing. I wonder how that idea of importance plays in here.
READ: I think one thing I would say is I think the kind of person who becomes attracted to this is likely a really intelligent person who maybe has had some trouble in social situations in their schooling, so that you are maybe outpacing your peers in math or science or reading and you're having a little bit of trouble getting along with other people and you come across this version of Harry Potter, say, that speaks to the way you think about things.
And tells you that you are not the weird one necessarily. You're not the crazy one. I'm not saying this is a universal experience necessarily, but a version of that can happen in all kinds of ways. And that is built upon by the structure of rationalist thought, where it's not, you are the special one, you are the Harry Potter, so to speak, who is able to see the world in the right way, who's able to see the sort of the rational structures undergirding the world.
And you can be part of this group of people that's going to save us all from destruction by AI. That's a really powerful story to hear, especially if you're hearing it at a time when you are otherwise not particularly valued or you're not feeling particularly valued yourself.
And I think that's an obvious way that people could get swept up in this stuff.
BECKER: But many times, groups like this are described as having ways to keep people in line, to make sure that members don't speak out and say things that the group doesn't want out in public. About the sexual harassment that you talked about, and Sonia talked about, and things like that. So what are the methods that they use? How do they retaliate if they think something is not accurate or goes against what the group is trying to achieve?
READ: There are a few ways to answer this question.
One is that, for people who notice, say, sexual harassment and feel compelled to speak out about it, but maybe still buy into the basic tenets of rationalism, the leverage is: if you speak out about this, you are going to destroy our mission of stopping bad AI. So you need to keep quiet, because otherwise the whole thing is gonna fall apart.
Another thing is, I mentioned this before the break, a lot of people end up isolated from their friends and family, because this so-called work becomes so all-consuming to them, and they get so deeply involved, that they don't really have anywhere else to turn.
So the leverage there is less a threat than it is the kind of sense that a person maybe has been so fully isolated that there's no one to complain to or nowhere to make a change from.
And then, I think the final problem here is just that, once you're partway in, it's very hard to pull yourself all the way out. If you enter this because you already think of the world in this kind of partly rational way, then exiting it, and seeing exactly all the connections that have messed you up, so to speak, is a really hard thing to accomplish.
BECKER: What would you say has replaced, or now complements, rationalism in Silicon Valley, if anything? If this was a big tenet and, as you said, it's waning a bit, and there are lots of folks questioning exactly what's going on here.
Has anything taken its place?
READ: Not precisely. I think that AI as a business endeavor is so all-consuming in the Valley right now that there's just a lot of excitement for it. Maybe less systemically oriented, but people are rushing to find jobs and startups and niches that they can fill in what is hoped to be a real gold rush. You're also seeing the emergence, and we're getting a little in the weeds here, I apologize to viewers who don't want to hear about it, of new groups of people who call themselves post-rationalists, who maybe have some of the same concerns as rationalists.
The same ideas about wanting to push forward through blinkered conventional wisdom and find different ways of being, but are trying to leave aside the Yudkowsky-ian and nerdy message board ideas and focus more on esoteric philosophical ideas and living in the world.
All of it is in flux, and I would never count out the rationalists, let's say, because I think that the idea of looking past a false world into a fully rational good one holds a lot of appeal to the same kinds of people who become programmers and engineers. And to that extent, there will always be rationalists of some breed in Silicon Valley. And I think a lot is gonna depend on how the industry goes, where the money is coming from and where it's headed over the next 10 years.
BECKER: And I guess with all of this money and power concentrated in AI and this movement, what do you think it means for the people who are doing AI research, who are building AI right now?
READ: I suspect that there are even some of them listening right now, feeling frustrated that I haven't said that probably the large majority of AI researchers, both academic and those working at private companies, are not rationalists in the hardcore sense. They don't really believe in an oncoming homicidal machine God. ... I think that some of them have in fact been frustrated by the dominance of rationalist discourse in their fields over the last 10 years, and probably welcome the waning of its influence, so that they can pursue their research questions or their startups without having to constantly be thinking about whether or not Yudkowsky is gonna claim that AI is gonna kill all of us at any given moment.
BECKER: And that's what he's saying. Stop building it.
READ: Yeah. And I will say like journalists like myself, that is really good copy.
And so it's really nice to write about it. It's really nice to say something like that. But then you go and you talk to people who are deep in these systems. And I also think as all of us start to get more familiar with the capabilities of programs like ChatGPT, it becomes more and more clear how much of a leap of logic it requires to go from the thing that will produce text for you on a whim, to the thing that is going to put us all in, I don't know what, jail or destroy us with nuclear weapons or whatever.
And so that kind of hyper, overstated apocalyptic case just hasn't really panned out. Not only does that mean it gets taken somewhat less seriously over the years, it also means that people who are working on other areas, who have other concerns or other ideas about how AI might work and might be implemented, are given a little bit more space for themselves, for better and for worse. That doesn't necessarily mean we're gonna get harmless AI that's gonna do good for everybody. But maybe we could also think less about the coming apocalypse and more about the way AI systems are implemented, say, in day-to-day politics.
The way DOGE is said to have used certain kinds of AI systems to try and put together its cuts, that's an immediate concern that is too easily drowned out by the kinds of Yudkowskian and longtermist, watch-out-for-this-thing-that's-gonna-turn-us-all-into-paperclips concerns.
BECKER: The Zizians, as a subset of rationalists, really shed a light on rationalist thought and this community, because of the sensationalism involved. What does that mean for average people? When you're writing about this, you're devoting an awful lot of time to understanding it. What does it mean for the rest of us who are not in the AI research community? What message do you take from it, Max?
READ: I think that it gives you a window into how far out a certain segment of the AI research community has gotten, and how far out a certain portion of the software industry has gone. And I think that's a really important insight to have, because it means that when you hear somebody like Yudkowsky saying, we need to stop the production of GPUs, we can't possibly let AI develop any more, you are able to come to that with a recognition that this is a guy only a few degrees removed from a strange, murderous cult across the country. And it doesn't mean that we don't have to hear Yudkowsky out, but I think that we can contextualize a little bit better where he's coming from.
If we understand that there is a social, psychological, cultural, historical dynamic at play that is leading towards a particularly apocalyptic, almost religious view of artificial intelligence that is just really not very likely to be borne out.
BECKER: Really, you don't think that there's any chance that we have to worry about some kind of super intelligence making humans extinct?
READ: No, and certainly not one based on LLMs. Again, I've been using ChatGPT enough to really not be very worried that it's got one over on me.
The first draft of this transcript was created by Descript, an AI transcription tool. An On Point producer then thoroughly reviewed, corrected, and reformatted the transcript before publication. The use of this AI tool creates the capacity to provide these transcripts.
This program aired on June 30, 2025.

