Why the tech world is 'tokenmaxxing'

There's a new word floating around Silicon Valley and the AI world: 'tokenmaxxing.' It means consuming as many units of AI as possible, often racking up multibillion-dollar bills. What's driving this behavior?

Guest

Tim Fernholz, senior reporter at TechCrunch.

Brian Elliott, CEO of Work Forward, which advises companies and leadership on the future of work.

Also Featured

A, engineer at a Fortune 500 tech company. She requested we use only her initial because her employer has not authorized her to speak to the media and she fears retaliation.


The version of our broadcast available at the top of this page and via podcast apps is a condensed version of the full show. You can listen to the full, unedited broadcast here:


Transcript

Part I 

MEGHNA CHAKRABARTI: Gen Z listeners, if there is someone over the age of let's say 30 listening with you right now, please turn to them and explain the definition of maxxing. Because these days we have looksmaxxing, gymmaxxing, sleepmaxxing, frictionmaxxing, personalitymaxxing, lifemaxxing, Tmaxxing.

That one's for testosterone. Stylemaxxing, moneymaxxing. The list goes on and on. Now, since this is public radio, you also have to allow me to indulge in a little bit of lexicography or nerdmaxxing.

Because maxxing dates back to the 1940s and a game theory concept called minimax, which later carried over to role-playing games, where players would adopt a minimax strategy and dump all of their available resources into one single tactic, essentially maximizing effect with the minimum of resources. Okay. Then come the 2010s and the incels, a.k.a. a toxic internet subculture of young men who believed they were quote-unquote involuntarily celibate. That culture is epitomized by self-victimization, extreme misogyny, and a belief that human interaction can essentially be reduced to a game in which those men are unfairly disadvantaged.

Hence the incel application of game theory to actual life through the evolution of a new language on the internet. And here we get the first hints of looksmaxxing, or a lifestyle where a person's entire being is focused on one thing, increasing his supposed attractiveness. Maxxing then crawled out of its dark 4chan and subreddit holes around 2021. TikTok and Instagram were suddenly full of people maxxing on all sorts of things, to the point where we now have beachmaxxing, potassiummaxxing, nothingmaxxing ... and even the Department of Defense social media account deployed the term lethalitymaxxing.

So clearly the term has embedded itself into the mainstream now, with Merriam-Webster's dictionary even defining maxxing as the practice of optimizing a specific aspect of one's life, often to an extreme degree. Frankly, yeah, I've grown a bit sick of the term, because I think it's been used into meaninglessness, or maxxingmaxxing. But fortunately, since internet cultural cycles tend to run down fairly quickly, this Gen Xer is hoping, I'm about to say this, that maxxing will go the way of 6-7. Anyway, returning to a professional tone of voice: it is the optimizing for a single purpose, and even simply the concept of optimizing at the cost of everything else.

That also has its roots in another part of the digital world, and that is Silicon Valley. I just happened to be there last week, and the term gets thrown around almost as often as people seem to breathe. It should come as no surprise that a lot of Silicon Valley companies now, especially the ones in AI, have a new obsession.

It's called tokenmaxxing. Tim Fernholz joins us now to help us understand what that means. He's a senior reporter at TechCrunch. And Tim, welcome to On Point and thank you for On Point maxxing with us today.

TIM FERNHOLZ: Always. Thank you for having me.

CHAKRABARTI: That was the last time I will do that. I promise I just couldn't resist.

Okay. What's a token?

FERNHOLZ: A token is the fundamental unit of information in a large language model. Like the models behind ChatGPT, these are big pieces of software that are designed to output text. And so the fundamental tiny piece of text that you get out is called a token. That's like a syllable, maybe half of a word or a small word, and that is the output of all the AI models we're talking about.

And for this specific purpose, it is a way to measure the output of AI coding tools.
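
For readers who want to see that concretely, here is a minimal sketch using tiktoken, an open-source tokenizer; exact token boundaries vary from model to model, so treat the splits as illustrative only.

# How text breaks into tokens, using the open-source tiktoken library.
# Exact token splits vary by model and tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "A dog is a mammal with four legs."
token_ids = enc.encode(text)

print(len(token_ids))                        # how many tokens this text is billed as
print([enc.decode([t]) for t in token_ids])  # roughly word- and syllable-sized pieces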

CHAKRABARTI: Okay. So how much work someone is asking AI to do in coding. Okay. Got you. And so then, is it just literally measured in the number of tokens that are outputted?

FERNHOLZ: That's right.

CHAKRABARTI: Okay. And so who is using these tokens, or who's using tokens as a measurement of AI output right now?

FERNHOLZ: Anybody making AI models, like OpenAI or Anthropic or Meta or Google, uses tokens as a way to understand what their models are doing. And so if you are asking ChatGPT questions online, you are getting responses in tokens and they're measuring that. But where the rubber is hitting the road is new tools that help software engineers write programs, because that has emerged as the killer app thus far in AI. The thing that people wanna spend money on.

CHAKRABARTI: Okay, so then take us to tokenmaxxing. Obviously from what you said, it sounds like it's an approach that would want to optimize the number of tokens that an engineer or a coder produces using AI.

FERNHOLZ: It's the word produces that's tricky, because we're talking about inputs and outputs. There are new coding tools like Cursor, like Claude, Codex, Microsoft's Copilot where a software engineer can turn to a computer program and say, write me a different computer program to do X, Y, Z.

And it will do it; it will produce all of that code. And that code is measured in tokens. Software engineers, a lot of them are very excited about these coding tools, or very frightened of them, but they are being integrated into their workplaces, and their bosses are saying, oh, you have a magic tool that makes software.

You need to make a lot more software now. And the people who are most excited about that are like, oh, making a lot of software with these coding tools requires using the most tokens and I'm gonna tokenmaxx and make the most new software.

CHAKRABARTI: So it's like an analog or perceived as an analog for productivity.

FERNHOLZ: So that is the problem. It's perceived as an analog for productivity, but it's measuring an input. So you could think about tokens as, you know, what the coders are paying for and then using. It's part of their process. It's not actually the thing that they're turning around and selling to people.

CHAKRABARTI: Okay. Paying for, explain that.

FERNHOLZ: Yeah. In the world of coding tools, this is a service that companies have to pay for, one that Anthropic and the other frontier labs are offering. So they might have an enterprise plan where they pay $20 or $25 a month per software engineer, but those plans are limited to a certain number of tokens.

And if they go over, they start to pay more on an a la carte basis, basically.

CHAKRABARTI: Okay. Wait, so maybe I've got this backwards then. The token is what the coder inputs into the AI platform?

FERNHOLZ: No. The token is a way to measure the activity of the AI coding tool. So when you say, hey, ChatGPT, tell me, you know, what a dog is.

And it says, a dog is a mammal with four legs. Each of those syllables, each piece of that, is an output from that model. It's called a token. Okay. And you have to pay for that. So if you are saying, hey, model, write code for me, and it writes a hundred tokens of code, you have to pay for a hundred tokens.
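
To put rough numbers on that plan-plus-overage model, here is a small sketch; the fee, the included-token allowance, and the overage price are all invented for illustration, since real plans vary by provider and change often.

# Hypothetical plan-plus-overage billing, per engineer per month.
# All prices and allowances below are invented for illustration.
PLAN_FEE = 25.00              # assumed monthly plan fee
INCLUDED_TOKENS = 5_000_000   # assumed tokens included in the plan
OVERAGE_PER_MILLION = 10.00   # assumed a-la-carte price per million tokens

def monthly_bill(tokens_used: int) -> float:
    """Flat plan fee plus metered cost for tokens beyond the allowance."""
    overage = max(0, tokens_used - INCLUDED_TOKENS)
    return PLAN_FEE + overage / 1_000_000 * OVERAGE_PER_MILLION

print(monthly_bill(3_000_000))   # within the plan: 25.0
print(monthly_bill(20_000_000))  # 15 million tokens over: 175.0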

CHAKRABARTI: Okay, great. But for the companies that are paying, essentially, as you said, for these AIs like Anthropic's to do work for them, the executives at those companies see that as a good thing. Because they are maximizing what the AI can do for them regarding creating new code.

FERNHOLZ: Yeah, so software engineering is a very competitive business.

People are obsessed with productivity, with beating out their competitors. And so if they have an AI coding tool, they think, oh, this will let our software engineers be 10 times more productive. 10x is a key phrase in this world. We need to make sure everyone is using these tools and everyone is being 10 times more productive.

And so we are going to reward and incentivize people for using the most tokens.

CHAKRABARTI: I see. Okay. Got it. So it's like gamifying the process so that it encourages people to essentially use AI more.

FERNHOLZ: Absolutely.

CHAKRABARTI: Okay. So describe to me what this culture looks and feels like in some of these companies.

FERNHOLZ: So as always, it is difficult to separate the experience of social media from reality. So you will see a lot of people, particularly people who see themselves as influencers in the AI space, talking online about tokenmaxxing: I used this many tokens, and I'm doing all this, and I have 10 AI agents working overnight.

There's this sort of cultural thing about it, and then there's a reality of business practice, where at these large software companies, managers are now tracking way more intently the productivity of their workers and saying, how many proposed changes do you have for our code base? And oh, do you have five times as many as last month?

If not, you better start using these coding tools. And so early on we saw tokens as maybe one metric that they used to encourage AI adoption, but increasingly they're focusing on measures like how many changes you're proposing to the code base, as a more sort of raw output measure.

CHAKRABARTI: Okay. So yeah, we'll talk about that in a second because exactly what it's measuring is really important. But I understand that in some places there's even tokenmaxxing leaderboards.

FERNHOLZ: Yes, there are some companies that have, like on their internal wiki or whatever, a real-time dashboard ranking their engineers or engineering teams by either how many tokens they're using or how many pull requests they've opened, pull request being the term for a proposed change to the code base, to try and get people who are not adopting these tools to adopt them.
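
Stripped of the dashboard software around it, the ranking itself is simple. Here is a toy sketch with invented names and numbers, just to show the kind of leaderboard being described:

# A toy leaderboard of the kind described above: rank engineers by a
# single activity metric. All names and figures are made up.
usage = {
    "engineer_a": {"tokens": 42_000_000, "pull_requests": 31},
    "engineer_b": {"tokens": 7_500_000, "pull_requests": 12},
    "engineer_c": {"tokens": 19_000_000, "pull_requests": 48},
}

def leaderboard(metric: str):
    """Sort engineers by one activity metric, highest first."""
    return sorted(usage.items(), key=lambda kv: kv[1][metric], reverse=True)

for name, stats in leaderboard("tokens"):
    print(f"{name}: {stats['tokens']:,} tokens, {stats['pull_requests']} PRs")

Note that both columns measure activity, not the value of the resulting software, which is exactly the tension the guests go on to discuss.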

CHAKRABARTI: Okay. Tim, I'm a very big believer in healthy competition. I actually think it's extremely motivating. But there's something about this that has a little bit of a broey feel.

FERNHOLZ: You're not wrong. There is certainly a culture in Silicon Valley around AI, around this maxxing theme where anything you're doing that seems to be good, you should do it to the extreme.

CHAKRABARTI: And so is this really taking off in a meaningful way across Silicon Valley right now? Or is it still something that's in maybe just a couple of companies?

FERNHOLZ: No, this is something that is definitely spreading across Silicon Valley, because it is ultimately something that emerged somewhat organically from software engineers themselves.

And if you talk to people who use these tools, even as hobbyists, it's: I built a little program that I've always wanted to build, or I finally built the prototype for a startup that I want to launch, and I was able to do it with these tools. There are aspects of them that are very exciting, and people want to adopt them.

And there is a sense that this is the future of enterprise software engineering.

Part II

CHAKRABARTI: We did actually speak to an engineer at one of these companies.

We're referring to the engineer as A. She works at a major Fortune 500 tech company, and A told us she rarely even writes code anymore.

A: My job now is more of writing a prompt to an LLM to write code for me or manage AI agents to do an analysis of data. So my job is really like verifying the code that the AI writes. They're tracking what percentage of code that is shipped to production is written by AI. And the higher that percentage, the better.

CHAKRABARTI: Now I want to clarify some things. We're referring to A just by her initial because she is not authorized to speak to the media. Her employer does not allow it, and A fears retaliation, and even possibly losing her job, for having spoken to us.

So also at her request, we're not using her voice. Her words are being read by a WBUR staff member. But they are the words that she told us in our interview with her. Now, A told us her company's leaders believe that the more people use AI, the more efficient and productive they'll become. She's heard phrases like 10x productivity.

You heard Tim talk about that before, or even 100x productivity.

A: They say that they want our productivity to go up and our efficiency to go up, but they don't really have a way to measure that. What does even 10x productivity mean? 10 times the code? Because the code is more bloated now. Any code that is shipped into production is mostly longer than it would've been if written by a human.

So we have more code, but that doesn't mean that we're more productive.

CHAKRABARTI: And A says she does not like how her job has changed.

A: So I know many people are very enthused about AI and really excited to have AI write all the code for them and do all these things for them, and I don't share that same enthusiasm.

I don't quite enjoy the job the same way that I did back then because the draw to it for me was solving problems, whereas now I'm babysitting AI.

CHAKRABARTI: So that's A, who works at a major Silicon Valley Fortune 500 tech company. Tim, she actually hit on something right there that you were hinting at earlier: what does productivity mean?

And A just told us there's more code coming out, but it actually doesn't mean it's better. And in fact, it's maybe less elegant than if it was written by a human.

FERNHOLZ: That's right. And so these coding tools hit Silicon Valley, let's say, last summer, the summer of 2025. Adoption ramped up. People started using them.

We hit tokenmaxxing maybe a few months ago, but now we're starting to see what tokenmaxxing actually does. And so there are various reports out there. For instance, the CTO of Uber said, oh, we blew through our AI coding tool budget for all of the year in just a few months. So suddenly we're spending a lot more money.

And then there are companies that track the productivity or the work of engineers at big organizations. And what they're finding is there is a lot more code being generated and accepted, but there is also a lot more code being rewritten and a lot more time spent reviewing code and figuring out how it works.

One metric is lines of code deleted in the, like, two weeks after they were first written, which has increased 861% under AI adoption. So that's one example of, we're getting a lot more code, but it turns out we need to do a lot more work understanding it, reviewing it, and fixing it after it is created.
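
That 861% figure describes a churn-style metric: how much newly written code gets thrown away within two weeks. Here is a rough sketch of the underlying idea with made-up records; real tools would derive these dates from version-control history.

# Sketch of a two-week churn metric: what fraction of added lines are
# deleted again within two weeks. The records below are invented.
from datetime import date, timedelta

changes = [  # (date line was added, date it was deleted, or None if kept)
    (date(2026, 3, 1), date(2026, 3, 9)),   # rewritten within two weeks
    (date(2026, 3, 1), None),               # still in the code base
    (date(2026, 3, 2), date(2026, 4, 1)),   # deleted, but after two weeks
]

WINDOW = timedelta(weeks=2)
churned = sum(1 for added, deleted in changes
              if deleted is not None and deleted - added <= WINDOW)
print(f"two-week churn rate: {churned / len(changes):.0%}")  # 33%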

CHAKRABARTI: And so the people doing the understanding, reviewing and fixing. I just said people, but are they people? Or are we asking AI to do more of the reviewing?

FERNHOLZ: So you will not be shocked to know that there's a lot of interest in getting AI agents and AI tools to do more of this review. But ultimately things break or they don't work, or these models make mistakes and it comes down to the humans to figure out what is going on and make sure it gets integrated into the code base.

And there's another aspect of this, which is that at software companies, it's not just the engineers. There are user experience designers and product managers and salespeople, and all of these other sorts of inputs that go into software production that are not just code. And now those processes are being overwhelmed by this flood of new lines of code.

CHAKRABARTI: Okay. Now, Tim, you had said a little bit earlier that the enthusiasm for, I said I wouldn't do this, but here I go, maxxing, the tokenmaxxing culture within these companies, is coming from the very top. And we have an example here. This is Jensen Huang, CEO of Nvidia. Of course, they are the AI chip maker whose current market cap is, by the way, a cool $5.17 trillion.

Anyway, Huang was on the tech podcast All In last month, and he said companies should be spending a lot on AI tokens.

HUANG: Let's say you have a software engineer or AI researcher and you pay them $500,000 a year. That $500,000 engineer, at the end of the year, I'm gonna ask them, how much did you spend in tokens?

And that person said, $5,000. I will go ape something else. Yes, right? If that $500,000 engineer did not consume at least $250,000 worth of tokens, I'm going to be deeply alarmed.

CHAKRABARTI: So that's Jensen Huang, CEO of Nvidia, on the tech podcast All In just last month. Tim, first of all, respond to that. He wants a $500,000 engineer, a.k.a. that's what that person makes, to be consuming half their worth in tokens.

FERNHOLZ: So the most important thing to understand about that comment is that every time someone consumes a token, Jensen Huang and Nvidia make money.

CHAKRABARTI: Exactly.

FERNHOLZ: He's selling right now.

CHAKRABARTI: He wants those chips to be sold.

FERNHOLZ: Yes, he does.

And there's nothing wrong with that. But you gotta think, okay, why does he think this is important? And for me it's an analogy: I pay a bus driver a salary, $100,000, whatever you pay a bus driver. You don't look at their gasoline consumption and say, oh, you should have used more gasoline.

No, you want them to use the right amount of gasoline to drive the bus where it has to go. And so now these software companies are trying to figure out, oh, what is the right way to think about token use and how do we measure the actual return on that investment and not just the cost of buying all the tokens.

CHAKRABARTI: Okay. So that is a critical question. And Tim, hang on here. It's the perfect time for me to bring in Brian Elliott. Brian is the CEO of Work Forward, which advises companies and leadership on the future of work, and he's in San Francisco. Brian, welcome to On Point.

BRIAN ELLIOTT: Glad to be here with you guys.

Fascinating and great conversation.

CHAKRABARTI: So let's talk about how tokenmaxxing reflects the culture of work in places like Silicon Valley, which oftentimes eventually filters into other sectors across the country. What's your take on tokenmaxxing and what it says about work these days?

ELLIOTT: So part of this just comes back to the fact that Silicon Valley does tend to trickle out. And what's trickling out isn't necessarily tokenmaxxing, but you'll see a lot of companies talking about tracking AI usage, mandating AI usage, and that's going to count against your performance evaluation, as an example. And applying that broad brush stroke of, hey, we're getting a lot of pressure around this from our boards to adopt AI, and so therefore I expect everybody in the company to start using this.

Otherwise, it goes on your permanent record. And that sort of mantra sounds familiar to folks, maybe, because it's the same thing we heard coming out of the post-pandemic return-to-office push, which is: how many days a week you're in the office is going to count on your permanent record. In either case, what you end up with is people performing to the metric. Not necessarily what you want, but you get activity out of it.

CHAKRABARTI: Okay. So when you say permanent record, what you're saying is that people are being evaluated on just their token use.

ELLIOTT: That's right. So their performance evaluation, you'll see this out of a growing number of companies that will say, Hey, look, in order to adjust for the value of AI, we want to make sure everybody's using it.

So therefore, your performance rating will be dependent on your usage. We're going to consider that as part of it. Or in some cases, in some companies, mostly in tech today but starting to become more of a contagion outside of tech, you can't get a promotion if you're not using AI extensively.

And the initial focus on this was in engineering, but it's starting to spread in some places as well.

CHAKRABARTI: Okay. So as you've heard already, I do indulge every once in a while in mocking Silicon Valley culture a little bit/a lot. But at the same time, I want to be clear that I respect the amount of development and intelligence there, and the forward-looking culture of really trying to change the world.

So I'm not saying that they're not smart people there. But Brian, there's something in what you just said that seems completely backwards, right? That we're going to evaluate employees on their token use because we've already dumped X amount of money into using AI. So the work then becomes justifying the spending decision, rather than the work being, we're going to create something new using the AI tool.

ELLIOTT: Yeah, absolutely. And you're seeing this because there's so much pressure going on. There've been a couple of studies in the past year. BCG had one, for example, that said something like 55% of CEOs feared that they were going to get removed by their boards if they didn't get real results out of AI in the next couple of years.

So that pressure ramps up. They then turn around and spend large sums of money on AI tools. And they want to see something for it, right? And because, as Tim noted already, it's really hard to measure outcomes, but it's not that hard to measure activity, people fall back on what's easy: I'm going to insist on activity; I'm going to insist on measuring that you are taking up these tools and using them.

And what you get out of that is you get usage; you get people to perform to the metric. That doesn't mean that it actually changes anything about the business. And it actually doesn't even mean that they're using AI in ways that are actually productive.

CHAKRABARTI: Tim, this happens a lot. Tim Fernholz, let me turn back to you, because I think it's not just Silicon Valley, they may just be really good at this, but everyone has a kind of measurement bias, right?

We tend towards valuing the things we can actually measure, because we can see, today it was worth $10 and tomorrow it's worth $12. Like, I can see that. But that measurement bias does not always track with an actual improvement in the product. Is measurement bias a common thing in Silicon Valley?

FERNHOLZ: Oh, sure. There is a phrase that's not from the tech sector, but it's something like, once a metric becomes a target, it ceases to be a useful measure. And so anytime you see these companies set up a target for something, maybe the most common one we complain about is like attention and engagement.

We see pernicious things result, where we see social media just trying to get reactions with extreme content to generate engagement, which is something that the companies try to create. But I think a lot of people would say, oh, that's creating an unpleasant experience online.

And this has been a struggle with software engineering since the inception of the field: how do you really know if an engineer is good or not? For a long time, people focused on lines of code, but if you're a good programmer, you want to be able to do the same task much more quickly in a shorter program that's more elegant and efficient.

So that sort of metric went out, and now people on a practical level are struggling to measure the return on investment for these AI coding tools. So they're trying to figure out, can we measure rewrites? Can we measure churn? How can we actually put our finger on the return on these tools? And it's not yet obvious that they can, or that they've figured that out.

CHAKRABARTI: Okay. Let me bring up a differing point of view here. We have some tape from Boris Cherny, who is the creator and head of Claude Code at Anthropic. He was on another podcast, called Lenny's Podcast, in February, where he said engineers shouldn't actually have any limits placed on the number of tokens they use.

CHERNY: Don't try to cost cut at the beginning. Start by just giving engineers as many tokens as possible. At Anthropic, everyone can use a lot of tokens. We're starting to see this come up as a perk at some companies. If you join, you get unlimited tokens. This is a thing I very much encourage, because it makes people free to try these ideas that would've been too crazy. Then you can figure out how to scale it.

CHAKRABARTI: Okay. So that's Boris Cherny from Anthropic. So Brian Elliott, this is a really different POV essentially on this. And I actually think there's some merit to it because oftentimes people, let's say their creativity at work may be resource limited.

But if you're saying, we're going to give you this tool that has no limits to it, could that not possibly actually generate even better work?

ELLIOTT: Absolutely. And what Boris is getting at is a really key difference, though. It's the difference between giving you the resources so you can do the job more effectively, so that you can actually experiment and try new things, and treating that as a measurement.

Here's the difference, and I've talked to a bunch of companies that have actually gone down this path. If you say to someone, what we're going to do is give you unlimited resources, we're going to let you experiment and try new things, and we're also going to support you with the time to do that experimentation.

That's actually the most critical factor. Then what you get is people trying out new tools, new technologies, building new capabilities, and often sharing them with their teammates. The opposite, though, is often the problem, which is when you start taking the leaderboard to its extremes: we're going to judge you on the basis of your usage of these things.

To Tim's point about the old lines-of-code measure for developers, what you're then doing is incenting people individually on their own output. And you're saying, we're going to judge you on the basis of that. If you add that into what's going on in Silicon Valley, and in some places more broadly, which is layoffs, it starts to get really dangerous.

So you're seeing this in companies. What they're doing is saying, hey, look, we're going to judge your performance based on your usage of these tools. We're going to take the bottom 10% of you who don't perform well, and we're going to put you on a layoff list. Then, as people do the activity, they actually generate the output.

But they don't necessarily share it with their teammates, because that's not the incentive. The incentive is, how do I perform individually? What I've seen in a bunch of companies is, put that on its head. Give people the time, give people the resources, but make the team goals, the outcomes, the thing that you're going to reward them for.

And then what you get is what Boris had described, which is people that experiment, they share with each other. They actually teach each other how to use these things. They coach people and then you get real progress and you get things moving forward as opposed to treating this as a potential punishment.

CHAKRABARTI: Okay, Brian, this is so fascinating. And by the way, just for folks who don't know, this has already been a very brutal year for Silicon Valley employees. I'm seeing that there are, what, maybe over 100,000 tech jobs, actually, this is nationwide, that have already been cut from January through where we are currently in April.

For example, Meta cut, what, 10% of its staff, some 8,000 people. We've seen similar big reductions at Oracle, Amazon, et cetera. So Tim, basically Brian's describing what seems to be a totally perverse incentive: to top the tokenmaxxing leaderboards to save your job, rather than to do good work.

And we just got about 15 seconds here before our next break.

FERNHOLZ: I think what Brian said is exactly right, that they need to figure out ways to encourage productive collaboration and experimentation with these tools. And I think Boris's point about, unlimited access is interesting, but it's important to remember that's like someone from the pencil factory coming and saying you really got to make sure your workers have unlimited pencils if you want them to do their best work.

Part III

CHAKRABARTI: Gentlemen, if I could, I want to return to A. She's the employee at a huge Fortune 500 Silicon Valley company. And just to remind everyone, A is not authorized by her employer to speak with the media, and she absolutely fears retaliation, or even losing her job, for doing so.

So we are using only her initial and in order to maintain her anonymity, we are using the voice of a WBUR employee who's reading A's actual words. Now, A told us that at her company, employees are required to use AI five days a week.

A: There are no specific requirements as to how much you use in those five days or for what tasks, but it has been made very clear that the expectation is to use AI for everything.

They are tracking AI tools to see how often we're using them and how many tokens we use. They also give us a score based on our AI usage. So the more tools you use and the more often that you use them, the higher your score is.

CHAKRABARTI: Now, A told us the answer to a question you are probably wondering about, and that is: it's not hard to use up a ton of tokens on useless AI tasks. But A isn't sure how many people at her company are actually doing that.

A: You can increase the context window. You can just throw in a huge prompt, tell it to read all these docs, and that will just increase the tokens a lot. I don't know if people do that intentionally to increase their token usage. I think it's more they're trying to increase their AI usage and are incidentally bringing up their tokens.
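
What A describes, padding a prompt with documents to drive up token counts, is easy to see with a tokenizer. A small sketch, again assuming the open-source tiktoken library, with placeholder document text:

# Padding a prompt with documents inflates input-token counts, and with
# them the measured "usage." The document text here is a placeholder.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

question = "Summarize our deployment process."
docs = "\n".join(["...contents of one internal doc..."] * 200)

print(len(enc.encode(question)))                # a handful of tokens
print(len(enc.encode(question + "\n" + docs)))  # thousands of tokens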

CHAKRABARTI: A says that while she and some of her colleagues are laughing about tokenmaxxing, there is nevertheless pressure and competition to maximize AI use.

A: It is definitely a bit of a joke around the company, but it is also somewhat real. So people are spending time building AI tools that are not necessary, maybe just for fun or just to get to learn the AI tool and to improve their AI score.

They're also just having AI do things like reply to messages because they can, and it improves your AI score and ups your tokens.

CHAKRABARTI: So once again, that's a software engineer at a major Silicon Valley tech company. And since she's not authorized to speak to the media, you heard her quotes read by a WBUR staffer.

Brian Elliott, respond to what you heard A say there. That there seems to be some evidence, at least in what she's seen, that these tokens are being used for basically non-productive tasks.

ELLIOTT: Absolutely. Tim talked about this. Goodhart's law says that humans will take any metric or measure and just act to it, right?

That they'll perform to it. That means they'll do good things with it. They'll do bad things with it. Another way to say that is play stupid games, win stupid prizes. You know, you're doing something because the boss tells you to do it and you're checking the box. There's a couple of other problems though that are starting to show up around that.

One of those is this concept of AI workslop. And that one's pretty pervasive. A lot of people, even outside of engineering, have seen this. Meaning, someone asks you a question, and instead of actually thinking about it, trying to provide context, understanding what the question means, you just turn around to ChatGPT, generate an answer, and send it back out to a teammate or your boss, who then has the workload of sitting there looking at this really long message they've gotten back, realizing it's coming from AI, and trying to figure out if it's right or not.

So that's one of the things that's starting to show up. The other, which is actually more pernicious among engineers in particular, is what's being termed AI brain fry. AI brain fry is, you're using these coding agents, and you're using multiple of them at a time. If you're using one, you're going back and forth with it.

It's like having a teammate that's really smart and giving you back code that you're judging. You're doing two of them at a time. You're doing three. Some of these guys who are climbing the leaderboards are doing five, or they're doing 10, and they're bragging about it. Which sounds really great, until you realize what's happening is they're getting back really deep context.

They're getting back things that they have to think hard about to judge whether they're right or not, and that cognitive load creates what gets called brain fry. That sounds like it's just a problem for that particular individual, which it is, because they get burned out and they want to quit their jobs.

But the bigger problem is that they're more likely to miss something. They're more likely to make a major error. I don't know if y'all saw, there was a story last week about a major Wall Street law firm that had another one of these errors show up in court filings.

That's a different type of error, but those types of errors go up by 39% among people who have AI brain fry. And so the risk of telling people to go faster, to consume more, to build more with AI, and to have the activity be the measure, is that you get not only activity, but you get mistakes.

CHAKRABARTI: Brian, what that sounds like to me, or at least what it's akin to, are the scandals we hear about occasionally, when hospitals were running multiple surgeries with one head surgeon just going between the rooms. And that actually does happen.

I live in the Boston area, and we've had those scandals around here. They may be able to do more, but the giant fear is, are they doing it well? So Tim, let me ask you, we're talking here about the threat of diminishing returns with the tokenmaxxing culture.

I do have some measurement bias myself, but are there any ways to measure whether the value that's being created by this tokenmaxxing is actually worth it? Because somewhere in here there's truth to the belief that maybe AI can do some jobs better than a human engineer.

FERNHOLZ: You talk to the people in this field, and they say, even as they complain about the intensity and the AI brain fry, they do think this is the future. And I think one of the things about AI is that it is very jagged. So there are coding tasks where there's a lot of open-source data that the models are trained on, that they're very good at.

But there are a lot of coding tasks, either because of the domain or the language, that they're not good at. And another thing is they're not very good at interacting on a systems level, which is a big deal for these organizations that run massive code bases. So it's not clear yet whether there's going to be changes, improvements in these models that make them better.

But I think what we're going to see is changes in business practice and how software is made in reaction to these tools.

CHAKRABARTI: Okay. In one of your articles, though, you wrote about one company that was doing this. They were monitoring their engineers' use of AI and their output, and they found that people who were really good at using the AI were actually twice as productive as regular engineers.

So was it worth the cost for them?

FERNHOLZ: There is always a 'but' there, and the 'but' is, it was at 10 times the cost in tokens used by other employees who were not tokenmaxxing as much. And so, two times productivity, that sounds really cool. Ten times the cost is a problem from a business point of view.

And so even as we try and figure out employee metrics, at the end of the day, the real measure of this is going to be accounting: how do companies show a profit from these investments? And doing it at 10 times the cost probably is not going to be the path there. We're already seeing big companies say, oh, using these AI tools is 10% of our software budget. And that is one reason that they're looking at layoffs so much, even though we haven't quite proven out the actual gains from this technique.

CHAKRABARTI: Okay. Brian, I should have asked this question earlier, but many people are concerned that the more that AI is being used, one of the things that's happening is it's being trained to do the jobs that the people who are using it now are doing. Is that a possibility here with this tokenmaxxing culture?

ELLIOTT: It absolutely is. Or at least that's the desire in some places. Meta is the one that announced, last week, the 10% layoff of their staff. The news also broke last week that Meta was installing software on their workers' laptops to track mouse clicks and keystrokes, in part to better train AI models to do more complex work.

You put those two things together and people start getting concerned about jobs and layoffs. Tim touched on this; it's the big tech firms that are laying people off the fastest. And what they're generally saying is that it's the spending on AI that's going up. That means that in order to hit quarterly earnings, they need to cut back someplace else.

And what they're cutting back on is headcount. But that message of, AI is here to replace your job, is a bit of a contagion. There's really good research out there, though, that says that will be the impact in some areas, but not others. The more that your job is really just a set of individual tasks, like customer service, the more risk you actually face.

In other areas, like engineering specifically, the message is not so clear. In fact, if you look at job postings for the past six months, we're actually at a three-year high in job postings for engineers. That would surprise a lot of people. But what's happening is these engineers are becoming more productive, because the combination of a good engineer who's a good problem solver, who's a good systems thinker, who can actually work well with a product manager and a designer, is now worth more, because they can get more done in a day by using the tools to do the coding aspects of the job.

Which, it turns out, if you talk to most engineers, is only about 20 to 25% of what they actually do. Places like engineering, which have always been supply constrained, have the potential to grow. And that's really important as we think about what the messaging is overall.

The more that I think some of the big AI providers tie AI and job losses together in the narrative, the more people are actually going to resist when instead in a lot of places, this actually does have tremendous potential to make people's jobs better.

CHAKRABARTI: I'm glad you said that, because in general, a really great tool does help people do their work better, right? It unlocks potential, as you said, that wasn't there before, no matter what the job is. But this is why AI is so complicated, right? Because it is a transformative tool that will change civilization as we know it, for better or for worse, or for both.

But at the base of this, Brian, you said a little earlier that lots of companies are feeling pressure from their boards or leaders are feeling pressure from their boards to use AI. So as we have greater AI adoption, outside of the software engineering world, pressure to use these tools might actually fall on many more people across the country.

But none of that gets to something which is fundamentally human. And Brian, I know you've also led teams in your previous life at Google, Slack, Salesforce, et cetera. So what would be, in your mind, the best way to introduce AI tools in a way that actually makes a team do better work?

What are the fundamentals there we should be thinking of?

ELLIOTT: The magic word in all of that is team. So instead of thinking about this as an individual-by-individual activity: I've worked with enough companies over the past couple of years on AI adoption, plus my own experience, to know that if you actually put in place time for a team to learn together, and have teams judged on the basis of the outcomes and results they create, you get a lot further, a lot faster. The example here is pretty straightforward: a team that gets a couple of hours a week where they're just experimenting together on using AI, where they're trying this out together, does a couple of things.

Number one, it opens up a bunch of questions about what's okay in terms of how we use this. What are our norms for it? Is it okay for me to use AI on my performance evaluation, for example? Is it okay for me to use it to draft a response to a customer, versus a draft response to a coworker? That starts getting people more comfortable with using the tools in the first place.

The second is, if a team is actually judged on the basis of what they do as results, what outcomes they create, then the incentives are a lot more aligned. There are a couple of firms I've worked with that look for AI champions, meaning people who are really good and deep at this, and they know that if there's a champion embedded in a team, that team's going to get a lot further, a lot faster.

So it's not an individual game. It's a team-based game, because some people will adopt this more quickly. They'll become the sort of AI experts, the builders with the tools, and do things that make other people's work better. So if we can shift our locus from individuals to teams, I think we get a lot further, in ways that are more productive.

CHAKRABARTI: Point totally well taken. But I'm thinking the siren song of things like tokenmaxxing is that it's instantaneous measurement, right? So in order to create the kind of work culture you're talking about, we have to come up with different kinds of measures, right? And also maybe somehow create a culture of a little more patience among company leaders.

ELLIOTT: Yeah, and the patience doesn't have to be huge, by the way. I worked with a couple of leaders who basically said two hours a week for a month will be enough to get you started.

And that's really the most basic core: give people a little bit of time to do this. People are more concerned about having the time than they are about being laid off, in fact, in some of the research. The harder job, as you noted, Meghna, is getting alignment at the leadership level about what our clear goals are, what our clear priorities are as an organization to get work done.

That does take hard work, but it's the thing that actually unlocks the potential for this in the first place. It's what we've been talking about all along. It's what Tim noted earlier on, which is, just because you're creating new features or new lines of code doesn't necessarily mean you're creating something customers want and are going to pay for. But getting lined up around how we're actually going to measure success as a team is the most important thing that you can unlock. And if you really want to get the most out of these tools, you're going to have to put in the hard work of having those conversations.

The first draft of this transcript was created by Descript, an AI transcription tool. An On Point producer then thoroughly reviewed, corrected, and reformatted the transcript before publication. The use of this AI tool creates the capacity to provide these transcripts.

This program aired on April 28, 2026.
