Meet Tokenmaxxing: The AI Status Game Taking Over Big Tech

Inside the game-like culture of tokenmaxxing, where tech employees compete on internal AI usage leaderboards.

Somewhere inside Meta, there’s a leaderboard that doesn’t track sales closed, bugs fixed or customers retained. It tracks how many AI tokens each engineer has burned through that week. And according to reports, some people are competing very hard to be at the top of it.

Welcome to tokenmaxxing: the new status game taking over big tech, where the goal is to maximise your consumption of AI tokens (the units of text that large language models process), not necessarily because the work demands it, but to signal that you are, emphatically, an AI person.

As reported by the New York Times, one engineer at an AI-first company processed around 210 billion tokens in a single week.

That’s right, 210 billion. That’s roughly enough text to fill Wikipedia dozens of times over. Whether that produced anything useful is, seemingly, beside the point.
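As a rough sanity check on that comparison – using assumed figures, not measured ones (English Wikipedia at roughly 4.5 billion words of article text, and an average of about 0.75 English words per token) – the arithmetic works out like this:

```python
# Back-of-envelope check: how many "Wikipedias" is 210 billion tokens?
# Both inputs are rough assumptions, not measured figures:
#   - English Wikipedia article text: ~4.5 billion words
#   - average English prose: ~0.75 words per token
WIKIPEDIA_WORDS = 4.5e9
WORDS_PER_TOKEN = 0.75
TOKENS_BURNED = 210e9

wikipedia_tokens = WIKIPEDIA_WORDS / WORDS_PER_TOKEN  # ~6 billion tokens
multiples = TOKENS_BURNED / wikipedia_tokens

print(f"~{multiples:.0f}x the text of English Wikipedia")
# prints: ~35x the text of English Wikipedia
```

Tweak either assumption and the exact multiple shifts, but it stays comfortably in "dozens of times over" territory.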

It’s funny on the surface, but also worth paying attention to, because it reveals the gap between how companies measure AI adoption and whether that adoption is actually working.

What Is Tokenmaxxing, Really?

The term has spread to companies including Meta, OpenAI and Shopify, where internal dashboards track how many tokens each employee or team uses, turning raw AI consumption into a visible performance metric. According to Gizmodo, some managers are reportedly factoring token-volume numbers into performance reviews, rewarding heavy users and nudging those who use less.

The official framing is that this is about “driving AI adoption,” and fair enough – there’s a reasonable version of that argument. If you want to build an AI-native culture, you do in fact need people to use the tools. But in practice, as the ChatGPT sycophancy problem demonstrated, when you optimise for how something looks rather than what it produces, you tend to get a lot of impressive-looking activity that doesn’t quite add up to anything.

Generous token budgets have also quietly become a recruiting perk, sitting alongside health benefits and free lunch. Some engineers are reportedly spending thousands of dollars per month on AI agents just to automate as much of their workflow as possible, and to stay competitive on the leaderboard.

According to the Economic Times, aggressive tokenmaxxing is turning what should be a cost-saving tool into a meaningful new corporate line item.

The Goodhart’s Law Problem

If you’ve spent any time around startups or operations, you’ll recognise this pattern immediately. This is Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure.

Token usage made sense as a proxy for AI adoption when the question was “are people using these tools at all?” But once it becomes a metric that managers track, report on and reward, it stops measuring genuine productivity and starts measuring the ability to look productive. That’s a very different thing.

Internal AI usage can rise sharply while companies still struggle to translate that activity into clear, measurable gains in engineering velocity or revenue. According to recent data, only 31% of UK firms report positive ROI from AI. That number should give every tokenmaxxing advocate something to think about.

What Smart Founders and Operators Are Pulling From This

The real story behind tokenmaxxing is this: the companies involved aren’t staffed by idiots – they’re some of the most sophisticated operators in the industry. And yet they’ve ended up in a situation where engineers are burning billions of tokens to climb an internal leaderboard of unclear purpose.

That happens because the pressure to appear AI-native is real, and it’s coming from every direction at once: boards, investors, hiring narratives, press coverage. As AI tools become increasingly efficient, the temptation to deploy them everywhere and measure everything is hard to resist.

But the founders and operators who come out of this AI adoption wave ahead will be the ones who resist the urge to measure activity and instead measure outcomes. Not how many tokens your team burned this week, but whether the code shipped faster, the customer support got better, the product decisions got smarter.

Those aren’t as entertaining to put on a leaderboard, but they’re the only numbers that actually matter.

The Token Budget Arms Race Isn’t Slowing Down

None of this means tokenmaxxing is going away any time soon. If anything, it’s likely to spread.

As agentic AI tools become more capable and more deeply embedded in engineering workflows, token consumption will naturally increase, and the temptation to treat that increase as evidence of productivity will likely grow with it.

The question for founders and operators is whether the metrics they’re tracking are measuring genuine value or just measuring themselves. Leaderboards are fun, but Goodhart’s Law doesn’t care.

If your team is tokenmaxxing, ask what they’re producing with all those tokens. If the answer is a great position on a dashboard, you might have a problem worth solving before it gets expensive.