Jensen’s not telling the whole story about AI Tokenomics
It’s a great pitch. It’s also only one third of the whole story.
Jensen Huang made “tokenomics” one of his signature words. At GTC 2026 he gave us a formula: Revenue = Tokens per Watt × Available Gigawatts. He pitched “tokens for employee compensation”. And he described Nvidia’s AI infrastructure as “factories that produce tokens”.
There are no real surprises there: the framing logically supports Nvidia’s business.
But enterprise analysts pushed back almost immediately. Larry Dignan at Constellation Research wrote what a lot of CIOs were already thinking:
“Our company doesn’t sell tokens.”
JPMorgan Chase sells financial services. Walmart sells retail. GM sells cars. For these companies, tokens are a cost - closer to raw materials in manufacturing than to the finished product. What their CIOs want is cheaper inference, better ROI, and a clear answer on when all their AI spending will start paying for itself.
Their perspective is valid, but it’s different from Jensen’s. And even when you combine the two, that’s still only two thirds of the story.
There’s another perspective. One more story that neither of them is telling. If you earn your living as a knowledge worker, this is the one that will matter most to you.
The structural model
Mark Pesce’s Post-Watershed framework gives us a solid map of how the different pieces of AI Tokenomics all fit together. I wrote about it in detail recently in Are You a Horse?, but the short version is this:
Infrastructure creates tokens - the units of cognition that come out of datacenters, GPUs, and all the systems built on top of them.

Harnesses spend those tokens - the tools, agents, workflows, humans and businesses that put this cognition to use.

Alpha is the value left over once you account for what you spent.
The central insight:
“Alpha can’t be cognitive.”
If you’re betting on more, or better, “thinking”, then this framework suggests you won’t be able to maintain any sustainable advantage at all. The very thing you’re betting on is increasingly mass-produced at near-zero cost.
Four layers: infrastructure, tokens, harnesses, and alpha. Jensen, the enterprise analysts, and you are each standing at a different layer of this same stack. And the view from each place is very different.
The infrastructure perspective
Jensen stands at the centre of the infrastructure layer. His pitch is about the economics of creating tokens: better chips, higher throughput, more efficient production. “Revenue = Tokens per Watt × Available Gigawatts” is a formula for how much money this infrastructure makes.
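Taken at face value, the formula can be sketched in a few lines. One quiet assumption it carries: tokens per watt times gigawatts gives you tokens, not dollars, so a price per token has to enter somewhere. Every number below is an illustrative assumption of mine, not a figure from the keynote:

```python
# A rough sketch of "Revenue = Tokens per Watt × Available Gigawatts".
# All figures are made-up assumptions for illustration only.

tokens_per_watt_second = 50        # assumed: tokens produced per watt, per second
available_gigawatts = 2            # assumed: datacenter capacity brought online
price_per_million_tokens = 0.50    # assumed: dollars charged per million tokens

watts = available_gigawatts * 1e9
seconds_per_year = 365 * 24 * 3600

# Tokens per watt × watts × time = total token output.
tokens_per_year = tokens_per_watt_second * watts * seconds_per_year

# The formula only becomes revenue once tokens are priced.
revenue_per_year = tokens_per_year / 1e6 * price_per_million_tokens

print(f"{tokens_per_year:.3e} tokens/year")
print(f"${revenue_per_year:,.0f} revenue/year")
```

Even with modest assumed numbers, the output scales linearly in both efficiency and capacity, which is exactly why the pitch lands with investors: improve tokens per watt, or add gigawatts, and revenue moves in direct proportion.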
At GTC 2026 he even took this one step further. He proposed allocating token budgets as a form of compensation for employees.
“I’m going to give them probably half of [their base pay] on top as tokens, because every engineer that has access to tokens will be more productive”.
The irony is that he’s pitching the purchase of machine cognition as a benefit to the very people it will eventually replace.
And then he said the more impactful part:
“I have 42,000 biological employees, and I’m going to have hundreds of thousands of digital employees.”
Jensen is not wrong about any of this. If you own the infrastructure, then tokens are the product, and more tokens means more revenue. That’s absolutely correct at his layer. His audience is investors and hyperscalers, and for them this story makes perfect sense.
But by framing tokens as compensation and productivity, he’s only telling the augmentation story without acknowledging where that trajectory takes us.
For many people, right now, these tokens will be a big benefit. But “Hundreds of thousands of digital employees”? That is clearly not just an augmentation story.
The enterprise perspective
In contrast, Dignan and the CIOs are standing at the harness layer. They consume tokens to produce business outcomes. Their question is pretty simple:
“How much does this cost and when do I see a return?”
Dignan’s critique of Jensen is correct. Most companies do not sell tokens. They buy them, and they want to buy them cheaper. This is the harness layer sending the infrastructure layer a clear message:
“Your economics are not my economics.”
That’s correct from their perspective. But it also highlights their focus on the diminishing price of cognition.
And that’s where your perspective comes in - the one that neither of them is addressing.
The individual’s perspective
I’ve been writing about this from the third position - the token layer itself. Not who creates the tokens, and not who spends them, but what they are. And what they are is a unit of cognitive work. The same cognitive work that we humans used to be the sole providers of.
This is the perspective I explored in Are You a Horse? and that’s an important part of what I’m exploring here in Flux. When you treat these tokens as units of cognitive work, your question stops being about who captures the margin, or what the enterprise TCO looks like. It gets much more personal:
What do I have that infinite tokens can’t reproduce?
This moves you away from thinking about what AI can replace today. Framing it as “infinite tokens” forces you to ask what they could possibly reproduce, and what you have that holds a real, durable advantage.
Jensen doesn’t need to ask this question. He sells the infrastructure. The CIOs don’t need to ask it either - they’re buyers just optimising their costs.
But if it’s your cognitive work being replaced by what this infrastructure produces and the enterprises buy, you have no choice but to ask it. And almost nobody in the tokenomics conversation is calling this out.
What few people are saying out loud
When a CIO says “tokens are just a cost centre, not a revenue generator”, they’re also saying something they don’t seem to have quite realised. In effect:
“I’m buying machine cognition instead of human cognition, and it’s cheaper.”
That’s not a critique of tokenomics. That’s tokenomics working exactly as Pesce’s framework describes - viewed from the buyer’s side of the equation.
Every enterprise that treats tokens as a cheaper substitute for human cognitive work is just one more data point in the displacement pattern. Right now, they don’t need to sell tokens to benefit. They benefit simply by not hiring the people whose work these tokens replace. This value shows up as headcount reduction, expanded margins, and projects done by three people instead of thirty.
Jensen’s GTC pitch just makes this all more visible. In a single presentation, he describes token budgets as a perk for his engineers, and then describes “hundreds of thousands of digital employees” as Nvidia’s future workforce. He’s announcing the substitution and pitching it as a benefit. He doesn’t need to think about what happens to the people on the other side. From the infrastructure layer, he genuinely doesn’t have to.
Goldman Sachs estimates that AI could automate 25% of all work hours. Howard Marks calls the shift to autonomous AI “what separates a $50 billion market from a multi trillion dollar one.” And when you follow the logic all the way down, those estimates are likely very conservative. But these numbers aren’t hiding. They’re already out there for everyone to review. It’s just that they’re not part of the conversation that Jensen and the enterprise analysts are having with each other.
Three conversations, one word
Infrastructure talks to the investors. Enterprise talks to the vendors. And almost nobody in either conversation is talking about you - the knowledge worker.
Jensen sees the infrastructure. The CIOs see their costs. And neither of them needs to look at the token layer itself. At what a “token” actually is, and what it replaces.
We really need to discuss all three perspectives at once. They’re not competing narratives. They’re just different viewpoints into the same model, and the one getting the least attention is the one that affects the most people.