Nobody Is Talking About the AI Exponential

The weekly AI discourse focuses on individual model releases while missing the exponential curve underneath, and human cognition is wired to underestimate it.

Damien Healy

Every week, the AI conversation follows the same pattern. A new model drops. Benchmarks are published. Commentators weigh in on what it can and can't do. Someone declares it overhyped. Someone else declares it transformative. Then the next release arrives and the cycle repeats.

It's all real. It's all interesting. And it almost entirely misses the point.

What nobody is discussing is the long-term curve. Not what AI can do this week, but where the trajectory has come from and where it's going. The trend that makes each individual announcement look small. The thing that, once you actually see it, makes the weekly discourse feel like arguing about individual waves while a tide is coming in.

I think I know why. Humans genuinely cannot reason about exponential growth intuitively. It's not a knowledge gap. It's a cognitive one. And it probably explains why even the sharpest commentators keep covering the latest release rather than the underlying pattern. They're as susceptible to this bias as everyone else. So am I, when I'm not paying attention.

This article is about the curve.


When I was a kid, our family got its first home computer. It had 256 kilobytes of RAM. No hard drive. It was remarkable at the time.

That machine had roughly 29,000 transistors inside it. A high-end GPU today contains 92 billion. Same underlying principle. Just relentlessly doubled, over and over, for fifty years.
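As a back-of-envelope check on those figures, the jump from 29,000 transistors to 92 billion works out to only about twenty-two doublings in half a century:

```python
import math

# Figures from the text: ~29,000 transistors in an early home computer,
# ~92 billion in a current high-end GPU.
old_transistors = 29_000
new_transistors = 92_000_000_000

doublings = math.log2(new_transistors / old_transistors)
print(f"{doublings:.1f} doublings")  # roughly 21.6 doublings over ~50 years
```

Twenty-two doublings in fifty years is one roughly every two and a half years, which is why each individual step felt so unremarkable while it was happening.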

Here's the thing, though. Nobody noticed. Not really. Because for most of that journey, the doublings didn't feel like much. You got a faster spreadsheet. A slightly better game. Each step forward was interesting but not world-altering. The base was too small for the doublings to matter at human scale.

That's no longer true.


There's an old puzzle about a chessboard and grains of rice. Place one grain on the first square, two on the second, four on the third, and so on. By the halfway point, you have about four billion grains. Interesting, but manageable. Then you keep going. By the time you reach the final square, you have more rice than has ever been grown in human history. The rules don't change. The doubling doesn't change. But somewhere past the halfway point, the numbers stop being numbers and start being incomprehensible.
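The arithmetic of the puzzle is easy to verify. Square n holds 2 to the power of n minus 1 grains, so the cumulative totals at the halfway point and at the end are:

```python
# Grains of rice on an 8x8 chessboard: square n holds 2**(n-1) grains.
grains_halfway = sum(2**n for n in range(32))  # squares 1 through 32
grains_total = sum(2**n for n in range(64))    # all 64 squares

print(f"{grains_halfway:,}")  # 4,294,967,295 -- about four billion
print(f"{grains_total:,}")    # 18,446,744,073,709,551,615 -- about 1.8e19
```

Thirty-two more squares turn four billion into eighteen quintillion. Same rule, same doubling, incomprehensibly different scale.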

We are past the halfway point.

Consider what's happened just in AI hardware. In 2018, the Summit supercomputer was the most powerful machine on earth. It occupied a space the size of a football field, consumed 13 megawatts of power, and cost hundreds of millions of dollars to build. A single NVIDIA GB200 rack available today delivers roughly 70 percent of that performance. One rack. The capability that required a national supercomputing facility six years ago now ships in a cabinet.

Or consider the compute used to train frontier AI models. From 2010 to 2024, the training compute for leading AI systems grew at roughly four to five times per year. Not four to five percent. Four to five times. Every year. Compounded over a decade, that's an improvement of around a million-fold. That's not a better version of the same thing. That's a categorically different thing.
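The million-fold figure follows directly from the growth rate quoted above:

```python
# Compounding training compute at 4x to 5x per year over a decade.
low = 4 ** 10   # 1,048,576  -- about a million-fold
high = 5 ** 10  # 9,765,625  -- nearly ten million-fold

print(f"{low:,} to {high:,}")
```

Even at the conservative end of the range, ten years of compounding crosses a million.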


The research on exponential growth bias is striking. When people are asked to estimate the outcome of an exponential growth scenario, roughly 65 percent underestimate the result, even when given the starting point and the growth rate. Even when they're told to be careful. Even when they've heard about exponential growth before. The bias runs deep because it's not a knowledge problem. We evolved to survive in environments where tomorrow looked like today. Our brains are genuinely good at linear extrapolation and genuinely poor at exponential reasoning.

So when most people look at AI and see something impressive but manageable, something that's improving gradually and can be engaged with at some comfortable future point, they're not being stupid. They're being human. They're applying the only mental model they have. And that model is wrong for this moment.


The investment numbers make the curve concrete. The five largest technology companies have announced combined capital expenditure of between $660 billion and $690 billion for 2026 alone. Not over a decade. In a single year. Amazon is spending $200 billion. Alphabet between $175 billion and $185 billion. Meta between $115 billion and $135 billion. Global data centre power consumption is projected to grow 165 percent by 2030. The industry is committing $3 trillion to build the infrastructure to support it. This is a civilisation-scale construction project.

These are not bets being made cautiously by people who think AI is gradually getting a bit better. These are the largest capital commitments in the history of corporate technology, made by people who are closest to the curve and understand where it goes.

The rice isn't running out. We're just getting to the second half of the board.


I've spent 25 years leading operations and technology across many businesses. I've seen technologies arrive with great fanfare and land with a thud. I've also watched the ones that genuinely changed everything. The pattern is always similar. Long, slow early progress. A period where observers conclude it's overhyped. Then an inflection. Then adoption so fast that the organisations that waited never quite catch up.

AI reached 50 percent adoption among knowledge workers within roughly 36 months of ChatGPT's launch. The fastest technology adoption ever recorded. And most people using it are still on the flat part of the value curve, using it to polish emails and summarise documents. The compounding hasn't arrived for them yet because they haven't changed how they work. They've added AI to old patterns. They haven't rebuilt around AI.

The doublings ahead are going to be the ones that matter. The base is no longer small.

Knowing about a bias and correcting for it are different things. The correction here is simple, even if it's uncomfortable. Stop waiting for the moment it feels undeniable. By then, you'll be looking at the second half of the board from the wrong side.

Your move, human.


Damien Healy is the founder of Qanara, an Australian AI consultancy helping businesses accelerate from strategy to impact. He writes about AI-native workflows, frontier AI capabilities, and practical transformation.
