February 9, 2026

The Real AI Risk Isn't the Robot. It's the Price Drop in Tokens.

Most people are scared of the wrong thing.

When the conversation turns to AI danger, it gravitates towards familiar territory: sentient machines, mass job displacement, autonomous weapons, existential risk. These are good narratives. They have clear villains and dramatic stakes. They also largely miss what is actually happening.

The thing worth being concerned about is not what AI can do. It is what happens when running AI costs nearly nothing.

In March 2023, accessing GPT-4 cost $30 per million input tokens and $60 per million output tokens. By mid-2024, GPT-4o Mini had reduced that to $0.15 and $0.60 respectively. DeepSeek R1, released by a Chinese research lab, undercut the entire market at $0.55 per million tokens while delivering near-frontier reasoning capability. According to Epoch AI, achieving GPT-3.5-equivalent performance became 280 times cheaper between November 2022 and October 2024.
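The scale of that collapse is easier to feel as a bill than as a ratio. Here is a minimal sketch using only the per-million-token prices quoted above; the workload size (one million input tokens plus one million output tokens) is an assumption chosen for round numbers, not a real benchmark:

```python
# Illustrative cost comparison at the per-million-token prices quoted above.
# The 1M-in / 1M-out workload is an assumed size for the sake of example.

PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "GPT-4 (Mar 2023)": (30.00, 60.00),
    "GPT-4o Mini (mid-2024)": (0.15, 0.60),
}

def job_cost(input_m, output_m, in_price, out_price):
    """Dollar cost of a job measured in millions of tokens."""
    return input_m * in_price + output_m * out_price

for name, (in_price, out_price) in PRICES.items():
    cost = job_cost(1, 1, in_price, out_price)
    print(f"{name}: ${cost:,.2f} for 1M in + 1M out")
# GPT-4 (Mar 2023): $90.00
# GPT-4o Mini (mid-2024): $0.75
```

On this hypothetical workload the same job falls from $90 to 75 cents, a 120-fold drop in roughly eighteen months, before even counting the further cuts from DeepSeek-class pricing.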

Andreessen Horowitz has documented the inference cost decline at approximately 10x per year — faster than Moore's Law and faster than the decline in internet bandwidth costs during the early web era. They called this "LLMflation."

This is not a projected trend. It has already happened. The cost floor has already moved. And the consequences of that move are only beginning to land.

William Stanley Jevons identified this dynamic in 1865 when he noticed that making steam engines more efficient did not reduce coal consumption. It increased it, because cheaper operation made previously uneconomical applications worth running. The same dynamic applies to AI.

Even as per-token costs collapsed, average monthly AI spending across organisations rose 36%, and the share of companies spending more than $100,000 per month on AI doubled. Agentic tasks and multi-step reasoning chains consume upwards of 100 times more tokens than simple queries. The cheaper each token becomes, the more tokens get burned.
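The Jevons arithmetic can be made concrete with the two figures above: prices falling roughly 10x per year while agentic workloads burn roughly 100x the tokens of a simple query. The baseline price and query size below are illustrative assumptions, not measured values:

```python
# A toy Jevons-paradox calculation using the ratios cited above.
# Baseline price and query size are assumptions for illustration.

price_per_m_tokens = 0.60   # assumed output price today, $/1M tokens
tokens_per_query_m = 0.001  # ~1,000 tokens for a simple query, in millions

simple_cost = tokens_per_query_m * price_per_m_tokens

# One year on: per-token price drops 10x, but the task has become an
# agentic chain consuming 100x the tokens of the simple query.
agentic_cost = (tokens_per_query_m * 100) * (price_per_m_tokens / 10)

print(f"simple query today:      ${simple_cost:.6f}")
print(f"agentic chain next year: ${agentic_cost:.6f}")
```

Under these assumptions the per-task spend rises tenfold even as each token gets ten times cheaper, which is exactly the pattern in the aggregate spending numbers above.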

This matters because it means the transition is not gradual. It is not a slow substitution of human work by machine work. It is a step change that occurs when the economic friction that previously kept humans in the loop disappears.

The concern is not that AI becomes powerful. The concern is that AI becomes ubiquitous because it becomes free.

Content at industrial scale becomes trivial. When generating a thousand articles, product descriptions, legal summaries, or social media posts costs a fraction of a penny, the economics of every information industry shift. The throttle on volume was always cost. Remove the throttle and volume becomes essentially unlimited.

Automated decision-making stops being a premium product. Hiring screening, loan assessment, medical triage, benefits administration — these processes have kept humans in the loop partly because of genuine regulatory requirements, but also because automation had a cost. As that cost approaches zero, the case for human review weakens not because anyone decided to remove it, but because the economic argument for keeping it evaporates.

Manipulation becomes industrialised. Political influence operations, social engineering, fraud, and targeted disinformation are not new problems. But cost has always been a natural throttle on their scale. A campaign that required a hundred people to run becomes a campaign that requires a budget line and an API key.

Power concentrates at the infrastructure layer. Training a frontier AI model costs hundreds of millions of dollars. Using one costs a fraction of a cent. This creates a specific and underappreciated asymmetry: the ability to build AI consolidates within a small number of well-capitalised organisations, while the ability to deploy AI becomes universal. The few who control the infrastructure determine the rules. The many who consume it do not.

The sci-fi risk narrative requires AI to have agency. To want something. To choose to harm. This is a useful story because it has a clear protagonist and antagonist, and because it suggests that alignment research is the key variable.

The economic risk requires none of that. It does not require AI to be sentient, or to have goals, or to defect from human instructions. It only requires AI to be cheap enough that the organisations and individuals deploying it no longer have a financial reason to include humans in the process.

The robots do not need to decide to take your job. They just need to make doing your job cost a fraction of a cent.

The questions that matter are not variations on "is this AI smarter than me?" They are: what happens to the value of my cognitive output when producing an equivalent output costs nothing? Who controls the infrastructure on which all of this runs? What governance structures exist when the cost of deploying AI at scale drops below the threshold of meaningful decision-making?

OpenAI was reportedly on track to lose $5 billion in 2024. Anthropic was projected to be $2.7 billion in the red in 2025. Prices this low are not sustainable without subsidy or consolidation. The current pricing is a land-grab. When consolidation happens and investor pressure for profitability forces a correction, whoever controls the infrastructure at that moment holds the leverage.

That is the risk worth watching. Not whether AI becomes conscious. Whether it becomes cheap enough that no one notices when it replaces the human in the loop, and powerful enough that the people who built it can name their price when the market matures.