The case for a token tax
AI could send our economy into a tailspin - here's a way out
Imagine a company that employs one thousand people. One hundred of them work in the marketing team. Some of them write scripts for TV ads, others write Facebook posts, some design billboards or bus shelter posters. A few of them liaise with event organisers and negotiate the price of booths at conferences.
As AI tools improve, the CEO of this imaginary company realises she can deploy a suite of AI agents to carry out many of the tasks that the marketing team are currently doing. These AI agents can write scripts and social media posts and emails, they can negotiate prices and schedule appointments. They can even design billboards and posters.
So the CEO axes half of the marketing team, replacing them with agents, and leaving the remaining humans to spend their time checking the AI’s work.
Over time, as the AI agents improve, they need less and less intervention. So the CEO slashes the marketing team again. Soon, the work of 100 people is done by just a handful. The output has remained exactly the same - maybe it’s even increased - but the cost has been slashed.
She is no longer paying salaries, insurance premiums, pension contributions or National Insurance.
The work of those 100 marketers can be replaced by AI models that not only do it in a fraction of the time, but for a tiny fraction of the cost.
Our imaginary company discovers that there is no longer a correlation between manpower and output, and begins cutting headcount further.
AI is better at coding than humans, so the engineering team takes a hit. It can draft contracts faster than the lawyers, take more calls than customer service reps, file taxes quicker than accountants. And as the number of humans dwindles, HR becomes obsolete and middle managers have no one to manage.
One thousand soon becomes one hundred, and 900 white collar workers are left holding cardboard boxes in the cold, wondering what the heck just happened.
Now expand this effect across an entire economy.
Companies cut costs by replacing humans with AI agents and see a short-term profit and valuation boost. Their stock prices shoot up as investors see their operating margins widen.
But growing unemployment means nobody is spending any money, and global recession follows. While companies' costs are now low, their revenues begin draining too - so they make further cuts. More people lose their jobs, unemployment rises, spending falls, and revenues shrink further.
It’s an unstoppable doom spiral.
The AI Revolution
AI has been compared to the development of the web and the dotcom bubble. But I think it's far more comparable to the labour market transformations of the late 19th and 20th centuries: first the industrial revolution, and then the pivot in Western economies from manufacturing to services.
AI is not a new medium through which to do business, like the web was, and it’s not a new industry that’s added to the previous slate, like dotcom and apps.
It’s a fundamental upending of every industry and the very nature of production and work.
The dotcom bust led to concentrated job losses and short-lived market upheaval - but the industrial revolution had far-reaching consequences for workers and jobs. It led to good - the five-day work week, workers' rights, unions - and to bad: mass unemployment during the Great Depression, and political upheaval in Britain during the mine closures.
AI, we can only assume, will do similar. But perhaps even faster and more dramatically.
In this essay, I’m going to set out three things:
Why I think it is an inevitability that AI will replace most knowledge work
How we can cope with the mass unemployment that this transition would create
What taxing ‘intelligence’ might look like, and the practical application of a token tax
I: Smarter AI
AI and the journey to a workerless future
There’s an assumption at the heart of the scenario I outlined in the introduction: that AI can actually replace humans.
But can it? For many years, there was an assumption that manual jobs would one day be replaced by automation - that robots would come for McDonald's cooks, warehouse workers and perhaps even high-skilled blue collar workers like painters and plumbers.
The widespread belief was that knowledge work and creative work were safe from automation.
But it now looks far more likely that AI will replace white collar writers, designers and lawyers first. Unfortunately for the UK, the services economy contributes more than 80% of GDP (as it does across much of the Western world).
But it is not a foregone conclusion that AI will do knowledge work better than humans. After all, much of this work is a matter of subjective taste. You'll always know if a house has been built properly, because a badly built one falls down. It's much harder to judge whether a writer has written a good essay.
Some argue that the outputs of AI are always, inevitably ‘slop’: that AI can never match the quality of human writing, animation or design. They suggest that hallucinations - AI’s tendency to invent or imagine facts - will never be overcome.
But that's copium. These AI naysayers and neo-luddites are the descendants of those who insisted that cars would never take off, that the printing press was an unnecessary luxury, that the Internet was a fad.
AI keeps getting smarter
AI is clearly getting dramatically better with every iteration.
They used to say that AI hallucinated every answer. But OpenAI was the first to develop a 'thinking' model, which dramatically improved the quality of AI output. Now, AI models tend to use chain-of-thought reasoning to produce accurate, high-quality outputs, where early models were simple next-word predictors.
They used to say that it couldn’t generate quality images. But Google, with Nano Banana, overcame some of the significant hurdles that had plagued previous AI image generators: generating clear text (and complete hands).
Here’s an AI image I generated in January 2025 - inconsistent, illegible text, unclear:
Here’s the same image generated with Google’s Nano Banana 2 model in March 2026:
The trend is the same and it is consistent: AI improves rapidly and exponentially. It is also self-improving: AI models are helping to build and develop new AI models, shortening the time it takes to develop and train their successors.
AI in its current form can certainly do some of the work previously managed by humans.
Jobs are already disappearing
The statistics bear this out: the number of entry level roles has fallen by a third since ChatGPT launched in November 2022. Youth unemployment now sits at 16% - and rising: one in six young people out of work. There are many causes for this, but for anyone who sits in an office all day it is fairly obvious that AI is among the most significant.
Here are some tasks I had to do recently - that I previously would have farmed out to a junior person in the company, but now ask an AI to do for me. Each takes minutes, rather than days:
Turning a list of names and email addresses into a spreadsheet and looking up job titles, companies and industries
Researching how stock sales are taxed in different European countries and producing a one-page summary
Proof-reading a PowerPoint presentation
Writing the first draft of a LinkedIn post for a CEO
Given the choice, and with budgets tight, most entry-level and graduate jobs are now being replaced with AI.
While I could write about what a shame this is and how it is potentially disastrous for the long-term success of the job market, I think it’s more impactful to look at what it suggests will come next.
If AI today can replace a university graduate, and we know that it keeps exponentially improving, it will next replace a mid-level worker. Then it will replace the boss. Eventually, the entire knowledge and services industry will be predominantly AI.
It is, of course, possible that this won't happen. AI progress could stall. The costs of operating AI could come to exceed those of hiring a human worker. Planes didn't keep getting faster and bigger; they maxed out. Automated car washes have not replaced hand car washes.
But the direction of travel is clear. We would be crazy not to prepare for the possible, or even plausible, eventuality of a country where the majority of work is done by AI, not humans.
II: Mass-unemployment
The case for UBI and the shift to human-centred work
Hourly, salaried work is not an inevitability. The 9-5 is a relatively recent invention in the history of humanity: it emerged with the Industrial Revolution and solidified during the Information Age.
In the feudal system of the Middle Ages, serfs worked the land in exchange for a home, protection and a share of the crops. In Ancient Mesopotamia, workers were paid not in money but in rations of food and drink. And there were no accurate clocks until the 17th century, so until then work was far more likely to be paid at a day rate or per-unit rate than an hourly wage.
The historical thread is that as value is created, anyone involved in the creation of that value receives a share of it, returned in some equivalent value. (That thread was severed by chattel slavery - which is why it is now viewed as morally repugnant.)
Where does the surplus value go?
As we enter the AI Age, we face a choice: if AI can create extreme surplus value, who gets to share in that value?
The obvious answer is that it is the creator and/or owner of the AI system that gets to keep the value. If I make an AI that sells widgets, I get to keep the profits. This is how we have historically dealt with the value creation from machines. If a robot can make a car, whoever owns the robot gets to profit from the sale of the car.
But AI is different from a machine because of its broad application. A machine can do one job over and over again. An AI can do many different jobs, in different settings, often with very little retraining required.
If AI creates value, and that value simply goes to whoever owns the AI, a very small number of people will get very wealthy - while a very large number of people will become unemployed and very poor.
This will lead to the doom spiral I laid out in the introduction. The owners of the AI systems will rub their hands together for a while, until they realise that the economy depends on consumers, and their own systems have just made consumers extinct.
This is not even a social or moral argument, it is simply a practical one: if AI triggers mass-unemployment, it will send the economy and society into a tailspin. If AI creates a very small number of winners, even those winners will eventually become losers.
How do you deal with that? You force the value created by AI to be shared by all of humanity.
After all, AI has, in a very literal sense, ingested more or less the entire body of work of the entire history of humanity. Every book ever written, every piece of art ever painted, every Reddit post ever made - it all fed into the creation of AI.
If there was ever an invention whose surplus value should be shared not just amongst its creators but amongst the entirety of humanity, it is almost certainly AI.
Universal Basic Income
There is a system to make this work: Universal Basic Income. And it's not a particularly new idea. Economists and writers have long proposed some form of Universal Basic Income so that all citizens could share in the value created from land - land that, for most of human history, had belonged to everyone and now belonged to a small elite.
In the 1940s, the Labour government created the modern 'welfare state' - a commitment from the state to care for its people 'from the cradle to the grave'.
The welfare state and all of its components - including the NHS - are now national treasures, and very difficult for governments of any stripe to reform or even adjust. A Universal Basic Income would represent a full-scale replacement of the welfare state.
Many people baulk at the idea of Universal Basic Income, but Britain already has a working model of one: the State Pension. The State pays a dividend to all people over pension age. It functions as a reward for the value they previously created, as a recognition that pensioners can no longer create value at the pace of younger workers but still deserve to reap its benefits, and as a floor ensuring that everyone is entitled to a level of dignity and quality of life even without working.
What happens when all of us can no longer create value at the same pace as our AI colleagues? The State should do the same thing: reward us for value previously created, enable us to reap the benefit of new value being created and afford everyone a level of dignity.
Practically, UBI would replace all current means-tested benefits with a single monthly payment that goes to every single adult. The amount should be enough for a quality life, with the option to work still available to those who want either the satisfaction of work or the additional income.
Study after study has shown a counterintuitive effect: people who receive a UBI tend to work more, and produce a higher quality output.
In an AI-driven world, there will be a surge in demand for care-based jobs like teachers and counsellors. These are jobs that, today, require long hours for little pay. A UBI means more people can work fewer hours in exchange for a meaningful uplift in their quality of life.
The historical trendline of centuries of human progress is less, easier work for more and more reward and comfort. Universal Basic Income is the very natural continuation of this trend.
UBI costs a lot of money. There would be some savings from scrapping the means-tested benefits system, which is astronomically expensive to run - for every £1 that reaches a benefit claimant, 22p is spent administering the transaction - but the cost of paying every citizen a livable income would far exceed any current welfare expenditure.
But, assess it not against the present but against the future alternative: mass unemployment, where the benefits bill soars at the exact same moment that tax income plummets. In that context, UBI suddenly looks cheap.
III: Taxing tokens
Intelligence as a utility and the surprising simplicity of taxing tokens
AI is, or hopes to become, the commodification of intelligence: the ability to purchase not goods, services or technology, but intelligence itself - historically something that could only be supplied by a human.
Technology has always done this: turned things that once occurred naturally into things that can be purchased. Pipes commodified water, power grids commodified heat, machinery commodified labour.
Now AI will commodify intelligence: something that previously occurred naturally, like a hot spring, can now be chopped into units and sold.
The “units” of intelligence are tokens. Tokens are the fuel that powers all AI models.
Everything an AI model does requires some quantity of tokens, and each token costs a fraction of a penny. To write a long novel, an AI model might need 150,000 tokens: a few dollars' worth.
Everyday users of AI don't interact with the token economy because our usage is so astronomically tiny. Just as Google incurs a charge for storing all your photos and emails, there's a cost to AI companies to let you use their chatbots. The cost is just so small, they absorb it and let you use their products for free.
But if you are a scientist using AI to analyse millions of data points, you might find yourself using billions of tokens. If you're a CEO replacing your entire team with agents, you will need millions of tokens every day. In those scenarios, the token bill can rack up surprisingly fast.
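The gulf between these two kinds of user is easy to sketch with back-of-the-envelope arithmetic. The per-token price below is an illustrative assumption, not any provider's real tariff:

```python
# Back-of-the-envelope token cost arithmetic.
# The price is an illustrative assumption, not a real provider rate.
PRICE_PER_MILLION_TOKENS = 10.00  # assumed: £10 per million tokens


def token_cost(tokens: int, price_per_million: float = PRICE_PER_MILLION_TOKENS) -> float:
    """Cost in pounds of consuming a given number of tokens."""
    return tokens / 1_000_000 * price_per_million


# A long novel at ~150,000 tokens: a pound or two at these rates.
print(f"Novel draft: £{token_cost(150_000):.2f}")

# A research workload burning a billion tokens: a very different bill.
print(f"Heavy workload: £{token_cost(1_000_000_000):,.2f}")
```

At the assumed rate, the novel costs about £1.50 while the billion-token workload runs into five figures - exactly the asymmetry between everyday chatbot users and heavy enterprise deployments.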
Intelligence as a utility and the levers it creates
Sam Altman, CEO of OpenAI, said recently: “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.”
This is undoubtedly the commercial model for all of the major AI companies - and tokens are the purchasable 'unit' of intelligence that anyone, whether a casual user, a superuser or a huge enterprise client, will buy.
This is fortunate for governments around the world - because it makes taxing AI easy.
A token tax would be a surcharge placed on the cost of one token and applied at the point of sale. At a 10% rate, if a hypothetical token costs £1, the tax due would be £0.10. The mechanics are very similar to VAT.
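Those mechanics can be sketched in a few lines. The rate and price here are the essay's hypothetical figures, not a real tariff:

```python
def token_tax(pre_tax_cost: float, rate: float = 0.10) -> float:
    """Tax due on a token purchase, applied at the point of sale like VAT."""
    return pre_tax_cost * rate


def total_charge(tokens: int, price_per_token: float, rate: float = 0.10) -> float:
    """What the buyer is billed: token cost plus token tax."""
    cost = tokens * price_per_token
    return cost + token_tax(cost, rate)


# The hypothetical above: one £1 token at a 10% rate adds £0.10 of tax.
print(f"Tax on one £1 token: £{token_tax(1.00):.2f}")
print(f"Total charge: £{total_charge(1, 1.00):.2f}")
```

Because the tax is a percentage of spend collected at the point of sale, it scales automatically with usage - light users pay pennies, heavy deployments pay in proportion.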
And in terms of technological adoption - this paper outlines some of the technical methodology for how governments might calculate and monitor token usage via “black boxes” that become a legal requirement for AI companies.
Tokens are priced differently based on the intelligence of the model, and the number of tokens used is proportionate to the complexity of the task being carried out. A quick back-and-forth with Gemini costs fractions of a penny - and the token tax on it would be minimal. In other words: this tax won't hit everyday users of AI.
But it will hit companies that deploy AI at large scale - the sort of scale at which human workers are replaced.
This gives governments two powerful levers for managing the impact of AI on employment.
Firstly, they can increase the price of using AI models to make it more comparable to the use of human employees. By raising the rate of token tax, you can slow down the pace at which companies replace their human workers with AI agents because the cost-benefit is suddenly reduced.
In other words, you can deliberately slow down the national use of AI by making it artificially more expensive with tax (like taxing cigarettes to deter smokers).
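To see how this lever works, consider the break-even point at which a taxed AI agent stops undercutting a human worker. Every figure below is an illustrative assumption:

```python
# Where does a token tax make an AI agent cost as much as a human?
# Both figures are illustrative assumptions, not real market data.
HUMAN_ANNUAL_COST = 40_000.0     # assumed: salary plus employer on-costs, £
AI_ANNUAL_TOKEN_SPEND = 5_000.0  # assumed: pre-tax token bill for the same work, £


def break_even_tax_rate(human_cost: float, ai_cost: float) -> float:
    """Tax rate at which AI cost, including tax, equals the human cost."""
    return human_cost / ai_cost - 1


rate = break_even_tax_rate(HUMAN_ANNUAL_COST, AI_ANNUAL_TOKEN_SPEND)
print(f"Break-even token tax rate: {rate:.0%}")  # prints "Break-even token tax rate: 700%"
```

The specific number matters less than the shape of the result: while AI remains radically cheaper than people, only a very high rate shifts the cost-benefit calculation - which is why this lever can slow substitution but not reverse it.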
Of course, this is not a sustainable long-term solution. The government should be motivated not just by protecting jobs, but also by maximising productivity and innovation. We shouldn’t discourage scientists from using technology that might discover cures for cancer, just to protect the jobs of the lab interns.
So the second powerful lever a token tax offers is a route to significant government revenue. If AI adoption skyrockets and causes mass unemployment, governments will lose a lot of income - revenue from National Insurance and income tax will plummet. At the exact same time, they'll need to spend a lot of money to remedy the unemployment: either on back-foot remedies (like large-scale spending on unemployment benefits) or on front-foot interventions (like the UBI I have proposed).
A token tax alone will not fully cover the costs of the interventions required, but it represents the sort of thinking that will be required in a future with less, or perhaps no, work.
IV: Conclusion
Cars have a corrosive effect on the streets. They damage roads, which require expensive repairs. They pollute the air and cause increased levels of asthma and other health problems. They need to be managed with traffic signals, and occasionally the emergency services have to deal with situations when they’ve gone wrong.
While cars have a lot of benefits, the damage that they do is expensive for the government to deal with.
So we make drivers pay, and we do it in a way that is proportionate to how much they use their car and therefore how responsible they are.
We achieve that through fuel tax.
Tokens are the fuel of AI systems. And when these systems have negative effects, like job displacement, a token tax can go towards reimbursing the public purse for the damage done.
After all, the impact has the potential to be astronomical.
Our entire state depends on taxes on income, and our entire economy depends on the population's ability to spend. If income stops, everything grinds to a halt.
Token taxes alone will not cover the full cost of the intervention that will be required.
But we cannot bury our heads in the sand: intervention will be required, and there are only two options. Do we react to the crisis when it hits, or do we prepare today by building the framework for an AI-powered future?
As we enter the AI Age, we need to think differently about work, tax and money. Else we sleepwalk into economic collapse.