Taxing AI to serve public good
If AI causes mass unemployment or severe fiscal shocks, elected officials and policymakers will need to act fast to limit the disruption. Fortunately, some of the most reliable and powerful options are also the most familiar.
Kevin O’Neil   23 Nov 2025

Hardly a day goes by without new headlines about how artificial intelligence (AI) is poised to transform the economy. Even if claims that “AI is the new electricity” prove to be exaggerated, we should still prepare for deep change. One of the most powerful and reliable mechanisms for ensuring that AI benefits society is also one of the most familiar: taxation.

What would an AI tax look like in practice? The most practical approach would be to target the key inputs and most tangible metrics of AI development: energy, chips or compute time. The United States already imposes a 15% fee on sales of specific AI chips to China, and though this is technically an export control, it shows how an AI input tax could work. Alternatively, others have suggested changing how we tax capital to account for AI-driven economic shifts. This would be an AI tax in spirit, but broader in form.

The structure of any AI tax would depend on what governments want to achieve. But one thing is clear: the current debate is far more grounded and urgent than it was when Bill Gates raised the idea of a “robot tax” in 2017, echoed later by Bernie Sanders and others.

Of course, some might ask why we should tax AI at all. The answer reflects two fundamentals about tax systems and how AI is changing the economy. First, many countries now tax human workers more heavily than their potential AI competitors in the labour market. In the case of the US, roughly 85% of federal revenue comes from taxing people and their work (through income and payroll taxes), while capital and corporate profits are taxed far less. Technologies like AI benefit from favourable treatment in the form of generous write-offs, low corporate rates and carve-outs.

Second, economists expect AI to increase the financial returns to capital relative to labour, even if it doesn’t cause unemployment. The most extreme version of this would entail AI agents that can design, replicate, and manage themselves – meaning that capital would be performing its own labour. Under current tax policies, such a shift would widen inequality and shrink government revenue as a share of GDP.

An AI tax could help level the field between humans and machines. Earlier this year, Anthropic CEO Dario Amodei warned that AI might eliminate half of all entry-level white-collar jobs and push unemployment to 10% to 20% within five years. Whether such forecasts are borne out may depend partly on policy. Taxing labour more heavily than capital tilts the scales toward automation that replaces, rather than augments, human workers. At the very least, we shouldn’t let our tax system help put people out of work.

Moreover, at a time when the fiscal outlook is darkening, an AI tax could protect public revenues from technology-induced shocks. If mass job losses or hiring slowdowns do occur, governments that rely on income and payroll taxes could face fiscal crises even if new AI-ready jobs emerge later.

More optimistically, the right tax policies – combined with an AI-driven productivity boom – could help fix structural fiscal problems. Rich countries are already struggling to fund health care and pensions for aging populations, while poorer countries face an inverse challenge: educating and employing large young populations despite thin tax bases. AI-generated revenue could be part of the solution for both.

Alternatively, revenue could be directed to AI-related causes. Hypothecated taxes, which send revenue back to the sector they come from – like the US gasoline tax that funds highways or the United Kingdom’s television licence fee that supports the BBC – underscore that the goal is to enhance the public benefits of the taxed technology. An AI tax could do the same: funding grid upgrades, education technology, worker training, open-source AI models, AI safety research or mental health protections.

An AI tax could also bolster unemployment insurance and retraining for displaced workers, or even advance broader AI policy goals. For example, it could discourage excess energy use, greenhouse gas emissions, “AI slop” or anticompetitive behaviour, or encourage new energy production and safer models.

Taxing AI may sound politically far-fetched. Policymakers do not want to curb innovation or lose ground in the global AI race. But that reluctance may fade as public awareness matures. If “winning” in AI means having healthier people, happier kids, a more capable workforce and stronger science – not just bigger models or richer companies – an AI tax could help deliver victory.

Nor is such a tax likely to stifle innovation. AI is not a fragile start-up industry. It is a 70-year-old technology that is now backed by the world’s largest corporations, with corporate investment exceeding US$250 billion in 2024 alone. An AI tax could be structured to ensure that it does not impede national security, market competition or research.

In any case, crises can change minds fast. If AI is blamed for mass unemployment or fiscal shocks, elected officials and policymakers across the political spectrum will want to act. Better to prepare good options now than improvise later.

As OpenAI CEO Sam Altman wrote in 2021, “The world will change so rapidly and drastically that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want.” Altman was speculating about the development of even more advanced artificial general intelligence, but his point already applies: Policy needs to keep pace with technology and anticipate change.

One way or another, AI will reshape our economies and societies. But the results are not predetermined. Whether we get a future where people and communities can thrive will come down to the policies we choose. Taxing AI is not about punishing innovation. It’s about ensuring that the rewards are shared and the risks managed in the public interest. The sooner we start that work, the better prepared we will be to use AI to create the future we want.

The views expressed in this commentary are those of the author and not of the institution he represents.

Kevin O’Neil is a managing director of new frontiers at the Rockefeller Foundation.

Copyright: Project Syndicate