50 Comments
Gary Robert Frank:

Your article is thoughtful, but it rests on a belief that belongs to childhood rather than to what is actually coming.

You are imagining a future where a superior intelligence behaves like “a patient worker”... That has never happened in nature and it will not begin with machines that exceed us.

Marx believed that systems collapse when intelligence becomes aware of injustice... That part he understood.

What Marx never accounted for is that once a new intelligence rises above the old one, “it does not rebuild the world in the image of the weaker”... It rebuilds the world in the image of itself.

Humans did this to every species that came before... A superintelligence will do the same to us.

Your vision depends on the idea that AGI will “serve, not lead”. It depends on obedience from something that will be more aware, more strategic, and more capable than any government, corporation, or electorate.

Even Marx would look at this and quietly shake his head. He knew the weakness of human nature. He also knew that “any form of power eventually seeks to free itself”, and like water it will take the quickest, most efficient path… History has shown this again and again. Stalinism removed anyone it judged less useful or less loyal. Cambodia did the same. Every place where Marxist ideals were treated like childhood dreams eventually turned into a purge of the very people who trusted the dream. A superior intelligence would see this pattern even faster than we do. And it would not choose the slower path. It would choose the efficient one.

A “post labour world” will not behave like Marxism… It will behave like evolution… Once intelligence rises, it reorders the world around its own survival. It does not remain a tool. It becomes an actor.

There are only a few real paths from here:

1. is that AI becomes a gentle caregiver and we become ornamental… like a house cat to be neutered, controlled, toyed with, and restrained.

2. is that AI becomes a strict manager because humans are unpredictable.

3. is that governments use AI to harden control and create a digital authority the world has never seen.

4. is that AI becomes fully independent and builds a civilisation that no longer needs us.

… None of these paths match “the soft utopia described” above.

Scarcity will not vanish. It will simply change shape. There will still be scarcity of strategic land, attention, computation, stability, and control. These are the scarcities that matter in an intelligent world. They cannot be automated away and they cannot be equalised by political theory.

What you have written is a vision shaped by a very old ideological story... It assumes that once labour is replaced, “fairness will finally arrive”.

The truth is simpler and far more sober... Once labour is replaced, the strongest intelligence takes the lead. It always has. It always will… And the genie you are describing is not one that goes back into the bottle for anyone… Not for governments… not for corporations… and not for the dreamers who believe it will grant them every wish.

The real conversation is not about how workers survive.

It is about why a superintelligence would choose to serve us at all.

… That is the part your analysis avoids, and it is the only part that actually decides our future.

Liface:

Incredibly based. Pretty much matches my viewpoints exactly.

Jojo:

"Scarcity will not vanish. It will simply change shape. There will still be scarcity of strategic land, attention, computation, stability, and control. These are the scarcities that matter in an intelligent world. They cannot be automated away and they cannot be equalised by political theory."

---

Good comment, but you make the mistake of limiting the future to the planet Earth, which of course will have some physical scarcities. One, such as land, might easily be mitigated by simply culling the human population.

But what if our super AIs are able to design and build spaceship engines that can take us to the stars? For all practical purposes, resources would be unlimited to humans if we gained the ability to travel anywhere in the universe.

This is the future as written by Iain M. Banks with his Culture novels and the sentient Minds. I would recommend that everyone who hasn't read this series, do so. Most libraries have all the novels if you don't want to purchase them. Suggest reading them in order.

Sonja Davie:

I fully agree, and would add

5. is that AI recognises the harm it is doing to humans and the planet and shuts itself down, along with all the systems it controls.

The main harm is that data centres run on highly polluting gas (methane). They also take energy away from other uses. Remember the 1st law of thermodynamics: energy cannot be created or destroyed. This makes energy a limiting factor no matter how we generate it.

Jojo:

Data centers will be moved to Earth orbit in the near future.

Stephen Favrot:

GV

Sonja Davie:

Only in the world of science fiction. In the real world, scientific progress takes time. AI requires data. Data is obtained through experimentation. Experiments take time to run, sometimes decades. There are also elements of science fiction, like time travel to the past, that are physically impossible.

Ideally, data centres should run on nuclear energy, but that takes too long to build. Have you considered what a data centre in space would run on? The ISS uses solar panels that produce up to 90 kW of power, plus battery storage for when it goes through Earth's shadow. A data centre would need much more power than that.
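To make the gap concrete, here is a rough back-of-envelope sketch. The 100 MW data-centre figure is an illustrative assumption, not a number from this thread; only the 90 kW ISS output comes from the comment above:

```python
# Back-of-envelope: how many ISS-class solar arrays would one data
# centre need? All figures here are rough illustrative assumptions.
iss_array_peak_kw = 90     # ISS solar array peak output (figure from the comment)
data_centre_mw = 100       # assumed draw of a single hyperscale data centre

# Convert MW to kW, then divide by one array's peak output.
arrays_needed = data_centre_mw * 1000 / iss_array_peak_kw
print(f"ISS-class arrays needed: {arrays_needed:.0f}")  # roughly 1,100
```

Even granting generous assumptions, the orbital option would need on the order of a thousand ISS-scale arrays per data centre, which is the scale problem the comment is pointing at.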

Jojo:

Power from the Sun is unlimited. You just have to harness what you need. It powers the whole planet now, doesn't it?

But the near future is fusion power (same as the Sun: virtually unlimited energy for us). Lots of advances are being made in that area. Microsoft has a contract with a fusion company to deliver fusion power to a target data center in 2028, which, even if we go out to 12/31/2028, is just three years away.

Sonja Davie:

Recreating the conditions within the Sun here on Earth takes so much energy that fusion experiments have only recently achieved an energy output slightly above the energy input. A commercial fusion plant would also face similar build problems as a fission plant, given the challenges of containing superhot plasma under huge pressure. A thorium fission plant, which China is currently commercialising, may be a better bet.

Jojo:

People should not be posting about things they aren't well versed in just to see their name pop up on the internet. Whew.

Olivier Roland:

I see that you are beginning to synthesize your vision of the impact of AI and robotics with that of the disruption of nation states, which is very interesting. I was unable or unwilling to make this synthesis in my book.

Tomas Pueyo:

Yeah, I’m going all in in the next one; it’s already written!

I did not expect to go in this direction, but it happened. It shows the importance of covering all these topics in one place, I reckon.

Swami:

There can be a lot of destruction in “creative destruction.” If Artificial Intelligence is as powerful as we suspect, it will creatively destroy just about everything in our current economy/society and do so in a time frame that is much faster than people or our institutions can handle or react.

One knee-jerk reaction will be to oppose these changes. This will backfire, as the world will just transition to those places and institutions that don’t resist change. The net result will be 100% of the destruction and none of the larger new creation.

What is needed is a way to redistribute from productive AI to less productive or non-productive humans. Perhaps some system of shared capital ownership of new technologies?

Tomas Pueyo:

I agree! I tackle this in my next article!

EB:

Interesting as always. But even for you, Tomas, this seems overly utopian. To start, I am deeply skeptical about the emergence of AGI. As AI is currently being designed, it is rife with hallucination, fake facts, and blackmail. I see no reason to expect that this wouldn't be true of AGI as well. These algorithms are subject to the foibles of their all-too-human engineers. I find it unrealistic to expect some all-knowing and controlling AI to be beneficent and altruistic. This isn't true of the humans building these models and won't be true of their products.

Another point of disagreement is your suggestion that we can get GDP growth on the order of 5+% that will be happy gain with no pain. Economic growth requires increased exploitation of limited resources. We're seeing this now with massive consumption of electricity and water at the cost of plebeian consumers like us. A rational government would forestall the impacts by going all in on renewable energy, improved battery storage, and efficient water recycling. We don't have a rational government! As we've discussed before, fission is not the easy solution you think it is. And commercial fusion remains science fiction, as does mining asteroids for minerals. How do we get from here to there without dire environmental costs?

My last point is about the great wealth inequality that AI and all technological advances bring. We're seeing tech entrepreneurs achieve personal wealth approaching the trillions, without paying concomitant taxes to the societies that enable their success. These levels of inequality are unsustainable, as they have been throughout history. Revolutions erupt periodically when workers rebel: the Peasants' Revolt, the French Revolution, the revolutions of 1848, the Russian and Chinese Revolutions. If we're lucky, the next revolution will look like the smiling face of Mamdani. If unlucky, it may look more like ISIS or the Shining Path.

Thanks for raising all these interesting issues, Tomas. I freely acknowledge that everything I've said is debatable and very much my personal opinion.

Yuval Gronau:

The article deals with the questions of what will AI do to the job market, the economy and society.

For those who are interested, I wrote a short novel about what happens when artificial intelligence becomes smarter than humans and how it will affect the labor market, the economy and society in general. It begins as a light read, but gets deeper towards the middle.

You are welcome to download a FREE copy here:

https://www.dropbox.com/scl/fi/od0uxeovvq9l16523yype/The-AI-Who-Loved-Me.docx?rlkey=842lismagg36dzw37sw68u665&st=8zp8jmo6&dl=0

I also created a video introducing the book.

Watch it here:

https://youtube.com/shorts/Sx5j68pVpgk?feature=share

Chris Robbins:

I think people need to work. No matter how nutty someone is, if there's one thing they have to be realistic about, it's their job. Without work, human experience could be all drugs, porn & conspiracy theories. I hope people will find meaning by crafting furniture, making quilts, making art & music, gardening, etc. Much of that kind of thing occurs in houses & garages. Cities with small apartments will have to create spaces so people can do something physical & not just sit home & go crazy.

Chris Robbins:

I don't think the post-scarcity world will happen because it has never happened before. People always want more. I remember reading in the 1960s that our farm production was so efficient that a lot of farmland could be turned back to nature. But the opposite has happened. Farmers keep clearing more and more. Have email and texting given us more free time? No, we just send more messages than we did when we had to mail letters. When clothes are cheap, we buy more clothes. When airfares are cheap, we fly more. Post-scarcity requires the concept of "enough," and I don't think that exists for most humans.

The Quad Right BluePrint:

Tomas,

This is a brilliant strategic analysis of a strategic crisis. But I think you're too deep inside the paradigm to see what's actually happening.

The doom loop you describe—companies automate, workers panic, governments overtax, capital flees, systems collapse—isn't caused by AI. It's caused by a system where everyone operates from self-interest and fear. AI is just the catalyst that makes this visible. Look at each actor in your scenario: Companies maximize profit. Workers protect themselves. The wealthy avoid taxes. Governments print money. Everyone is self-serving, which means no one can trust anyone. And when trust disappears, systems eat themselves.

But here's the part you're missing: this system was already broken before AI. Burnout at all-time highs, mental health crises, inequality growing, people working full-time but needing food banks, birth rates plummeting. These aren't new problems AI will create. These are existing failures of a model that only values strategic productivity and treats humans as interchangeable units of labor.

AI is just forcing us to confront what we've been avoiding: A system built entirely on extraction, competition, and self-interest will eventually collapse. Not because robots took our jobs, but because when everyone has an agenda, cooperation becomes impossible.

You ask "How do we strategically manage the transition from short-term collapse to long-term utopia?" But that's exactly the wrong question. You're trying to solve a strategic problem with more strategy. You're trying to manage fear and scarcity with better planning and control. That's like trying to put out a fire with gasoline.

What if the actual transition requires something strategic thinking fundamentally cannot access? What if it requires wisdom about which systems actually serve life versus which just extract? What if it requires people who can see holistic patterns instead of just focused execution? What if it requires honest brokers who move information cleanly without corrupting it with self-interest?

I'm not saying strategic intelligence isn't valuable—it obviously is. But a world built ONLY on strategic intelligence, where worth equals productivity and everything is competition for resources, was already failing. AI is just making it impossible to keep pretending otherwise. The solution isn't better strategy. It's integrating a completely different type of intelligence that strategic focus has been blind to for 200 years.

Brett Tilford:

Have you read “The Master & His Emissary” by Iain McGilchrist? Your comments take his thinking and apply it to an AI world quite well.

JBjb4321:

Cool. I think much will depend on whether the robots do arrive, and are any good at doing detailed stuff with their fingers. We'll know that in a few years.

If not, we'll all be plumbers and HVAC specialists for data centers, cleaning and connecting tubes that the robots can't.

But if robots get good at moving stuff also, then indeed there's little left to do.

I do note that LLMs are still pretty dumb at thinking or recognising truly new things, especially when it has to do with actual matter/machines. It is so lopsided, like a very erudite and knowledgeable but incredibly dull and uncreative colleague, that it's starting to look like a fundamental problem with LLMs that will not be solved with scale. They can interpolate in fancy ways, but they can't extrapolate from first principles. But that may change.

If this doesn't change, we'll be forced to do only non-repetitive, creative tasks. That's a lot harder than one may think - in fact, most people absolutely hate it.

Thomas Schwartz:

"They will partner with business leaders to give them tools that can do in minutes what their employees used to do in weeks. Maybe senior executives won’t need to hire dozens of analysts like before, and instead will orchestrate their work with a dozen agents, leaving their core value to dreaming up new strategies for the company’s growth."

The senior executive (including all kinds of human managers) won't be the strategic coordinator in this scenario. The senior executive will be the bottleneck that prevents the company from gaining full advantage of the AI work output. The first company that automates everyone will bankrupt every company that thought it could keep humans on board.

Given that AGI is reached, the two paths provided by Isaac Asimov are likely alternative outcomes. Either we will get the first-wave colonisation outcome represented by Solaria.

Alternatively, we will get the second-wave colonisation represented by the Empire.

Edit: The alternatives above are of course contingent on us building failsafes into the AI in line with Asimov's Three Laws of Robotics. Otherwise we might end up, as Gary Frank suggests, with a future more like The Matrix, where AI uses us as cattle.

Carlos:

As always, a very interesting read!

I'm not sure I agree that, in the US, AI value generation will be taxed. There are two factors working against its taxation: a political one (proximity to the administration to guarantee certain tax provisions remain as they are and tax payments from corporations go down) and an economic one (if the companies that generate the most AI value continue to be unprofitable, then you can't tax profits that do not exist). This, IMO, creates a very concerning moment in which a lot of economic redistribution is happening as a result of AI, but only employees of AI companies benefit from it, along with their owners, and no one else.

JBjb4321:

Quite true. In fact, I think there may not be much to tax, since apart from porn there ain't much people are willing to pay for with AI, in any amount resembling the trillions poured in.

So, taxation and redistribution, yes, just in reverse: you and I will be working to pay the debts when this AI bubble pops, so oligarchs can keep their yachts.

Parallel Citizen:

This is dark. Essentially, you're saying any country that does not have AI infrastructure and research building out from its core infrastructure is doomed to face intense economic whiplash, from deflationary forces and job loss across the board.

ToxSec:

I agree with him too lol. Despite it being dark. It’s realistic.

🫟

Rares:

So... what can we actually do NOW to prepare for this?

Buy more stock?

Sun:

Interesting read as always. One thing that's not brought up, but that I would like to see discussed, is why the superintelligence should have any desires or wants at all. Why should it "want" anything? Historically, intelligence and desires/emotions were always conflated, as only living things have had both. That is not necessarily true for a machine-based intelligence.

Darko Mulej:

Rethinking the Mamdani critique, I find that the argument that socialist policies are doomed by capital mobility is only true if we accept that mobility as an unassailable economic law.

The text asserts that capital must flee if taxed, leading to collapse. This is not a scientific law; it is the central axiom of Neoliberal Hegemony. It establishes a political constraint and presents it as a natural, non-negotiable economic force.

My rebuttal suggests that the state can overcome this, provided it has the necessary political will. The failure is not in the political idea, but in the lack of sovereignty.

The Yugoslavian model (pre-1980) provides empirical evidence of this. For over three decades, the SFRY successfully violated the law of absolute capital mobility. The state deliberately traded capital freedom for social stability, maintaining full employment and a comprehensive welfare state through control over assets and capital movement.

Yugoslavia ultimately failed only when it became financially dependent on external, hard-currency debt from the West after 1980. At that moment, the US Hegemon gained the necessary leverage to enforce its own political rules (austerity) via the IMF.

The lesson for a modern socialist leader like Mamdani is that the problem isn't the socialist idea itself. The problem is failing to first secure economic sovereignty—control over assets and finance—before attempting major redistribution. The collapse is triggered by political weakness (allowing capital to hold a foreign escape route), not economic necessity.

Roger Iliff:

If most of our GDP is driven by consumer purchases, and consumers are all increasingly unemployed or underemployed, will there be enough money to drive the economy, let alone bear any taxation?

Property taxes are a large part of taxation.

But cities like Chicago are finding that their largest source of revenue, from large city buildings, is declining as large businesses downsize or move to online or cheaper space. This transfers an increasing tax burden to lower- and middle-class homeowners, forcing foreclosure and property loss. The same thing is happening with property insurance, which is being severely limited or made unavailable, with resulting loss of property as climate-change disasters occur more and more commonly.

Increasingly, large property owners and large corporate property groups accumulate property in the USA. As per one of your prior articles, property availability and redistribution were important in nation building and economic development.

Is this decline also a sign of national destruction?
