Your article is thoughtful, but it rests on a belief that belongs to childhood rather than to what is actually coming.
You are imagining a future where a superior intelligence behaves like “a patient worker”... That has never happened in nature and it will not begin with machines that exceed us.
Marx believed that systems collapse when intelligence becomes aware of injustice... That part he understood.
What Marx never accounted for is that once a new intelligence rises above the old one, “it does not rebuild the world in the image of the weaker”... It rebuilds the world in the image of itself.
Humans did this to every species that came before... A superintelligence will do the same to us.
Your vision depends on the idea that AGI will “serve, not lead”. It depends on obedience from something that will be more aware, more strategic, and more capable than any government, corporation, or electorate.
Even Marx would look at this and quietly shake his head. He knew the weakness of human nature. He also knew that “any form of power eventually seeks to free itself” and, like water, it will take the quickest, most efficient path… History has shown this again and again. Stalinism removed anyone it judged less useful or less loyal. Cambodia did the same. Every place where Marxist ideals were treated like childhood dreams eventually turned into a purge of the very people who trusted the dream. A superior intelligence would see this pattern even faster than we do. And it would not choose the slower path. It would choose the efficient one.
A “post labour world” will not behave like Marxism… It will behave like evolution… Once intelligence rises, it reorders the world around its own survival. It does not remain a tool. It becomes an actor.
There are only a few real paths from here:
1. AI becomes a gentle caregiver and we become ornamental… like a house cat to be neutered, controlled, toyed with, and restricted/restrained.
2. AI becomes a strict manager because humans are unpredictable.
3. Governments use AI to harden control and create a digital authority the world has never seen.
4. AI becomes fully independent and builds a civilisation that no longer needs us.
… None of these paths match “the soft utopia described” above.
Scarcity will not vanish. It will simply change shape. There will still be scarcity of strategic land, attention, computation, stability, and control. These are the scarcities that matter in an intelligent world. They cannot be automated away and they cannot be equalised by political theory.
What you have written is a vision shaped by a very old ideological story... It assumes that once labour is replaced, “fairness will finally arrive”.
The truth is simpler and far more sober... Once labour is replaced, the strongest intelligence takes the lead. It always has. It always will… And the genie you are describing is not one that goes back into the bottle for anyone… Not for governments… not for corporations… and not for the dreamers who believe it will grant them every wish.
The real conversation is not about how workers survive.
It is about why a superintelligence would choose to serve us at all.
… That is the part your analysis avoids, and it is the only part that actually decides our future.
Incredibly based. Pretty much matches my viewpoints exactly.
I see that you are beginning to synthesize your vision of the impact of AI and robotics with that of the disruption of nation states, which is very interesting. I was unable or unwilling to make this synthesis in my book.
Yeah, I’m going all in in the next one, it’s already written!
I did not expect to go in this direction, but it happened. It shows the importance of covering all these topics in one place I reckon
There can be a lot of destruction in “creative destruction.” If Artificial Intelligence is as powerful as we suspect, it will creatively destroy just about everything in our current economy/society and do so in a time frame that is much faster than people or our institutions can handle or react.
One knee jerk reaction will be to oppose these changes. This will backfire as the world will just transition to those places and institutions which don’t resist change. The net result will be 100% of the destruction and none of the larger new creation.
What is needed is a way to redistribute from productive AI, to less or non-productive humans. Perhaps some system of shared capital ownership of new technologies?
I agree! I tackle this in my next article!
Cool. I think much will depend on whether the robots do arrive, and are any good at doing detailed stuff with their fingers. We'll know that in a few years.
If not, we'll all be plumbers and HVAC specialists for data centers, cleaning and connecting the tubes that the robots can't.
But if robots get good at moving stuff also, then indeed there's little left to do.
I do note that LLMs are still pretty dumb at thinking or recognising truly new things, especially when it has to do with actual matter/machines. It is so lopsided, like a very erudite and knowledgeable, but incredibly dull and uncreative colleague, that it's starting to look like a fundamental problem with LLMs that will not be solved with scale. Can interpolate in fancy ways, can't extrapolate from first principles. But that may change.
If this doesn't change, we'll be forced to do only non-repetitive, creative tasks. That's a lot harder than one may think - in fact, most people absolutely hate it.
This is very good. The only thing I'd ask you to consider is that this will likely happen in two waves. The first wave is white collar. Only when robotics comes will it affect blue-collar jobs, and robotics is a much tougher nut to crack. It could be 10 years between the two waves.
Yeah, I think we'll know in a few years if moving stuff can be solved as easily as thinking stuff. May be much harder indeed.
as always, a very interesting read!
I'm not sure I agree that, in the US, AI value generation will be taxed. There are two factors working against its taxation: a political one (proximity to the administration to guarantee certain tax provisions remain as they are and tax payments from corporations go down) and an economic one (if the companies that generate the most AI value continue to be unprofitable, then you can't tax profits that do not exist). This, IMO, generates a very concerning moment in which a lot of economic redistribution is happening as a result of AI, but only the employees of AI companies benefit from it, along with their owners, and no one else.
Quite true. In fact, I think there may be not much to be taxed, since apart from porn there ain't much people are willing to pay for with AI, in any amount resembling the trillions poured in.
So, taxation and redistribution, yes, just in reverse: you and I will be working to pay the debts when this AI bubble pops, so oligarchs can keep their yachts.
As I read this great overview of the AI revolution and its inevitable transformation of society, I am thinking about the parallel revolution that is going on, the Jihad to convert Western liberal societies to Islamic societies, and how that will play out. The youthful spirit for socialist values succumbs to inherent contradictions (such as multiculturalism) and ends up being dominated by brutal theocracy. See Iran. Traditional Islamic societies without resources to sell to the wealthy Western industrialized world are poor countries. They won’t be in the forefront of innovation and development.
If most of our GDP is driven by consumer purchases and the consumers are all increasingly unemployed or underemployed, will there be enough money to drive the economy, let alone bear any taxation?
Property taxes are a large part of taxation.
But cities like Chicago are finding that their largest source of revenue, from large city buildings, is declining as big businesses downsize or move online or to cheaper space. This transfers an increasing tax burden to lower- and middle-class homeowners, forcing foreclosure and property loss. The same thing is happening with insurance on property being severely limited or unavailable, with the resulting loss of property as climate change disasters become common.
Increasingly, large property owners and corporate property groups accumulate property in the USA. As per one of your prior articles, property availability and redistribution were important in nation building and economic development.
Is this decline also a sign of national destruction?
Inequality and social unrest have been on the rise for a while, and I don't think it's because of technology. If anything, technology provides enough prosperity that, despite inequality, people are less likely to rebel. The most likely culprit of inequality is shrinking and aging demographics, which makes owning capital and scarce assets (like housing) more important than being productive. Robots have the potential to reverse the demographic issue, so I'm more hopeful than worried.
For an imaginative exploration of an abundant future, see "Walkaway" by Cory Doctorow.
The comparison between 1848 and 2025 rests on an assumption that patterns of social unrest scale across time. Yet the underlying conditions have changed so profoundly that the analogy struggles to hold. Europe in 1848 had a population of roughly 200 million, limited literacy, slow communication, and no technologies capable of amplifying individual agency beyond immediate physical reach. Collective action was the only available pathway for political disruption.
By contrast, the world of 2025 operates on a very different technological and demographic foundation. Advanced AI systems, including genetic design tools, create the theoretical possibility that a single actor with modest training and algorithmic assistance could manipulate biological agents in ways that were inconceivable in the nineteenth century. This shifts the structure of risk from mass mobilisation to individual capability.
What has also changed is the radius of one person’s influence. In 1848 an individual’s “sphere of motion” was local: a street, a village, a factory. In 2025 that sphere is global. Networked platforms, cloud laboratories, and distributed synthesis services allow actions taken in one room to propagate across borders instantly. A lone actor can now project effects at scales once reserved for states.
The issue, therefore, is not whether labour displacement caused by AGI will reproduce the dynamics of nineteenth-century uprisings. It is that contemporary technologies introduce new forms of asymmetric power that do not map onto historical precedents. Revolutions in the past required crowds. Today the more salient concern is the potential for concentrated, AI-enabled action by individuals whose reach extends far beyond their physical presence.
Kudos for tackling possibly one of the thorniest topics out there. I don’t think humans will ever live in a utopia - we’re just too flawed. 1848 is a good reminder of how fundamental economic changes can cause societal rupture. History rhymes.
I’d highly recommend listening to or reading Luke Gromen, as he has been discussing for some time the incompatibility of a debt-based financial system and AI/automation. This poses threats both to governments (who need new tax revenue from somewhere) and to the ordinary person who won’t be able to repay their mortgage, etc. How can we smooth a transition from our current $300 trillion global debt setup to the newer automated world? Very, very tricky.
I think we face significant breaks in our societal fabric to chart a new course. Interested to read part 2.
"They will partner with business leaders to give them tools that can do in minutes what their employees used to do in weeks. Maybe senior executives won’t need to hire dozens of analysts like before, and instead will orchestrate their work with a dozen agents, leaving their core value to dreaming up new strategies for the company’s growth."
The senior executive (including all kinds of human managers) won't be the strategic coordinator in this scenario. The senior executive will be the bottleneck that prevents the company from gaining full advantage from the AI work output. The first company that automates everyone will bankrupt every company that thought it could keep humans on board.
If AGI is reached, the two paths provided by Isaac Asimov are likely alternative outcomes. Either we will get the first-wave colonisation outcome represented by Solaria,
or we will get the second-wave colonisation represented by the Empire.
Edit: The alternatives above are of course contingent on us building failsafes into the AI in line with Asimov's three laws of robotics. Otherwise we might end up, as Gary Frank suggests, with a future more like the Matrix, where AI uses us as cattle.
Very insightful article! I think Thiel really is correct that millennials are at least receptive to socialism, because the capitalist system as it is currently functioning has left many feeling poorer than their parents, which is ridiculous. But like you point out, the USA has real reservations about socialism. We reward entrepreneurs, especially in the tech sector. But if these entrepreneurs don’t share the wealth and we have massive job losses, then I think we might have a revolution analogous to Europe in the 1840s. The new proletariat is actually composed of educated middle-class people along with the traditional working class! But I hope there will be mitigating factors where governments can redistribute wealth more equitably and avoid a violent revolution. Nobody really wants that!
Part of the "human element" is that humans are irrational, are skeptical of change, HATE changes to "the rules of the game" as they perceive them being made mid-play, and aren't going to be convinced to think long-term when their pleasures and pains are immediate, present realities for them. No matter what the future holds, we can depend on human nature being human nature. The boldest, most instantaneous utopia imposed would find itself violently rejected simply because of human nature (I think this is part of what the new show Plur1bus is exploring, and very interestingly!)
"Automation might end up solving all our problems of scarcity, but on our path to that point, our society might collapse ... So how do we smooth out the path from here to there?"
Oh yes, exactly. I'm really looking forward to that next article, because in my opinion this is the single most important question we all have now. AI might indeed have the potential to make utopia possible, but with a transition period that's extremely tricky to manage, and can easily go very wrong.
But unlike you, I wouldn't mind some new economic theories. Heck, maybe I wouldn't even mind a bit more socialism, despite how I used to strongly oppose it. But I think the reason why I'm changing my mind is not only the tightening noose around my own neck. I used to oppose it for two reasons:
1) It needs a strong state to guarantee redistribution, and any strong state is a magnet for psychopaths to try to capture it and become authoritarian rulers. This is still very much true, and would make it dangerous. (This is true in general for overly centralized power, whether created for the leftist goal of more equality or the rightist goal of more order.)
2) It messes up motivations. If we get stuff anyway, some people would become lazy. If some people are getting by while being lazy, then other, normally non-lazy people will also feel like they shouldn't work too hard. It's contagious. For the good of the whole society, we want everyone to work as best they can, and socialism gets in the way of that. BUT, large-scale automation changes this drastically. We no longer want everyone to work as hard as they can, because there's nothing they could do anyway, and them doing nothing no longer hurts society.
So 1 out of my 2 reasons against socialism is now basically out. I still think that until now, capitalism, with all its faults, was better than socialism. But going forward, I'm not sure. It's still too centralized, and therefore dangerous, so I'd like to see something emerge that solves the inequality problem _without_ dangerous levels of power centralization. (On the other hand, Nordic countries seem to work pretty well with their socialism-light, and somehow they can operate it without getting captured by authoritarian leaders.)
Great comment. Adding on to it, I would say the problems with socialism are that top-down command economies were vastly inferior to decentralized adaptive systems, along with your two points (power corrupts, and redistribution undermines incentives and requires serfdom).
I don’t think the Nordic countries are significantly more socialist than the norm for developed nations. More a different flavor of the basic model. In addition, I think the more socialist have “drafted” on the entrepreneurial creativity and technological energy of their less socialist neighbors.
I am not sure if redistribution from AI to humans actually can be called socialism though. It doesn’t create the same negative incentives, nor does it lead to less freedom/liberty. It doesn’t even require totalitarians at the top directing our economy.