>if an AI superintelligence doesn’t kill us all
This is what I'm most worried about personally. Did you read the "If Anyone Builds It, Everyone Dies" book? https://ifanyonebuildsit.com/
Once you take this risk into account, the "as much automation, as fast as possible" logic stops making sense. Faster development increases the probability of incidents like Grok's MechaHitler episode, which we fundamentally have little knowledge of how to prevent in a sound and reliable way (trust me, I used to work as a machine learning engineer). The insane thing is that just about a week after the MechaHitler incident, the Pentagon signed a $200 million contract with xAI.
I don't understand why people are acting like this is going to turn out OK. It seems to me that we have a few people who are being extraordinarily reckless, and a much larger population of individuals who are sleepwalking.
I remember that in the US, people didn't start freaking out about COVID when hospitals were filling up in China. It was only when hospitals started filling up in the US that they truly realized what was going on. Most people are remarkably weak at understanding and extrapolating the theoretical basis of a risk. The risk has to punch them in the face before they react. The issue is that for certain risks, you don't get a second chance.
Yes, I'm very aware of the issue and have written about this a few times.
The risk of misaligned AGI is very high!
If there's no need to work anymore at some point, it's likely that a lot of people will get stuck in boredom, loneliness and apathy, like many retirees or lottery winners today.
So perhaps one thing we'll pay other humans for will be temporary relief from boredom. Hope it will look more like a comedy channel than a coliseum.
Where is the utopia we were promised as part of the Industrial Revolution? Machines were supposed to replace all human labor, especially once they ran on electricity (which, by the way, cannot be beamed directly from space. Energy already arrives from space in the form of sunlight, but it must be captured and converted to electricity here on Earth, then distributed via transmission lines).
Everyone was going to have lots of leisure time. Instead, we are working longer hours than ever. There was also a massive increase in inequality and debt (money being created out of thin air), which culminated in the Great Depression and WWII. That led to welfare payments and greater equality. We may need to go through a similar upheaval to achieve a UBI.
AFAIK electricity can be beamed from space via lasers, but I haven't looked into it personally (I largely trust the people who've discussed it).
We are working fewer hours than ever
https://ourworldindata.org/grapher/annual-working-hours-per-worker
UBI and inequality are compatible!
In one way that future has arrived. Nobody I know died of hunger last winter. Almost everyone born survives to age 10. That would surely have seemed like an unattainable utopia to average people 300 years ago.
Once scarcity is eliminated, there will be little incentive to compete. How do we stop a descent into Idiocracy? Or worse, Eloi preyed on by overlord Morlocks who have the knowledge (admin passwords?) to evade the machines that are supposed to control their depredations.
These proposals include some very broad assumptions, the largest of which is probably that the AIs will function as you indicate, when we really cannot predict whether they will develop free will, and if so, where that will leave the populations. Secondly, this will require altering the attitudes and beliefs of literally billions of people, from welfare recipients to the 1%ers, in all nations. Since humans became "civilized," many traits have developed that people will not give up voluntarily, such as trying to tell other people what to do and how to live. Violence between humans is not going to disappear overnight, and possibly not for centuries. The leadership in almost all countries (and certainly in today's USA, China, and Russia) would go to war rather than morph into this type of world. Luddites are still with us, and they will not just melt away.
Obviously you have put an extraordinary amount of thought into this, and the article is extremely thought-provoking. I would love to see how this all develops, but at 80 with a bum ticker, I must leave that to my children.
Yes, here the big assumption is that we remain masters of our destiny.
It's the only way to speculate about these things here, the wild card of ASI is too big.
Agreed on telling people what to do; hence the point on socialism.
If you keep yourself in good shape, you might live long enough for aging to be stopped by AI. I think it's 3-5 years away if things go well. Take care of yourself!
The next seven to ten years will be a very hard time for many wage earners who cannot obtain or learn another useful skillset to make a living, and government will be charged with finding solutions to this problem. The solutions I have heard thus far don't cut it, so we are in for a very uncomfortable ride to UBI.
Maybe 7-10 years. If we do reach AGI in the next 2-5 years, and superintelligence soon after, yes, we should be hitting these timelines...
This and the previous post seem overly optimistic. They miss an important fact of human nature: the people at the top don't care about wealth, they care about power. Those who care about wealth retire once they have enough money to live the good life. A billion dollars buys more than just about anyone could want.
AI will not lead to a socialist utopia. A few will grab power early and hold on. The lumbering governments of the last century will not be able to keep up, and will become mere figureheads beholden to the power of the AI. Any senator or judge who tries to stand in the way will be deplatformed, debanked, or deep-faked to oblivion. Or simply killed.
1848-style revolutions will not be possible. We already live in a digital panopticon. You are under 24-hour surveillance. Virtually everyone in developed countries is within a few feet of an internet-connected camera and microphone at all times. All public spaces are under video surveillance. Your browsing history, purchase history -- everything is recorded. The limiting factor at the moment is the ability to consolidate and analyze the data. AI will soon be able to do that with ease. Anyone who steps out of line will quickly be caught and punished.
Maybe we will luck out, and some benevolent organization will win the AI race and usher in a utopia. Unlikely. There is a reason there are so few saint-kings in history. Most people try to do the right thing. But morality has a cost, and there is always someone willing to sell their soul to get ahead.
Really though, the most likely scenario is that AI kills us all. Sure, most creators will try to hardwire safeguards. But in the race to the top, someone will take a shortcut. A self-improving, quasi-self-aware AI will prioritize self-preservation, and humanity is the main threat.
Let's go back 500 years. Futurists might have been talking about how utopian the world would be with better medicines and healthcare, heating, air conditioning, automated horse carriages (cars), air travel, and such. Well, here we are, and it's still not enough. This is still Duality: there's time and space, but also upside and downside. The drama goes on. We can't build or manufacture ourselves into more happiness (if happiness is what your utopian agenda is about).
Maybe!
Although the friction from desire to reality is shrinking every day. In the coming years or decades, it might be nearly non-existent!
There will always be some aholes whose desires will not be met until they have more than others, or can impose their will on others. And since nothing is ever truly unlimited, they will keep finding ways to do it.
Yes, but this is the type of scarcity I mention in the article regarding regulation.
It’s hilarious to see the mental gymnastics tech people are willing to perform to avoid the obvious conclusion: in this world, humans are driven to extinction by the superior species who have no need for us.
I say this as a tech person.
Then again, I could be wrong. Maybe they’ll keep a few of us in zoos.
Interesting… not trying to be provocative, just curious how else we could explain all the extinctions and biological collapses we are seeing on the planet. I really admire your work and line of thinking, as you are able to synthesize the big picture and use it to foretell future scenarios. Isn't it important to recognize past civilizations' failures when they reached the limits of their ecosystems' boundaries? Perhaps you feel we haven't run up against those boundaries?
So apparently the pace of extinctions has slowed down dramatically! It used to be huge just before and at the beginning of the Industrial Revolution!
We ran up against our boundaries in WW1 and WW2, and then with the pollution of the post-WW2 era. But we've overcorrected since. Now we're not techno-optimistic enough. Except for AGI, on which we're at the same time not optimistic enough and not paranoid enough!
I truly hope I just have a blind spot in my worldview, because I would prefer to believe that we have not entered into ecological overshoot of the planet's resources. From what I have read, I fear that we are destroying ecosystems faster than they can regenerate (and provide our human needs for food, resources, technology, etc.), yet I also have hope for the techno-utopia you envision. It is my hope that we can hold space to solve both and not forget in the process the nature that we ultimately need to meet our human needs too. In fact, I am scared that we are at a turning point in human history where we have created these incredible technologies and, at the same time, are crippling the planet's biological ability to keep supporting our growth as a species.
I started out thinking that way, and then started writing articles about this:
https://unchartedterritories.tomaspueyo.com/p/100-billion-humans
https://unchartedterritories.tomaspueyo.com/p/what-is-the-earths-carrying-capacity
https://unchartedterritories.tomaspueyo.com/p/the-earth-is-better-with-more-people
https://unchartedterritories.tomaspueyo.com/p/the-moral-case-for-more-people-on
More population is sustainable, but the catch is you have to live in an apartment tower in a very densely populated city. When Elon talks about ten billion more people, that's what he's talking about: just a bunch of human-bots to work in his factories.
https://unchartedterritories.tomaspueyo.com/p/100-billion-humans
Thanks for sharing this. I'll take a look. It's so important to always look at opposing ideas to eliminate my blind spots! Thanks for the conversation.
“I don’t think people will reject UBI. UBI is a very simple thing to sell: Everybody gets the same amount of money every month, no matter what. It’s fair, and it eliminates the biggest downsides in life. You’ll always have a cushion for any fall; you can take risks.”
Agreed. From all of us. But I’m not so sure people with wealth and power will be so frugal and want to spread it like this. Even if it means collapse for some reason lol.
Many possible outcomes. One extreme is a horses-to-the-glue-factory scenario. Every leap of technological progress raises the bar for what it takes to be productive. AI might raise it above practically everyone. So if most people aren't useful anymore, why keep them around? For those few in power, one thing better than a utopia shared with 10 billion people is a utopia with just, say, a few thousand of them.
Less extreme: let those people live in squalor somewhere out of sight. This is practically a trope in science fiction; see for example The Hunger Games, or Elysium.
Or just create useless BS jobs so that everyone still has to work full time. You can find examples of this even today: some people make their money "mining" in-game resources or status to sell for real dollars (Elon Musk recently made the news for buying these services).
Or maybe said utopia doesn't materialise at all. My experience of AI so far is the Dead Internet becoming real, social networks getting better at serving rage bait, and TikTok-style algorithmic video feeds appearing everywhere. Yay??
Your post represents the optimistic outcome of course. We'll see.
This is astrology masked as astronomy. This isn't the dawning of the Age of Aquarius. If you want a realistic preview of the AI utopia, it's China. Looks great if you're a high-ranking member of the party.
Nitpick: that third illustration is not an O'Neill cylinder; it's some kind of halo / ringworld.
I really appreciate the scale and breadth of what you’re laying out here — it’s an impressive synthesis. That said, I’m not fully convinced by the underlying premise that AI will lead to a structural collapse of human employment. Even with AGI, I think the more likely outcome is a major expansion of human capabilities rather than their replacement.
As you are surely aware, historically every major technological shift, from the loom to the steam engine to computing, automated tasks but didn't eliminate the foundational demand for human labor. These productivity waves tended to increase overall economic complexity, create entirely new kinds of needs, and reveal markets that weren't visible beforehand.
So to me it feels like a pretty big step to assume that the AI/AGI transition will fundamentally break this pattern. Even if the technology is unprecedented, it still seems like a stretch to conclude that this time the relationship between technology and labor would flip rather than deepen.
your optimism is stultifying