17 Comments
Ebenezer

>if an AI superintelligence doesn’t kill us all

This is what I'm most worried about personally. Did you read the "If Anyone Builds It, Everyone Dies" book? https://ifanyonebuildsit.com/

Once you factor this risk in, the "as much automation, as fast as possible" logic stops making sense. Faster development increases the probability of incidents like Grok's "MechaHitler" episode, which we fundamentally do not know how to prevent in a sound and reliable way (trust me, I used to work as a machine learning engineer). The insane thing is that just about a week after the MechaHitler incident, the Pentagon signed a $200 million contract with xAI.

I don't understand why people are acting like this is going to turn out OK. It seems to me that we have a few people who are being extraordinarily reckless, and a much larger population of individuals who are sleepwalking.

I remember that in the US, people didn't start freaking out about COVID when hospitals were filling up in China. It was only when hospitals started filling up in the US that they truly realized what was going on. Most people are remarkably weak at understanding and extrapolating the theoretical basis of a risk. The risk has to punch them in the face before they react. The issue is that for certain risks, you don't get a second chance.

Tomas Pueyo

Yes, I'm very aware of the issue and have written about this a few times.

The risk of misaligned AGI is very high!

Ed Schifman

The next seven to ten years will be very hard for many wage earners who cannot obtain or learn another useful skillset to make a living, and government will be charged with finding solutions to this problem. The solutions I have heard thus far don't cut it, so we are in for a very uncomfortable ride to UBI.

Tomas Pueyo

Maybe 7-10 years. If we do reach AGI in the next 2-5 years, and thus superintelligence soon after, then yes, we should be hitting these timelines...

John Gross

Once scarcity is eliminated, there will be little incentive to compete. How do we stop a descent into Idiocracy? Or worse, Eloi preyed on by overlord Morlocks who have the knowledge (admin passwords?) to evade the machines that are supposed to control their depredations.

Steve Mudge

Let's go back 500 years. Futurists might have been talking about how utopian the world would be with better medicines and healthcare, heating, air conditioning, automated horse carriages (cars), air travel, and such. Well, here we are, and it's still not enough. This is still Duality: there's time and space, but also upside and downside. The drama goes on. We can't build or manufacture ourselves into more happiness (if happiness is what your utopian agenda is about).

Tomas Pueyo

Maybe!

Although the friction between desire and reality is shrinking every day. In the coming years or decades, it might be nearly non-existent!

Suzanne C.

Interesting… not trying to be provocative, just curious how else we could explain all the extinctions and biological collapses we are seeing on the planet. I really admire your work and line of thinking, as you are able to synthesize the big picture and use it to foretell future scenarios. Isn't it important to recognize past civilization failures when they reached the limits of their ecosystems' boundaries? Perhaps you feel we haven't run up against those boundaries?

Tomas Pueyo

So apparently the pace of extinctions has slowed down dramatically! It used to be huge just before and at the beginning of the Industrial Revolution!

We ran up against our boundaries in WW1 and WW2, and then with post-WW2 pollution. But we've overcorrected since. Now we're not techno-optimistic enough. Except for AGI, where we're simultaneously not optimistic enough and not paranoid enough!

Suzanne C.

I truly hope I just have a blind spot in my worldview, because I would prefer to believe that we have not entered into ecological overshoot of the planet's resources. From what I have read, I fear that we are destroying ecosystems faster than they can regenerate (and provide for our human needs for food, resources, technology, etc.), but I also have hope for the techno-utopia you envision. It is my hope that we can hold space to solve both, and not forget in the process the nature that we ultimately need to provide for our human needs too. In fact, I am scared that we are at a turning point in human history: we have created these incredible technologies, and at the same time we are crippling the planet's biological ability to keep supporting our growth as a species.

Steve Mudge

More population is sustainable, but the catch is that you have to live in an apartment tower in a very densely populated city. When Elon talks about ten billion more people, that's what he's talking about: just a bunch of human-bots to work in his factories.

Suzanne C.

Thanks for sharing this. I’ll take a look. So important to always look at opposing ideas to eliminate my blind spots! Thanks for the conversation

Douglas Payne

These proposals rest on some very broad assumptions, the largest of which is probably that the AIs will function as you indicate, when we really cannot predict whether they will develop free will, and if so, where that will leave the populations. Secondly, this will require altering the attitudes and beliefs of literally billions of people, from welfare recipients to the 1%ers, in all nations. Since humans became "civilized," many traits have developed that people will not give up voluntarily, such as trying to tell other people what to do and how to live. Violence between humans is not just going to disappear overnight, and possibly not over centuries. The leadership in almost all countries (and certainly in today's USA, China, and Russia) would go to war rather than morph into this type of world. Luddites are still with us, and they will not just melt away.

Obviously you have put an extraordinary amount of thought into this and the article is extremely thought provoking. I would love to see how this all develops but at 80 with a bum ticker, I must leave that to my children.

Alessio Quaglino

How does UBI not create inflation?

Tomas Pueyo

Thanks! I stopped reading when it claimed that the Limits to Growth projections were true and that we were tracking towards them.
