Discussion about this post

Ebenezer:

>if an AI superintelligence doesn’t kill us all

This is what I'm most worried about personally. Did you read the "If Anyone Builds It, Everyone Dies" book? https://ifanyonebuildsit.com/

Once you factor in this risk, the "as much automation, as fast as possible" logic stops making sense. Faster development increases the probability of incidents like Grok's "MechaHitler" episode, which we fundamentally do not know how to prevent in a sound and reliable way (trust me, I used to work as a machine learning engineer). The insane thing is that just about a week after the MechaHitler incident, the Pentagon signed a $200 million contract with xAI.

I don't understand why people are acting like this is going to turn out OK. It seems to me that we have a few people who are being extraordinarily reckless, and a much larger population of individuals who are sleepwalking.

I remember that in the US, people didn't start freaking out about COVID when hospitals were filling up in China. It was only when hospitals started filling up in the US that they truly realized what was going on. Most people are remarkably weak at understanding and extrapolating the theoretical basis of a risk. The risk has to punch them in the face before they react. The issue is that for certain risks, you don't get a second chance.

Ed Schifman:

The next seven to ten years will be a very hard time for many wage earners who cannot obtain or learn another useful skillset to make a living, and government will be charged with finding solutions to this problem. The solutions I have heard thus far don't cut it, so we are in for a very uncomfortable ride to UBI.

15 more comments...