Discussion about this post

Ebenezer

>if an AI superintelligence doesn’t kill us all

This is what I'm most worried about personally. Did you read the "If Anyone Builds It, Everyone Dies" book? https://ifanyonebuildsit.com/

Once you take this risk into account, the "as much automation, as fast as possible" logic stops making sense. Faster development increases the probability of incidents like Grok's "MechaHitler" episode, which we fundamentally don't know how to prevent in a sound and reliable way (trust me, I used to work as a machine learning engineer). The insane thing is that just about a week after the MechaHitler incident, the Pentagon signed a $200 million contract with xAI.

I don't understand why people are acting like this is going to turn out OK. It seems to me that we have a few people who are being extraordinarily reckless, and a much larger population of individuals who are sleepwalking.

I remember that in the US, people didn't start freaking out about COVID when hospitals were filling up in China. It was only when hospitals started filling up in the US that they truly realized what was going on. Most people are remarkably weak at understanding a risk from its theoretical basis and extrapolating from there. The risk has to punch them in the face before they react. The issue is that for certain risks, you don't get a second chance.

JBjb4321

If at some point there's no need to work anymore, it's likely that a lot of people will get stuck in boredom, loneliness, and apathy, like many retirees or lottery winners today.

So perhaps one thing we'll pay other humans for will be temporary relief from boredom. Hope it will look more like a comedy channel than a coliseum.
