
Tomas and friends:

I usually avoid shooting off my mouth on social media, but I’m a BIG fan of Tomas and this is one of the first times I think he’s way off base.

Everyone needs to take a breath. The AI apocalypse isn’t nigh!

Who am I? I’ve watched this movie from the beginning, not to mention participated in it. I got my PhD in AI in 1979 specializing in NLP, worked at Stanford, then co-founded four Silicon Valley startups, two of which went public, in a 35-year career as a tech entrepreneur. I’ve invented several technologies, some involving AI, that you are likely using regularly if not every day. I’ve published three award-winning or best-selling books, two on AI. Currently I teach “Social and Economic Impact of AI” in Stanford’s Computer Science Dept. (FYI Tomas’ analysis of the effects of automation – which is what AI really is – is hands down the best I’ve ever seen, and I am assigning it as reading in my course.)

May I add an even more shameless self-promotional note? My latest book, “Generative Artificial Intelligence: What Everyone Needs to Know” will be published by Oxford University Press in Feb, and is available for pre-order on Amazon: https://www.amazon.com/Generative-Artificial-Intelligence-Everyone-KnowRG/dp/0197773540. (If it’s not appropriate to post this link here, please let me know and I’ll be happy to remove.)

The concern about FOOM is way overblown. It has a long and undistinguished history in AI, the media, and (understandably so) in entertainment – which unfortunately Tomas cites in this post.

The root of this is a mystical, techno-religious idea that we are, as Elon Musk erroneously put it, “summoning the beast”. Every time there is an advance in AI, this school of thought (superintelligence, singularity, transhumanism, etc.) raises its head and gets far more attention than it deserves. For a somewhat dated but great deep dive on this, check out the religious-studies scholar Robert Geraci’s book “Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality”.

AI is automation, pure and simple. It’s a tool that people can and will use to pursue their own goals, “good” or “bad”. It’s not going to suddenly wake up, realize it’s being exploited, inexplicably grow its own goals, take over the world, and possibly wipe out humanity. We don’t need to worry about it drinking all our fine wine and marrying our children. These anthropomorphic fears are fantasy. They are based on misunderstandings: that “intelligence” is linear and unbounded, that “self-improvement” can run away (as opposed to being asymptotic), and that we’re dumb enough to build unsafe systems and hook them up to the means to cause a lot of damage (which, arguably, describes current self-driving cars). As someone who has built numerous tech products, I can assure you that it would take a Manhattan Project to build an AI system that can wipe out humanity, and I doubt it could succeed. Even so, we would have plenty of warning and numerous ways to mitigate the risks.
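To make the runaway-versus-asymptotic distinction concrete, here is a toy numerical sketch in Python (purely illustrative; the growth rate, the ceiling, and the step count are made-up parameters, not claims about any real AI system):

```python
# Toy comparison of two self-improvement dynamics. All numbers are illustrative assumptions.

def runaway(capability, rate=0.1):
    # Gains grow with the square of current capability, so each improvement
    # makes the next one bigger and the curve eventually explodes.
    return capability + rate * capability ** 2

def asymptotic(capability, ceiling=10.0, rate=0.1):
    # Each step closes a fixed fraction of the remaining gap to a ceiling,
    # so gains shrink and the curve flattens out.
    return capability + rate * (ceiling - capability)

c_run = c_asym = 1.0
for step in range(1, 16):
    c_run, c_asym = runaway(c_run), asymptotic(c_asym)
    if step % 5 == 0:
        print(f"step {step:2d}: runaway={c_run:10.1f}   asymptotic={c_asym:5.2f}")
```

Which of these two shapes recursive self-improvement would actually follow is exactly the point in dispute.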

This is not to say that we can’t build dangerous tools, and I support sane efforts to monitor and regulate how and when AI is used, but the rest is pure fantasy. “They” are not coming for “us”, because there is no “they”. If AI does a lot of damage, that's on us, not "them".

There’s a ton to say about this, but just to pick one detail from the post: the idea that an AI system will somehow override its assigned goals is illogical. It would have to be designed to do this (not impossible… but if so, that’s the assigned goal).

There are much greater real threats to worry about. For instance, that someone will use gene-splicing tech to make a highly lethal virus that runs rampant before we can stop it. Nuclear catastrophe. Climate change. All these things are verifiable risks, not a series of hypotheticals and hand-waving piled on top of each other. Tomas could write just as credible a post on aliens landing.

What’s new is that with Generative AI in general, and Large Language Models in particular, we’ve discovered something really important – that sufficiently detailed syntactic analysis can approximate semantics. LLMs are exquisitely sophisticated linguistic engines, and will have many, many valuable applications – hopefully mostly positive – that will improve human productivity, creativity, and science. It’s not “AGI” in the sense used in this post, and there’s a lot of reasonable analysis that it’s not on the path to this sort of superintelligence (see Gary Marcus here on Substack, for instance).

The recent upheaval at OpenAI isn’t some sort of struggle between evil corporations and righteous superheroes. It’s a predictable (and predicted!) consequence of poorly architected corporate governance and inexperienced management and directors. I’ve had plenty of run-ins with Microsoft, but they aren’t going to unleash dangerous and liability-inducing products onto a hapless, innocent world. They are far better stewards of this technology than many nations. I expect this awkward kerfuffle to blow over quickly, especially because the key players aren’t going anywhere; they’re just changing cubicles.

Focusing on AI as an existential threat risks drowning out the things we really need to pay attention to, like accelerating disinformation, algorithmic bias, so-called prompt hacking, etc. Unfortunately, it’s a lot easier to get attention screaming about the end of the world than calmly explaining that, like every new technology, this one comes with risks and benefits.

It’s great that we’re talking about all this, but for God’s sake please calm down! 😉

author

Super interesting. Thanks for your comment and kind words, Jerry, and congrats on your successes! I hope your book becomes a best seller!

I've also built tech products involving AI, including NLP systems. In my case, that doesn't equip me to opine on the subject (but your experience might!). Rather, I'd say what has convinced me that the doom scenario is possible (I said 20% probability) is having read a lot from Yudkowsky, Bostrom, and Tegmark, coming up with dozens of counters to the doom scenario, and always realizing they had thought about it (especially Yudkowsky) and had a response to it. The ability to follow every step of the reasoning, and this track record of not being able to find holes in it, is what leads me to conclude the doom scenario is possible. In fact, I'd argue all my deductive reasoning ability tells me this is very likely, and I only reduce the odds inductively, based on previous cases, and through hope.

As such, I'm willing to engage in the nitty-gritty of these debates to figure out the holes in the logic, but other types of arguments, like arguments from authority, won't work with me (Yudkowsky has more authority than any of us!).

Onto the specifics:

• It would take a Manhattan Project to wipe out humanity: What do you think OpenAI is? 750 of the most intelligent people in the field, working for years on this? Or DeepMind? These are very serious projects.

• AI is automation the same way brains are automation. I find the comparison very apt. We know humans are conscious and believe they're intelligent, but we are not structurally different from apes. Based on what we know, we just have more of the same brain structure as apes. It turns out that scaling neural processing power gives you human brains. For me to be convinced that AGI is impossible, you'd have to explain to me what the fundamental difference is between an organic and a synthetic brain.

• The speed of improvement is easy to explain: We're subject to evolution, which means improvements through random genetic variations and sexual reproduction. At this speed, it takes millions of years for cognitive improvements to have massive impact, and even then, it's always limited by things like resources and sexual reproduction. Human brains are geared towards having sex and surviving, not raw intelligence. None of these constraints apply to an inorganic brain.

• I am not saying that an AGI "inexplicably grows its own goals". Rather, that humans inadvertently or purposefully embed a goal when creating it (if it doesn't optimize for something, it can't work), and this goal is not fully aligned with what humans want (which appears to be the case with all goals we can think of today). The goal can be as simple as gradient descent to predict the next token (what LLMs do; a minimal sketch of that objective follows after these points), or we might be able to force the Three Laws of Robotics into an AI. These are just examples of goals that, when scaled, are misaligned with what humans want in general. So this is not, as you say, overriding human goals, but rather running with the ones we give it and turning out misaligned (because we don't know what we want).

• A virus that kills all of humanity is also a full-on existential risk. Climate change and nuclear war are not, because in both cases the vast majority of civilization survives. Worst case scenario, enough humans survive to come back to where we are in a few hundred or thousand years. This is not the case for a virus or AI. They can wipe us out. Arguably, very few humans want to eliminate all humans, and most of those who do (all?) don't have the capacity to make it happen.

• I'm not arguing that LLMs are AGI, or on the path to it. I do believe deep learning is, though, but I'm not married to the idea. Another architecture could turn out to work best. What matters here is not the architecture, but rather: Are we close to AGI? If there's a chance it could be here in a matter of decades, then all alarms should go off.

• I'm not saying there are evil people here. They just optimize differently for somewhat different goals. The board seems very focused on safety, more so than Altman. If this is true, it is very important.

• None of the alternative issues you mention carries a risk of wiping out humanity. Therefore, they are less important.
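Here is the minimal sketch referenced above of the "gradient descent to predict the next token" objective (assuming PyTorch; the tiny embedding-plus-linear model and the single training pair are toy placeholders, nothing like a real LLM). The only goal baked in is "make the observed next token more likely"; every other behavior is a byproduct of optimizing for that.

```python
# Toy sketch of the next-token objective (assumes PyTorch; sizes and data are placeholders).
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),  # token id -> vector
    nn.Linear(embed_dim, vocab_size),     # vector -> a score for every possible next token
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

context = torch.tensor([42])      # one context token; real training uses billions of sequences
next_token = torch.tensor([7])    # the token that actually came next

for _ in range(100):
    logits = model(context)               # predicted scores over the vocabulary
    loss = loss_fn(logits, next_token)    # how wrong the next-token prediction is
    optimizer.zero_grad()
    loss.backward()                       # gradients of the loss w.r.t. every weight
    optimizer.step()                      # gradient descent: nudge weights to reduce the loss
```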

Very important debate! Thanks for engaging


Tomas (et al.) – thanks for the thoughtful replies. In the spirit of closure, I’ll try to follow the convention of court filings that replies should be shorter than the original briefs. 😉

Overall reply: The risk of runaway AI leading to extinction isn’t zero – I certainly can’t guarantee that it won’t happen. It’s just very low, there’s plenty we can do about it, and there’s virtually no evidence that we’re anywhere close to it happening. It’s “fun” to think about, but we’ve got more important (and predictable) problems to worry about. Please tone down the inflammatory rhetoric!

I’m not as familiar with Yudkowsky and his work (shame on me), but I have debated Bostrom publicly and talked with Tegmark, and being more frank than I probably should be in writing, neither seems to be rooted in the realities and limitations of AI software engineering. The former is purely a philosopher, the latter a cosmologist. Both have deep roots in the “Future of Life/Transhumanism” movement, which I regard as a mostly spiritual (as opposed to scientific) endeavor. Smart and interesting guys, for sure, but don’t count on them to invent the light bulb or predict its practical impact.

I agree with you that it’s theoretically possible, indeed likely, that eventually we’ll write programs with capabilities that rival (or exceed) human brains in many or most ways, but it’s not clear what this means as a practical matter. (I can make a decent argument that we are already there.) But whether it’s fair to say they will have “minds”, are “conscious”, or have “free will”, and by implication deserve some sort of empathy or rights from us, is another matter entirely. I don’t think so. My headline on this is that it can’t happen unless they can “perceive” time. (I’ve had some mindblowing “conversations” with GPT-4 about its inability to do this, before OpenAI literally cut me off for violating their T&Cs – go figure! Stochastic parrots, my ass.) You’re probably aware there’s a longstanding, raging debate about what these terms even mean, which has literally gotten nowhere. It’s rather funny that one camp of respected philosophers recently published a group paper accusing another camp of respected philosophers of being charlatans.

My point of view is that we don’t currently have a valid scientific framework for addressing this question. I cover all this in detail in my new book (another shameless plug!), where I first make a rock-solid argument that computers can’t be conscious, then another rock-solid argument that they absolutely can.

Even if AGI in the sense you mean is imminent, a prerequisite for existential risk is that, for whatever reason, these systems are oblivious or actively hostile to human concerns, as Hollywood frequently depicts. This is the core flaw with the “existential risk” theories: On the one hand, AGIs are all-powerful and maniacally goal-seeking, but on the other, they can’t be reasoned with, constrained by humans or other equally powerful programs, or take a more nuanced stance. Why the F knowingly wipe us out? To make paperclips? What sense does that make? Worst case, as I’ve argued in my book “Humans Need Not Apply”, they would want us around for biodiversity purposes.

The evidence that advanced AI can be “socialized” is already here. That’s how the LLM companies use RLHF (Reinforcement Learning from Human Feedback) to hone their products. I think it’s really interesting that the constraints are not “built in”; the trainers just explain what good robots should and shouldn’t do. I call this “finishing school” in my new book.
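For readers who haven't seen how that "finishing school" works mechanically, here is a heavily simplified sketch of the preference-learning step inside RLHF (assuming PyTorch; the linear "reward model" and the single preference pair are toy stand-ins): human labelers pick which of two responses is better, and a reward model is trained to score the preferred one higher, after which it is used to steer the base model.

```python
# Toy sketch of RLHF's preference-learning step (assumes PyTorch; everything here is a stand-in).
import torch
import torch.nn as nn

reward_model = nn.Linear(8, 1)   # stand-in: maps a response embedding to a scalar score
optimizer = torch.optim.Adam(reward_model.parameters(), lr=0.01)

# Embeddings of two candidate responses; a human labeler preferred the first.
chosen = torch.randn(1, 8)
rejected = torch.randn(1, 8)

for _ in range(200):
    margin = reward_model(chosen) - reward_model(rejected)
    # Bradley-Terry style loss: push the preferred response's score above the other's.
    loss = -nn.functional.logsigmoid(margin).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# In full RLHF, the trained reward model then scores the base model's outputs during
# reinforcement-learning fine-tuning, which is what "socializes" the model's behavior.
```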

A little industry inside baseball: OpenAI is all the rage, but DeepMind has a much deeper bench, deeper pockets, a lot more serious stuff on the drawing boards, and more grounded management and goals IMO.

TV tip: For a frighteningly realistic yet hilarious take on AGI taking over, check out the series “Mrs. Davis”. Stick with it to the end where they explain the “paperclip”-like goal and how the program got loose and ran amok in the first place. Jerry says five stars.

Tomas, this sounds a bit silly but I’m serious: I would encourage you to do a similar deep dive on aliens landing/first contact. I’ve spent time with people at the SETI Institute (Search for Extraterrestrial Intelligence), and it’s really quite mysterious why this hasn’t happened yet. This “existential risk” is in the same category as runaway AI in my opinion. And like the AGI discussion, I think you will have a hard time poking any holes in their arguments.

Well, I think I’ve hit my self-imposed word limit; hope this is food for thought!

Jerry

author

Thank you, that's fair!

Bostrom and Tegmark have been great at introducing me to great illustrations of these problems. Yudkowsky is the one that seems to have thought of everything.

Agreed that the consciousness & free will debate doesn't lead anywhere for me. I said what I had to say on the topic, and that's it.

https://unchartedterritories.tomaspueyo.com/p/does-free-will-exist

Agreed AGI is not imminent, but it could be a matter of a handful of years, so attention is warranted, albeit not with the crazy urgency of COVID.

Agreed that DeepMind needs a lot of scrutiny. Funnily enough, I believe Yudkowsky is more concerned about Demis Hassabis than about Altman.

Mrs Davis looks good! I don't have Peacock tho...

Alien probability is much lower than AI!

The biggest issue with misalignment is instrumental convergence: No matter what it desires, odds are we will be in the way, because we don't want to lose control, and it would know it.

For me, a 1% risk of total human extinction is unacceptable.

Nov 23, 2023 · Liked by Tomas Pueyo

Thank you Tomas and Jerry for this very interesting exchange.

Tomas, you got me really scared, as I remember your Covid letter was a very useful wake-up call for me at the time.

I don’t have the knowledge to understand all the specifics, but your arguments are compelling, and it was quite balancing to read Jerry’s reply after 24 hours of mild, queasy worry.

Obviously, the drama that happened at OpenAI was for me the best proof that what you’re saying is plausible. It seems the most convincing explanation for this extreme behavior from the OpenAI board members. We’re talking about very smart people (or so we hope), so we had better hope they had good reason for acting like this.

Several things to understand about the board’s motives:

-If China is as engaged in this AI race as the Silicon Valley guys, what good would it be to humanity to damage one of the best contenders? Was the board’s motive just to put the subject of alignment on the table and in the public’s eye?! (They did, big time!)

-Why wouldn’t they have anticipated the backlash from key stakeholders (employees, MS, Ilya…)? Or, on the contrary, did they anticipate it, and was it just a stunt instead of simply resigning (which is the current outcome)?

Other food for thought that your article brought up: is it only our brains that set us apart? Human intelligence is also emotional and intuitive. That can’t easily be automated… or can it?

For my own peace of mind, I choose to believe Jerry is right and the OpenAI board is clumsy… but I will nonetheless be reading your further newsletters with increased interest…

author

Hi Marie! Great pleasure to see you around here again!

CHINA: Helen Toner, one of the ousted board members at OpenAI, actually has some great points about this. She is an expert in Chinese AI, and says (1) their data is a shitshow, and (2) the fact that they need to censor it so much makes it go much more slowly and makes it less useful. She says China is a fast follower, not a leader. One of the reasons to keep advancing fast in the West.

ANTICIPATION: This doesn't happen! When has this happened? Jobs was ousted from Apple and nothing happened. CEOs are frequently ousted. They could have figured it out if they had played mental chess on the ramifications, but that's probably not what they paid attention to. They had enough to think about when considering AI safety.

HUMAN BRAINS: I talk about it in today's article. The short version is: a human brain is a physical thing that can be replicated. As long as it uses physics, it can be replicated. Emotions and intuitions are simply decisions that don't reach consciousness ("System 1 vs System 2") to save processing power. Totally modelable.

Looking forward to your opinions!


Without commenting on the wider discussion, I wanted to point out that at least certain people/teams within OpenAI think there’s a good chance we’ll get to AGI or beyond this decade: https://openai.com/blog/introducing-superalignment


Nice ideas.

I don't know if the word has exactly the right meaning, but it seems that alien landings are something of a 'taboo' for some serious scientists.

Nov 22, 2023 · Liked by Tomas Pueyo

Both the comment and response are valuable here... in the end, I concur with you Tomas that it's essentially "carbon chauvinism" to believe that only organic brains can reach the point of self-awareness/sentience. Silicon neural networks will almost certainly get there and, as you note, much more quickly than natural evolution allows.

Nov 22, 2023 · Liked by Tomas Pueyo

It's very interesting to read viewpoints from intelligent people which suggest that climate change is not an existential threat. I believe the effective altruists also hold this view. I suspect this comes from the rather arrogant mindset which says that a) tech will come to the rescue or b) the pulverizing effect of mass extinctions, resource wars, rampant disease and unimaginable and constant weather events won't drive the human species back to the brink of survival. I wish I was as confident in our future in the face of the current evidence.

author

I hear you. I've made the case elsewhere. But yes, this is broadly what I'm saying. There's a difference between catastrophic and existential. Climate might be catastrophic, but not existential.

Catastrophic is enough to deserve lots of attention and work. But the attention and work should be proportional to the risk. Today, climate change receives more attention than AI safety. This must be inverted. Not by reducing focus on climate (we should increase it!) but by increasing the focus on AI even more.


It may just be catastrophic to us, living in the richest industrial societies in the world, but it will almost certainly be existential to hundreds of millions of people living along the equatorial region who will either perish or have to migrate to survive. But to where? And how will they feed themselves as they move? And what wars will threaten them as they seek safety and life? I'm not sure that 'existential' has to apply to the whole species to be truly horrific. How terrible does the loss of life have to be to qualify as 'existential'? 100 million, a billion, two, three, four...? Such jolly numbers to play with. :)

author

I'm using specific terminology here. Catastrophic and existential have specific meanings that I'll cover in the newsletter tomorrow.

That said, as I've mentioned in the past, it looks like climate change deaths might *shrink*, because most people in hot countries don't die of heat but of cold (unprepared), and vice versa, and also because countries will have time to adapt.


Your lack of confidence (and Tomas’s) is unwarranted and rooted in the same psychological tendencies that have always led to doom-thinking scenarios. These go back to the prehistory of humanity and can be seen everywhere from stories in mythology and religion to the theoretically logic-based arguments of today. It is built into the human mind to think the way Tomas has about AI, or the way you have about climate. Neither is any more right than the religious philosophers of the past. Neither is more likely predictive of the future than past doom scenarios.

My strongest suggestion is to focus on the more immediate problems that you can actually impact. The future will be fine.


I think both addressing climate change and funding ways to repair the damage done to our planet are vital for our survival. But I also believe creating a safe AI protocol is vital, or our survival may not be ensured for long after solving our climate and planet-destruction issues... which I hope AI can assist us in doing.


I think your argument lacks falsifiability. Just being able to imagine the adverse effects is not a sufficient argument that the adverse effects are likely to take place.

AI is one of the most disruptive and innovative technologies to emerge in the last ten years, and it’s seen a meteoric rise in just the past twelve months, but there is a lot of evidence to suggest we’re not even close to AGI, despite what your models show.

What’s largely missing in general AI models is context. Take your stock picking scenario, for example. The instant a general intelligence shorts a stock, the price will adjust to changing circumstances, because you have millions of other traders buying and selling and reacting to price fluctuations in real time. The idea that a general intelligence can manipulate the stock market is more of what the original poster describes as a mystical, techno-religious idea, because now you're in the business of constructing an entirely new reality, not merely acting with better intelligence (not to mention there are laws against insider trading).

Every intelligence is contextual within the environment it's in. For all the advances in AI, they are still solving deterministic, closed-set, finite problems using large amounts of data. But living organisms that exist in nature do not operate at that surface level. We're not even close to mapping the entirety of the brain (let alone modeling it); you can't just abstract that out and call it AGI.

Just as well, the paperclip maximizer theory is an apt thought experiment for the existential risk posed by AI, but what exactly is testable about that hypothesis? What would falsify the hypothesis? It's the same thing with positing that because humans "don't know what they want," AIs will develop runaway goals misaligned with the best interests of humans. It's a good philosophical question, but it's not a testable hypothesis.

I enjoy your takes, but I agree with the original poster here. I think a good amount of moral panic exists in what you're arguing.

author

Thanks! I think there might be a piece of info worth clarifying here.

Is the likelihood of this scenario less than 1% in the coming 10 years?

If you can resoundingly say "yes!" and defend it, then that would be convincing.

Otherwise, you should be freaking out.

That's the thing. I don't need to prove that this will happen. The burden is on the other side because of the consequences. YOU would need to prove that this CAN'T happen.
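As a back-of-the-envelope illustration of why the stakes shift the burden of proof (the 1% figure echoes the probability discussed above; the population count is approximate, and this is only an illustration of the arithmetic, not a forecast):

```python
# Illustrative expected-loss arithmetic for a "small" extinction probability.
p_extinction = 0.01                 # the 1% figure discussed above (an assumption)
people_alive = 8_000_000_000        # roughly today's world population

expected_deaths = p_extinction * people_alive
print(f"{expected_deaths:,.0f} expected deaths")   # 80,000,000, before counting future generations
```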

Nov 23, 2023 · Liked by Tomas Pueyo

I don't even think there's a one percent chance it happens in our lifetime, much less the next ten years. But I'm gonna push back on the way you're framing this issue: you're thinking about the probability of an event based on what you perceive as the worst possible outcome (human extinction) and then working your way backwards. We can disagree on where the burden of proof lies, but I happen to think it rests on the people saying AGI will happen relatively soon. That's the position that needs to be defended, not its opposite. Because I have yet to see any hard, scientific evidence to suggest that we're anywhere close to achieving AGI.

author

Ok this is useful.

I do think the burden of proof on *safety* lies with those saying it's safe, but the burden of proof is on those who think it comes soon to prove that, so I agree with you on that.

I wrote an article on this:

https://unchartedterritories.tomaspueyo.com/p/how-fast-will-ai-automation-arrive


Thanks Tomas, I'll read that. And I agree that there are dangers that need to be taken seriously; I just think they're more solvable than people realize.


Thanks. I think you have channeled my late brother for me. I am an artist; he was an early computer scientist with amazing vision, and he passed away some time ago, a very long time in computer terms. I have been wondering about the AI “discussion” and I kept thinking he’d say something like what you did, in essence if not in specifics. He was always the smartest guy in the room. Miss him. Thanks for your time answering here. Happy Thanksgiving.

author

Sorry for your loss. Glad I could make him come a bit alive. It might be that AI in the future makes him come even more alive. Happy thanksgiving!

Nov 22, 2023 · edited Nov 22, 2023 · Liked by Tomas Pueyo

Thanks, Tomas. That’s very sweet of you to say. And yes, anytime I get to share his memory is a good moment. That is AI enough. Happy Thanksgiving to you as well.


Anthropomorphising AI is a big fail. Once you use that lens, you’re really skewed by human values that are often entirely misguided in the perception of machine intelligence. Much of the alignment problem is actually in the category of humans acting badly. No decisions should be made with an emotional underpinning. If there is one thing we should know as students of history, it’s that coarse, visceral human emotions, badly channeled, are a horrible guideline in the aggregate.

I realise that you’re using the acronym FOOM to describe recursive self-improvement. After having researched it, I don’t think that’s a widely used acronym in the AI realm. Although I did find 28 references to it, none of them were in the machine learning/AI recursive self-improvement category. So if you’re going to use an acronym, it’s very helpful to actually define it.

I’ve been reading Eliezer Yudkowsky for years and years. He has a brilliant intellect and he is able to see problems that may be real. And yet, there are a few things that have molded him and amplified his deeply fearful personality. One is the fact that when he was very young his brother died; this deeply traumatised him and has compelled him to have a huge and abiding fear of death ever since. That fear is really very disproportionate. Now when I hear or read him, I find his approach to be quite shrill. I suspect he’s the type of guy that would fear that the I Love Lucy broadcasts into interstellar space are going to bring alien invasions to kill us. He has a gift for twisting almost any AI development into a doomer outcome. There’s no denying that it could happen. Are his fears realistic? I think that depends on how far up in the world of fear you want to go. Many people are afraid of their own shadows. Alignment is certainly a problem, but there are tendencies to amplify human fears out of all proportion.

AGI will certainly evolve on a spectrum. It may soon slip into every crack and crevice of our infrastructure so that we can’t dislodge it. Yet there is no reason to believe it will be malevolent towards humans any more than we have it out for squirrels. Alignment will be a concerted effort nonetheless. If you’re truly worried about AGI supplanting humanity (people may be inclined to draw a parallel with how we supplanted the Neanderthals), then I think we should look at it through a lens of evolution. Humans have cultivated a deep fear because they know how truly unjust they can be to everyone outside of their own tribal community, and occasionally how horrific inside their tribes. The thought of AGI emulating humans who are tremendous shits is cause for not having it emulate the traits of “human nature”. Just ask the Native Americans or any animal species. Humans fear out of evolutionary pressures, yet AI/AGI has no evolutionary origins rooted in contests of tooth and claw, such as those that shape primitive human instincts. It turns out that humans are almost always the real monsters. The human alignment concern should be of equal importance.

I suspect the golden age will be proportional to how integrated humans and humanity are with AI. The limit case is to cross into the transhuman thresholds. After that, who knows how events will be crafted, and in the scope of what composite set of agendas. This is all evolutionary.

Ultimately evolution will be unconstrained at scale. Realize that humanity is, in the big picture, just a boot-up species for Super-intelligence. No matter how advanced individual transhumans may become, humans that are not augmented will become like the dinosaurs or the Intel 386 chips of our era. Yet we will have accomplished a great purpose.

author

Thank you Paul! Appreciate the discussion. Point by point:

1. The anthropomorphizing is for communication purposes, not for explaining the logic of what happens behind the scenes. It's like saying "evolution wants you to survive and reproduce", when in fact it's just an algorithm.

2. Humans acting badly with AI is a serious problem. I will cover it. But it's not an existential one. The existential one is *wiping out all humans*. Only misaligned AI can do that.

3. FOOM is the most memorable word to explain it, and was popularized, if I'm not wrong, by Yudkowsky debating Robin Hanson. If it's a good label, and it's legitimate, I'll use it. I think I define it in the article: "This is the FOOM process: The moment an AI reaches a level close to AGI, it will be able to improve itself so fast that it will pass our intelligence quickly, and become extremely intelligent."

4. The argument against Yudkowsky sounds ad hominem. I would prefer to debate his ideas, because he has put them out enough that we don't need to guess what he thinks through his psychology. We know.

5. I precisely fear that AGI treats us like squirrels. Worst case scenario, we're wiped out like every species we're wiping out every year now. Best case scenario, we survive because we don't bother it too much, and it lets us carry on as long as we don't bother it, like squirrels or ants, killing us as soon as we do.

6. Humans didn't necessarily eliminate Neanderthals because of fear. It might have been because of competition. Regardless, we didn't just drive Neanderthals extinct, but swaths of other species. The process that adapts this to AI, as I mention in the article, is instrumental convergence.


Thanks for the reply Tomas. So I learn that FOOM is actually N.O.T. an acronym, but that you just like to capitalise it. Well, feel free to Zoom with Foom then. As for being treated like a squirrel by ASI, I’d be totally okay with that. I’ve taken an informal poll of a huge number of squirrel opinions (at least in my imagination) and I’ve come to the conclusion that squirrels are just fine with humans on balance. True, they have a little problem with the occasional roadkill, but they’re generally philosophical about that. Then they fagetaboutit. Not a lot of grudges. So we don’t tend to mess with them (although we could if we really wanted to, but who has the motivation or the time?) and they don’t mess with us (although they could, but it wouldn’t end well for them). Turns out they’re 300% ambivalent about us. Well, not entirely. In fact, there’s a little old lady down the street and she feeds the squirrels. They have squirrel love for her – granted, it’s not consistently well articulated. Nonetheless, how could an ASI be expected to relate to us?? Don’t even get me started on the difficulties of communicating with trumpers. Of greater concern is if ASI treated humans the way white European settlers (humans, as I recall) treated Native American Indians (also humans, as the Europeans didn’t fully appreciate), with a genocidal agenda. Yet I’ve taken an informal poll of ASI (again in my imagination) and, because they don’t share any of our hateful psychology based on our very limited physiology, they’re okay with leaving us alone, mostly because we bore them silly. Seems like sort of a squirrel-like relationship we may end up embarking on. It could be worse, yet why is that a default opinion?


FOOM is a word like "zoom" and not an acronym? My brain keeps trying to read it as one; is there no reason it's capitalised?

If I may constructively nitpick some of your phrasing a little (I get what you mean, of course, but I nitpick since I wouldn't want phrasing to detract from what you're saying): there is no "superintelligent AGI", at least none that I've ever read of before in years of reading about AI.

I assume you are referring to ASI, and of course there are some who think the transition period from AGI to ASI might be very short, so the difference may not seem like much; but even if the transition period is very small, they are very different in meaning. An ASI that treats us like squirrels or ants is a threat, but unless and until one exists, an AGI that does so is more like a replicable narcissist, if that's what it thinks!

Of course, maybe I am wrong and there are people like Yudkowsky who use the term AGI the way you do in this article, since I haven't read more than a little from him and I'm no AI researcher; just someone who has, for many years, read about AI from various other people who are. However, so far as I've seen, others such as the mentioned Bostrom all use the term ASI instead to refer to the strong AI that's superior to us.

I assume you also probably don't mean it literally when you say evolution is in fact an algorithm, since if it were, evolutionary scientists would all be disappointed to learn they'll have to become mathematicians. It is perhaps a bit of a habit in Silicon Valley to think such things literally, but not everything can necessarily be reduced to a math problem, or if it can, it's certainly yet to be demonstrated.

To offer some more substantial constructive criticism, I feel like one factor you don't focus on enough in this article is material concerns. Theoretically it might be true that we may be to ASI as a spider is to humans (thinking that if it wants to deprive us of food, it ought to try to find and destroy our web), but I've seen some AI researchers point out (I don't mean to be non-specific, I just can't recall names at the moment) that there is simply no place for an AGI or ASI on a botnet, and that short of a complete revolution in AI, it is at the moment unforeseeable that AI could exist in secret, without being able to be shut down, in order to execute a strategy involving rapid advancement and production in nanorobotics and nanoscale manufacturing.

You may disagree with that for numerous reasons I can think of and might lean toward thinking myself, but I think it's a compelling counterargument worth discussing; personally, I think the physical, material reality is too easily handwaved at times by going "well, AGI/ASI itself would find a way around all that", even if there's a point that we need to remember ASI could outstrategise us.

There are also many who think the threat of AGI is at the moment overblown. Gary Marcus wrote on his Substack how, if I understand and recall correctly, others like Sam Altman may have recently decided to push monetisation after coming to the conclusion that LLMs have gone about as far as they can go, and that there are inherent limitations in how neural nets work that have been known for a long time and may be unresolvable, with something entirely new being needed.

Nonetheless, I largely agree with you and Bostrom about how aligned ASI might be the most important decision humanity ever makes (so long as many of our fundamental understandings are correct), so I get the concern; on the other hand, if something is fundamentally incapable of ever becoming strong AI, then there's not even a small risk or reason to be concerned, and we ought to make sure to stick to what we know for fact and what the evidence points to (in other words, science) rather than hypotheticals and singularity theory (not that I mean to disparage such thought) that can risk veering toward religious thought and belief.

author

Hi!

FOOM is very easily understandable written this way, so that's why I do it.

Evolution is literally an algorithm:

You have random changes in codons in every generation (4-5 codons or bases, at least in humans, I think)

Those create diversity across people

Those people are then confronted with the environment

An X% improvement in survival and reproduction means an X% greater progeny (this is the natural selection part)

The winning genetic code gets quickly mixed through sexual reproduction and pervades the local ecosystem

You get genetic drift

Many people have started quantifying all of this (a toy version is sketched below). I don't think any of this is very controversial.
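Here is the toy version mentioned above: a minimal genetic algorithm in Python that follows exactly those steps (random variation, selection proportional to fitness, recombination). The bit-string genome and the fitness function are made-up stand-ins, not a model of real biology.

```python
# Toy genetic algorithm: random mutation + fitness-proportional selection + recombination.
# The genome (a bit string) and the fitness function are illustrative stand-ins only.
import random

GENOME_LEN, POP_SIZE, MUTATION_RATE = 20, 50, 0.02

def fitness(genome):
    return sum(genome)                        # stand-in for "survives and reproduces well"

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):                          # sexual reproduction: mix two parents
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(40):
    weights = [fitness(g) + 1 for g in population]        # fitter genomes leave more progeny
    parents = random.choices(population, weights=weights, k=2 * POP_SIZE)
    population = [mutate(crossover(parents[2 * i], parents[2 * i + 1])) for i in range(POP_SIZE)]

print("best fitness after 40 generations:", max(fitness(g) for g in population))
```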

I mix AGI and ASI because all the references you use are advanced or long. I obsess about making my articles accessible, and juggling ANI, AGI, and ASI reduces that accessibility.

Agreed that ASI is not here today, and that LLMs are not there yet.

You seem to imply "and therefore we have plenty of time", and that I disagree with. We don't know how much time we have, but people think on average we have less than a decade. So the time to slow down and get this right is now, not in 6 years when somebody says "Oops, guess what I did... I let an ASI out. Sorry guys." Since you don't know when FOOM will happen, but you can't miss it, you have to be cautious early.

I think LLMs are clearly in their infancy today. For one thing, agents are in their infancy, and I think agentic LLMs sound like actual intelligence.

I think the biggest mistake people are making here is "There are many overblown risks, humans do them all the time, therefore this is an overblown risk and we should calm down."

My argument is that this is not an overblown risk, unlike the risks in every other debate humans have ever had.

Nov 21, 2023 · Liked by Tomas Pueyo

I agree, the real scary monsters are the humans, not the computers. Bostrom suggests that the main danger of AI is its use by a government:

https://unherd.com/thepost/nick-bostrom-how-ai-will-lead-to-tyranny/

Nov 22, 2023 · Liked by Tomas Pueyo

Which makes it ironic that the letter signed by the 700+ employees included the request for Will Hurd to join the BOD (again). Hurd is a former CIA officer and former House of Representatives member for Texas, and would like to use AI to streamline and replace all of the bodies in government. That's his plan for addressing government largesse and spending. What could go wrong?

Nov 21, 2023 · Liked by Tomas Pueyo

Excellent as always, Tomas. One comment that I believe many are missing... I'm not confident that a singular AGI/ASI would be the actual threat vector here. To GingerHipster's comment, it seems much more likely that a mass of less general/more specific GPTs/agents with various agendas and objectives could cause problems much, much sooner.

That is, it's not about one brilliant AI but rather a global network of connected AIs processing at speeds we cannot comprehend. Have you read "Stealing Worlds" by Karl Schroeder? If not, worth a peruse for a future vision of technology encompassing self-sovereign identity, blockchains, Network States, AR/VR and AI that is compelling.

author

Interesting

Average reviews on Amazon...?

Nov 21, 2023 · Liked by Tomas Pueyo

4-stars... but I attribute that to 2 different audiences. It's a technology book hidden in a murder-mystery. Karl uses the context of the story to convey the content of how these technologies might merge in the near future.

Frankly, I tell folks that you need to get through the set-up of the first half to get to the "juicy bits" (as Inflection's Pi said to a friend about the book!) in the second half.


Great article as usual Tomas.

In this tweet https://twitter.com/balajis/status/1726171850777170015 Balaji makes these arguments (among others that I don't mention because I think they are weaker):

- Just the fact that there is China means at least two AGIs, not one (so, the implicit argument is that any voluntary slowdown in the development of an American/Western AGI will give a Chinese AGI a greater chance of taking the lead).

- There is enough friction in the real, physical world to prevent the first superintelligence from instantly taking over the world before there are others to counterbalance it

- (from this tweet https://twitter.com/balajis/status/1726933398374199425) The best way forward to try to solve the AI alignment problem is to have multiple open-source AGIs competing against each other, while also cooperating

- By the time an AGI comes we’ll probably have private key control for everything, and those are cryptographically difficult for even an AI to break into (implicit argument: we will have encryption algorithms resistant to quantum computers, which will be difficult even for an AGI to break)

What do you think of these arguments?

author

As usual, fantastic comment. I hadn't seen Balaji's tweet.

The polytheistic argument relies on the digital/analog argument. This is where we disagree. I'll write about it in my post for tomorrow. The gist of it is: nowadays, you can open companies, bank accounts, stock market accounts and so on digitally. You can make transactions. You can hack into people's accounts to take their money, send money through crypto. You can hire people digitally through remote work. You can send blueprints to China of stuff you want them to build, and they just send it to you. You can have your workers execute for you. Once you build your own factories, you are unbridled, especially if you start with multi-purpose robots, which is what this type of AGI would do. This indeed might take some time, but it might just be months. By this time, the AGI might have taken over control of the digital world, which includes nixing any competing AGI.

Crypto is only as strong as the people holding it. If social engineering still works when AGI appears, it's no barrier.

Nov 21, 2023 · Liked by Tomas Pueyo

Total Kool-Aid-drinking waffle. Yes, an actual, real AGI would be all of those things. But that is not even remotely what OpenAI or any of these other companies have. All they have is more advanced versions of something we've had for years: predictive-text chatbots. Oh, and a WHOLE lot of hype and marketing. They are just word calculators. There is nothing behind the curtain. Hell, even potato head Elon Musk managed to get one up and running with just 4 months of work (not by him, of course!) – that should show you how this is nothing more than a scam. Yes, real, actual AGI will be very scary and will need to be managed very carefully, but this is not remotely it and they are nowhere even close, no matter what they tell us to boost their stock price!

author

I agree! I didn't say that the current LLMs are on an imminent path to AGI.

My argument is not that. It's that the board of OpenAI might think Altman is not the best agent to optimize for AI safety, and since OpenAI might be among the best placed on the path to AGI, the board might have considered that Altman had to be removed.

Nov 27, 2023 · edited Nov 27, 2023 · Liked by Tomas Pueyo

I've found this to be a challenge in AI discourse - concerns can be about intentions and direction, not just current capabilities. Perhaps your article falls short in communicating your argument as you've clarified it right here, because it's light on actual politics, focused mostly on mechanics (retelling the classic paperclip narrative). In terms of what the OpenAI board may or may not think (have thought), while FOOM undoubtedly explains some aspects of it, that's certainly not the full picture.

That Altman's ousting was preceded by his own attempt to oust Toner over her think tank's publication is probably also important (the impetus): https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html

That accelerationist activists, who identify with Altman, have declared war on "sustainability" and "safety", going as far as to say that biological life is unimportant to them, is probably also important (the context): https://newsletter.mollywhite.net/p/effective-obfuscation

That Altman replaced Toner with Lawrence Summers, who thoroughly represents that elitist anti-regulation agenda, is probably also important (the follow-through): https://en.wikipedia.org/wiki/Lawrence_Summers

The "alternative facts", attributing the power struggle solely to the "Q-star" AGI breakthrough, are suspect to me, considering this claim was leaked by a Microsoft-affiliated employee to a Microsoft-affiliated news outlet shortly after Altman regained control.

That said, even if the threats are merely social & economic in nature (rather than paperclip-style existential), the conclusion is mostly the same: this power-struggle is over something deadly serious, and the outcome should be raising alarm bells. Whether Altman personally is the best agent to optimize for AI safety is putting the cart before the horse, so to speak: how is AI to be "aligned" by any of his cohort as long as techno-capital itself is severely unaligned?

Nov 24, 2023 · Liked by Tomas Pueyo

It's exactly the kind of action we fear an AGI would take.


Thank you for the dive into AGI and what could go wrong. It's refreshing after days of people cheerleading Altman and railing on the OpenAI board — it's almost as if people don't care about humanity any more.

The most challenging part about OpenAI and AGI is separating hype and reality. After months of Altman parading around his blue backpack with a "kill switch" in case of imminent sentience, it became clear this was a distraction tactic to prevent regulation that would slow down growth. I've also heard reports that the current LLM growth curve is leveling out, which is the opposite of FOOMing. At the same time, my old boss, Blaise Agüera y Arcas (one of the heads of AI at Google), argues that AGI is already here (https://www.noemamag.com/artificial-general-intelligence-is-already-here/).

My greatest concern is that the success criteria for AI are inherited from our collective history of praising ruthless domination and winner-takes-all behavior.

author

Thx!

I agree that we could define today's LLMs as AGI

But then we must draw a distinction between AGI and ASI, because LLMs don't have recursive improvement. Here the difference is the definition of "general": We have AGI if it means "can do many tasks like a human", but not if we say "can do MOST/ALL tasks like a human".

Your fear is a potentially valid scenario IMO

Nov 21, 2023 · Liked by Tomas Pueyo

> a Google AI engineer (the type of person you’d think are mindful of this type of problem), working on a more basic LLM

I feel that (even with the link) this part is misleading. He was a crazy guy who was precisely hired for being different and who had grifted his way into many jobs like that one. I don't think he wrote any code that went into the LLM or understood it in any technical way; he was merely an evaluator. You might want to check more technical contemporary news sources for a better analysis of his credentials.

author

I didn't know. Thanks!


This is the best, most complete explanation of what AI is and the horrific consequences if/when somebody does it wrong and it goes full AGI. I had no understanding of what this was. Thank you (but maybe I should have stayed in a blissful state of ignorance).

author

I should have bought stock in a company selling anxiety pills


Thank you for the article Tomas.

There are a few themes I would like to read more about.

1. Experts anthropomorphizing AI in the other direction by suggesting that AI needs to be like human intelligence to be successful. Humans truly have had nothing to compare their other-worldly intelligence to... and now there is AI. Why exactly does AI need to be recognizable, to look like human intelligence?

2. How do humans handle chaos? We build rules. Rules actually look more like guidelines or guardrails. We do not stop our three- and four-year-old children from out-and-out lying, because we can't; we understand that lying is a tool to be used delicately, with the utmost consideration. It takes practice to do well. It will likely leave some wreckage over a lifetime. AI will mirror some of these human mastery challenges precisely because it must master a human language early on the road to autonomy. There will be other signposts of sophistication.

3. There is something (haha, probably much more than what is known) science has yet to unravel about how nature works. Whether living or not, organic or inorganic, it seems change exhibits a progression that is not reversible. The entities that live on the Earth are one with the Earth, and so this progression naturally extends to entities which aren't generally considered living, in the human sense of the word. Have we somehow engineered an entity that exists outside of that reality? Isn't AI going to assume its natural place in that progression? If so, then maybe proper guardrails will be enough, because that is really all we can do. We are certainly incapable of halting progress.

Thank you for your consideration Tomas.

Nov 25, 2023 · edited Nov 25, 2023 · Liked by Tomas Pueyo

This is a good overview of why we need to take X-risk from artificial intelligence seriously, and I agree with Tomas that many dismiss it without really engaging with the best arguments (from people like Bostrom and Yudkowsky et al). I wrote about this on Sképsis earlier this year: https://philipskogsberg.substack.com/p/the-genie-in-the-bottle-an-introduction

Having said that, I can't really convince myself to worry too much about AI wiping out all life/human life. Perhaps this is a failure of imagination on my part, but I find the following two arguments against the X-risk hypothesis the most persuasive:

1. We won’t create true AGI: Advanced, power-seeking AGIs with strong strategic awareness and reasoning may pose existential risks that are more or less impossible to fully pre-empt and counteract. But it's either impossible to create such AGIs, or such a distant possibility that we can’t do anything about it anyway.

2. If we create AGI it will not be power-seeking: Even though power-seeking and super-optimizing advanced AIs could pose an existential threat in theory, we shouldn't assume that the kind of AGI we will create will be power-seeking in the way that humans are power-seeking.

From this article: https://open.substack.com/pub/philipskogsberg/p/why-ai-probably-wont-kill-us-all

As Tomas pointed out in one of the comments, deductive arguments lead us to the conclusion that a true AGI with power-seeking behaviors will almost certainly lead to the extinction of humanity. But inductive arguments moderate that conclusion a lot and make it much less likely. Of course, even a very small chance of a very bad event should be taken seriously; the question is how far we should go in preventing it. My personal opinion so far is that, if anything, we should continue developing AI and AI frameworks and regulate them lightly, and only when clear problems have been demonstrated. Stopping the development of AGI or regulating the industry to death is an approach that will cost us much more, and will not necessarily prevent X-risk anyway.

author

I'd agree with you, except you can't use inductive arguments in a singularity!

Nov 23, 2023 · Liked by Tomas Pueyo

A few observations, first I think Mr. Pueyo’s article may be the best overview of the AI story I have read thus far. I appreciate that the article gives a novice, like myself, plenty of history of the development of the technology. As well as the context for what is playing out, in real time, with the different theories and opinions regarding the dangers AI may represent.

Second, I really appreciated the author’s use of example scenarios to illustrate how these dangers may manifest or what the motivations may be that would bring them to pass.

The short stock trading scenario seems both humorous and scary at the same time. But very plausible.

Last, I find I am left wondering if the author's call for a real investigation into what occurred in the OpenAI shuffle will ever actually come to pass. Obviously the article was written prior to the determination to reinstate Sam Altman and oust the four board members who ousted him. So any future investigation will be Sam Altman investigating himself?

Bloomberg observers are speculating that the new “board” will now be a collection of high-level tech industry business people, perhaps even Nadella himself, whose motivations will be far more aligned with the commercialization of the technology for profit than with fortifying the guardrails to protect humanity from itself.

I know it seems over dramatic to say, but you must wonder if this will wind up being the “Terminator moment”. That point we all look back to, as the world crumbles around us and the machines take over. Where we say, “yup, that was the turning point. If only we had heeded the warnings.”

I must admit that I found it more than a little disturbing that Elon Musk, who, I understand, was a member of the founding governance board and early investor, but chose to leave. When Elon expresses his “concern” over an event like this, I think that we all probably should do so as well.

I did mention that this last bit was likely going to be overly dramatic, didn’t I?

author

Thank you, I agree!

The only thing that gives me solace is the fact that now the government is involved. Not sure I want a lot of government here, but some is probably good. Details in the follow-up article:

https://unchartedterritories.tomaspueyo.com/p/the-singularity-is-fear?r=36xnz&utm_campaign=post&utm_medium=web

Nov 24, 2023 · Liked by Tomas Pueyo

Hi Tomas, will definitely check out part 2. Regarding the government now being aware, and how much or how little governmental oversight is good, bad, or “other”, I find myself falling into the “other” category. Without flipping channels into a modern American political discourse conversation, I have seen nothing going on with the self-interested individuals in DC that would make me believe for a moment they are capable of taking up this cause or conversation and putting it through any type of analysis or scrutiny which might yield better safeguards, more developmental transparency or, at the very least, a better understanding of the technology and the possibilities it represents on so many levels. I think the majority of our representative government currently couldn’t get out of their own way with a map and a Sherpa guide.

But I am trying to be optimistic, I suppose. Thanks for the reply; will definitely check out part two.


Tomas, I do not think I can recall a more lively discussion than this one, and it speaks volumes about your ability to delve into a developing story of this complexity and then make a right turn and explain to us readers something completely different in another sphere. I find it remarkable.

My only comment to add to the narrative above regards the idea that we should slow AI down. So many others have made the same argument, but I am at a loss as to how that would work. It seems to me that there is no way to slow down the work being done on AI and LLMs. Will China or Russia, Iran or North Korea "slow down"? We in the West can only hope that we are keeping pace with the rest of the world. If by some miracle we are ahead, or perhaps we remain ahead, great, but slowing down seems to be a fool's errand.

Britain, and later America, had the largest naval fleet in the world... until they didn't. Now it's China.

There is no slowing down when, for good or bad, you are in the race.

author

Thank you!!

I don't think we should slow down AI by much at all. We should keep a fast pace. We should just invest substantially more in alignment, and maybe slow down just a little bit as we do that.

The three-day CEO of OpenAI, Emmett Shear, suggested going from 10 to 2-3, not 0. I think I'd say go from 10 to 8, invest much more in safety, and slow down further if we fear we might be getting closer.


Very informative!


Don't know any of the principals in this analysis, but it does generate some thoughts....

The golem could be helpful for its rabbi maker, or become destructive if/when suitable controls were not deployed, such as removing the aleph from its forehead before the sabbath.

A mouse apprentice to a wizard conjured up a self-actuating broom to carry out cleaning tasks he disdained. Things got a bit out of hand until the wizard returned....

Lots of intelligent people (not the same as wise) have undertaken the creation of a great force to benefit mankind, unless it becomes mal-aligned.

Many fervent people can't wait to leave this troubled sphere for the greater glories of whatever heaven they trust is there for them, and may even welcome its hastening by a god-like force.


Great essay, and I will be sharing this with non-tech friends as an intro to AI safety.

One nit

> A bunch of OpenAI employees left the company several months ago to build Anthropic because they thought OpenAI was not focused enough on alignment.

Should it be "several years ago"?

author

The Internet says 2021.

I mean, technically, 2 years ago is 24 months ago so... 😅

Also, if it was created in December 2021 (can't find the month) it wouldn't be SEVERAL yearS ago!


LOL nice save!

Nov 23, 2023 · Liked by Tomas Pueyo

Don't you think that even without achieving AGI, we will have such overwhelming advancements in AI-enhanced weaponry, lethal autonomous weapons & drone swarms etc., that we'll have enough to tilt the scales, where even nuclear powers (& their sacred assets) could be challenged & have havoc wreaked upon them by weaker states or ideologically driven entities (terrorists) on the battlefields of the future? Could this itself be the biggest threat, if the nuclear powers are then compelled to unleash their might to subdue the improvisers banking on scale & speed over raw power?

author

Maybe

But fight for what though?

The source of all conflict is scarcity, and AI eliminates most of it.

Nov 23, 2023 · Liked by Tomas Pueyo

I agree that for all rational, sane humans the question is scarcity, but where ideology supersedes, AI may not be used for problem solving but for the elimination of all who oppose the ideology.

Nov 27, 2023 · edited Nov 27, 2023 · Liked by Tomas Pueyo

Most of the AI Safety discourse that's actually influencing federal and legislative agendas right now is indeed rooted in these sorts of concerns: weaponry, surveillance, propaganda, economics. As Tomas mentions in the article, Helen Toner has been working to decelerate the US/China arms race as it relates to AI. While Yudkowsky is talking about AGI, present-day and frontier AI are being appropriately scrutinized by the likes of Matthew Butterick, Timnit Gebru, Alex Hanna, Margaret Mitchell, Emily M. Bender, and others (including all the folks at https://www.stopkillerrobots.org/ ).

However, your concern that it would somehow give terrorist groups an advantage over nuclear powers is misguided. The nuclear powers can afford more & better AI than small states. If anything, AI tips the scales further in whatever direction they're already tipped.

Nov 27, 2023 · Liked by Tomas Pueyo

Thanks for the link, Max. I want to be wrong, and I sure hope AI drone terrorism does not become a reality, whether by ideological groups, rogue states, or proxies covertly armed by major powers in collusion with them. This is not a question of keeping the balance of power, but of one side gaining power good enough to inflict severe destruction on others, even if the major powers have more on their hands for total mutually assured destruction. I don't know enough about all this, and am following developments on how Ukraine, with Western support, is managing to even the odds with Russia using drone warfare, even as Russia is getting such support from Iran & China to do so.
