Anthropomorphising AI is a big fail. Once you use that lens, your perception of machine intelligence is skewed by human values that are often entirely misguided. Much of the alignment problem is actually in the category of humans acting badly. No decisions should be made on emotional underpinnings. If there is one thing we should know as students of history, it's that coarse, visceral human emotions, badly channelled, are a horrible guide in the aggregate.
I realise that you’re using the acronym FOOM to describe recursive self-improvement. Having researched it, I don’t think that’s a widely used acronym in the AI realm. I did find 28 references to it, but none of them were in the machine learning/AI recursive self-improvement category. So if you’re going to use an acronym, it’s very helpful to actually define it.
I’ve been reading Eliezer Yudkowsky for years and years. He has a brilliant intellect and he is able to see problems that may be real. And yet, there are a few things that have molded his personality and amplified his deep fearfulness. One is that his brother died when he was very young; that loss deeply traumatised him and has left him with a huge and abiding fear of death ever since. That fear is really quite disproportionate. Now when I hear or read him, I find his approach to be quite shrill. I suspect he’s the type of guy who would fear that the I Love Lucy broadcasts into interstellar space are going to bring alien invasions to kill us. He has a gift for twisting almost any AI development into a doomer outcome. There’s no denying that such an outcome could happen. Are his fears realistic? I think that depends on how far up the ladder of fear you want to go. Many people are afraid of their own shadows. Alignment is certainly a problem, but there is a tendency to amplify human fears out of all proportion.
AGI will certainly evolve on a spectrum. It may soon slip into every crack and crevice of our infrastructure so that we can’t dislodge it. Yet there is no reason to believe it will be malevolent towards humans any more than we have it out for squirrels. Alignment will need to be a concerted effort nonetheless. If you’re truly worried about AGI supplanting humanity (people may be inclined to draw a parallel with how we supplanted the Neanderthals), then I think we should look at it through the lens of evolution. Humans have cultivated a deep fear because they know how truly unjust they can be to everyone outside their own tribal community, and occasionally how horrific they can be inside their tribes. The thought of AGI emulating humans who are tremendous shits is a reason not to have it emulate the traits of “human nature”. Just ask the Native Americans, or any other animal species. Humans fear because of evolutionary pressures, yet AI/AGI has no evolutionary origins shaped by contests of tooth and claw, the kind that shaped primitive human instincts. It turns out that humans are almost always the real monsters. The human alignment concern should be of equal importance.
I suspect the golden age will be proportional to how integrated humanity is with AI. The limit case is crossing the transhuman threshold. After that, who knows how events will be crafted, and in the scope of what composite set of agendas. This is all evolutionary.
Ultimately, evolution will be unconstrained at scale. Realise that humanity is, in the big picture, just a boot-up species for superintelligence. No matter how advanced individual transhumans may become, humans who are not augmented will become like the dinosaurs, or the Intel 386 chips of our era. Yet we will have accomplished a great purpose.
Thank you, Paul! I appreciate the discussion. Point by point:
1. The anthropomorphizing is for communication purposes, not for explaining the logic of what happens behind the scenes. It's like saying "evolution wants you to survive and reproduce", when in fact it's just an algorithm.
2. Humans acting badly with AI is a serious problem. I will cover it. But it's not an existential one. The existential one is *wiping out all humans*. Only misaligned AI can do that.
3. FOOM is the most memorable word to explain it, and it was popularized, if I'm not wrong, by Yudkowsky debating Robin Hanson. If it's a good label and it's legitimate, I'll use it. I think I define it in the article: "This is the FOOM process: The moment an AI reaches a level close to AGI, it will be able to improve itself so fast that it will pass our intelligence quickly, and become extremely intelligent." (See the toy sketch after this list for the compounding intuition.)
4. The argument against Yudkowsky sounds ad hominem. I would prefer to debate his ideas, because he has put them out enough that we don't need to guess what he thinks through his psychology. We know.
5. I precisely fear that AGI treats us like squirrels. Worst case scenario, we're wiped out like every species we're wiping out every year now. Best case scenario, we survive because we don't bother it too much: it lets us carry on as long as we stay out of its way, like squirrels or ants, and kills us as soon as we become a bother.
6. Humans didn't necessarily eliminate Neanderthals because of fear. It might have been because of competition. Regardless, we didn't just drive Neanderthals extinct, but swaths of other species. The concept that adapts this to AI, as I mention in the article, is instrumental convergence.
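To make the compounding intuition behind FOOM concrete, here is a toy simulation. It is purely illustrative: the 10% gain per cycle and the "human level" threshold are made-up parameters, not claims about any real system; the point is only the shape of the curve when each improvement makes the next one more effective.

```python
# Toy sketch of recursive self-improvement ("FOOM").
# All numbers are invented for illustration; this is not a prediction
# about real AI systems, just the shape of compounding growth.

def simulate_foom(capability=1.0, human_level=100.0,
                  gain_per_cycle=0.10, max_cycles=200):
    """Each cycle, the system improves itself in proportion to its
    current capability, so the gains compound."""
    for cycle in range(1, max_cycles + 1):
        capability *= 1 + gain_per_cycle  # proportional self-improvement
        if capability >= human_level:
            return cycle
    return None

cycles_to_cross = simulate_foom()
print(f"Crosses the 'human level' threshold after {cycles_to_cross} improvement cycles")
# With a 10% gain per cycle, capability grows like 1.1**n:
# barely noticeable at first, then past any fixed threshold within ~50 cycles.
```

The exact numbers don't matter; the argument only needs the qualitative shape: proportional improvement looks slow right up until it isn't.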
Thanks for the reply, Tomas. So I learn that FOOM is actually N.O.T. an acronym, but that you just like to capitalise it. Well, feel free to Zoom with Foom then. As for being treated like a squirrel by ASI, I’d be totally okay with that. I’ve taken an informal poll of a huge number of squirrel opinions (at least in my imagination) and I’ve come to the conclusion that squirrels are just fine with humans on balance. True, they have a little problem with the occasional roadkill, but they’re generally philosophical about that. Then they fuggedaboutit. Not a lot of grudges. So we don’t tend to mess with them (although we could if we really wanted to, but who has the motivation or the time?) and they don’t mess with us (although they could, but it wouldn’t end well for them). Turns out they’re 300% ambivalent about us. Well, not entirely. In fact, there’s a little old lady down the street who feeds the squirrels. They have squirrel love for her, granted it’s not consistently well articulated. Nonetheless, how could an ASI be expected to relate to us? Don’t even get me started on the difficulties of communicating with Trumpers. Of greater concern is if ASI treated humans the way white European settlers (humans, as I recall) treated Native Americans (also humans, as the Europeans didn’t fully appreciate), with a genocidal agenda. Yet I’ve taken an informal poll of ASIs (again, in my imagination) and, because they don’t share any of our hateful psychology rooted in our very limited physiology, they’re okay with leaving us alone, mostly because we bore them silly. Seems like sort of a squirrel-like relationship we may end up embarking on. It could be worse, yet why is the worse outcome the default opinion?
FOOM is a word like "zoom" and not an acronym? My brain keeps trying to read it as one; is there no reason it's capitalised?
If I may constructively nitpick some of your phrasing a little (I get what you mean, of course, but I wouldn't want phrasing to detract from what you're saying): there is no "superintelligent AGI", at least not that I've ever read of in years of reading about AI.
I assume you are referring to ASI. Of course, some think the transition period from AGI to ASI might be very short, so the difference may not seem like much, but even if the transition is very quick, the terms are very different in meaning. An ASI that treats us like squirrels or ants is a threat, but until it becomes one, an AGI that does so is more like a replicable narcissist, if that's what it thinks!
Of course, maybe I am wrong and there are people like Yudkowsky who use the term AGI the way you do in this article, since I haven't read more than a little from him and I'm no AI researcher; I'm just someone who has, for many years, read about AI from various people who are. However, so far as I've seen, others such as the mentioned Bostrom all use the term ASI instead to refer to the strong AI that's superior to us.
I assume you also probably don't mean it literally when you say evolution is in fact an algorithm, since if it were, evolutionary scientists would all be disappointed to learn they'll have to become mathematicians. It is perhaps a bit of a habit in Silicon Valley to take such things literally, but not everything can necessarily be reduced to a math problem, or if it can, it's certainly yet to be demonstrated.
To offer some more substantial constructive criticism, I feel one factor you don't focus on enough in this article is material concerns. Theoretically it might be true that we may be to ASI as a spider is to humans (if it wants to deprive us of food, it ought to try to find and destroy our web), but I've seen some AI researchers point out (I don't mean to be non-specific, I just can't recall names at the moment) that there is simply no place for an AGI or ASI on a botnet, and that, short of a complete revolution in AI, it is at the moment unforeseeable that an AI could exist in secret, beyond our ability to shut it down, while executing a strategy of rapid advancement and production in nanorobotics and nanoscale manufacturing.
You may disagree with that for numerous reasons I can think of and might lean toward thinking myself, but I think it's a compelling counterargument worth discussing. Personally, I think the physical, material reality is too easily handwaved away at times by saying "well, AGI/ASI itself would find a way around all that", even granting the fair point that we need to remember an ASI could outstrategise us.
There are also many who think the threat of AGI is at the moment overblown. Gary Marcus wrote on his Substack how, if I understand and recall correctly, people like Sam Altman may have recently decided to push monetisation after concluding that LLMs have gone about as far as they can go, and that there are inherent limitations in how neural nets work that have been known for a long time and may be unresolvable without something entirely new.
Nonetheless, I largely agree with you and Bostrom that aligned ASI might be the most important decision humanity ever makes (so long as many of our fundamental understandings are correct), so I get the concern. On the other hand, if something is fundamentally incapable of ever becoming strong AI, then there's not even a small risk or reason to be concerned, and we ought to stick to what we know for fact and what the evidence points to (in other words, science) rather than hypotheticals and singularity theory (not that I mean to disparage such thought), which can risk veering toward religious belief.
Hi!
FOOM is very easily understandable written this way, so that's why I do it.
Evolution is literally an algorithm:
You have random changes in codons every generation (4-5 codons or bases, at least in humans, I think)
Those changes create diversity across people
Those people are then confronted with the environment
An X% improvement in survival and reproduction means X% greater progeny (this is the natural selection part)
The winning genetic code gets quickly mixed through sexual reproduction and pervades the local ecosystem
You get genetic drift
Many people have started quantifying all of this. I don't think any of it is very controversial. (A toy sketch of this loop is below.)
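As an illustration of the "evolution is an algorithm" claim, here is a deliberately minimal genetic-algorithm sketch of the loop above (random variation, selection weighted by fitness, recombination). The bit-string genome and the count-the-ones fitness function are stand-ins chosen for brevity, not a model of real genetics.

```python
import random

# Minimal sketch of the evolutionary loop described above:
# random variation -> selection by fitness -> recombination -> repeat.
# The "genome" is a bit string and fitness is just the number of 1s;
# these are illustrative stand-ins, not a model of real biology.

GENOME_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.01

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    return sum(genome)  # stand-in for survival/reproduction success

def mutate(genome):
    # Random changes each generation (the "random changes in codons" step)
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Sexual reproduction mixes the winning genetic code
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(100):
    # Selection: fitter genomes are proportionally more likely to reproduce
    weights = [fitness(g) + 1 for g in population]
    parents = random.choices(population, weights=weights, k=2 * POP_SIZE)
    population = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
                  for i in range(POP_SIZE)]

print("Best fitness after 100 generations:", max(fitness(g) for g in population))
```

Run it and the best fitness climbs toward the maximum over the generations, which is the whole point of the analogy: variation plus differential reproduction produces adaptation with no intent anywhere in the loop.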
I mix AGI and ASI because all the references you use are advanced or long. I obsess about making my articles accessible, and distinguishing between ANI, AGI, and ASI reduces that accessibility.
Agreed that ASI is not here today, and that LLMs are not there yet.
You seem to imply "and therefore we have plenty of time", and with that I disagree. We don't know how much time we have, but people think on average we have less than a decade. So the time to slow down and get this right is now, not in 6 years when somebody says "Oops, guess what I did... I let an ASI out. Sorry guys." Since you don't know when FOOM will happen, but you can't afford to miss it, you have to be cautious early.
I think LLMs are clearly in their infancy today, for one thing because agents are in their infancy, and I think agentic LLMs sound like actual intelligence.
I think the biggest mistake people are making here is: "There are many overblown risks, humans overblow risks all the time, therefore this is an overblown risk and we should calm down."
My argument is that this is not an overblown risk, unlike every other debate every human has ever had.
I agree, the real scary monsters are the humans, not the computers. Bostrom suggests that the main danger of AI is its use by a government:
https://unherd.com/thepost/nick-bostrom-how-ai-will-lead-to-tyranny/
Which makes it ironic that the 700+ employees who signed the letter included a request for Will Hurd to join the board of directors (again). Hurd is former CIA, a former member of the House of Representatives from Texas, and would like to use AI to streamline and replace all of the bodies in government. That's his plan for addressing government largesse and spending. What could go wrong?