The 'test loss' series of graphs says that more data is needed to make fewer mistakes. Today's models are basically trained on the whole internet, so any further data will be marginal (lower-quality). Generally speaking, we have reached the limit of high-quality data to feed the bots, and a large part of the internet right now is already LLM-generated. So we might run out of high-quality data before we get there. There can even be deliberate 'infection' of training data to 'propagandize' LLMs - (https://thebulletin.org/2025/03/russian-networks-flood-the-internet-with-propaganda-aiming-to-corrupt-ai-chatbots/).
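For reference, those test-loss curves are usually fit with a Chinchilla-style power law in model size N and dataset size D. The form below is my own sketch of it, with the approximate fitted exponents reported by Hoffmann et al. (2022), so treat the numbers as indicative rather than exact:

```latex
% Chinchilla-style loss fit: irreducible loss + model-size term + data term
L(N, D) \;\approx\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}},
\qquad \alpha \approx 0.34,\quad \beta \approx 0.28
% The B / D^{\beta} term is the data bottleneck: cutting it in half takes
% roughly 2^{1/\beta} \approx 12x more tokens, which is why "more data,
% fewer mistakes" gets expensive fast once the easy tokens are used up.
```

Which is just the formal version of the point above: the curves quietly assume you can keep finding more clean tokens.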
The only other source of real data is reality itself, i.e. real-world data captured by robots in real-world environments, but collecting that at scale will probably take closer to a decade.
Exponential increases are exponential until they are not... if there is an upper bound on AI intelligence, the result is an S-curve. I'm not claiming that there necessarily IS an upper bound, or where it might be - I don't think humanity understands intelligence well enough to say. I AM saying that all these hyperscalers seem to be assuming there is no such upper bound (or at least they are presenting it that way to investors).
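To make the S-curve point concrete, here's a toy sketch (numbers invented purely for illustration, not a claim about where any ceiling actually is): an exponential and a logistic curve with the same early growth rate are nearly indistinguishable at first, and only diverge once the ceiling starts to bite.

```python
import numpy as np

t = np.linspace(0, 10, 11)      # arbitrary axis: time, compute, data...
r = 0.8                         # shared early growth rate
exponential = np.exp(r * t)     # no upper bound: keeps compounding forever

ceiling = 60.0                  # hypothetical upper bound on "intelligence"
# Logistic (S-curve) starting at 1 with the same initial growth rate r:
s_curve = ceiling / (1 + (ceiling - 1) * np.exp(-r * t))

for ti, e, s in zip(t, exponential, s_curve):
    # Early points look almost identical; the curves only separate near the ceiling.
    print(f"t={ti:4.1f}  exponential={e:9.1f}  s-curve={s:5.1f}")
```

The uncomfortable part is that, from inside the early data, you can't tell which curve you're on.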
Data is one of the potential limiting factors. It’s not the only one. Electricity is arguably a bigger one. I’ll tackle them in other articles.
There are at least two options to eliminate this data limit that I can think of off the top of my head. One is synthetic data. The other is better training. The intuition for them is easy: human experts don’t read the entire internet to develop expertise.
Good and interesting points. Re your comment on our not understanding 'intelligence': one lacuna that seems near-universal among AI researchers is the failure to recognize that our intelligence is biological, that our brains are not in a glass jar in a lab, and that the metaphor of AI being like a brain's intelligence is therefore severely limited... Our brains are an integral part of a bodywide system, and they arise out of the evolution of a creature with needs, desires, and emotions. So the neurochemical system of which the brain is a part can't be thought of as separate (either as a metaphor for computers or in itself) from that body in its environment without hugely distorting and misunderstanding it. Obvious, I guess, but it seems often forgotten.
"Our brains are an integral part of a bodywide system..." brings to mind Rodney Brooks' 1990 paper "Elephants Don't Play Chess". He went on to bring us the Roomba, without a football field of GPUs guiding it.
Thanks, as always.
"When we reach that point, AIs will start taking over full jobs, accelerate the economy, create abundance where there was scarcity, and change society as we know it."
Whenever I hear about how 'we' are going to have this or that benefit for 'us', I think of that joke about the Lone Ranger and Tonto. They're surrounded by ten thousand Lakota and Cheyenne warriors. The Lone Ranger says: "Well, Tonto, it looks like we've reached the end of the trail. We're going to die." Tonto says: "Who's this 'we', white man?"
So who is the 'we'? Who gets the abundance? Who benefits from that changed society? Certainly, the owners of the systems will -- even those of us who own a small piece of those companies. But those who don't? What happens to them when their jobs disappear? Perhaps it's like horse jobs turning into car jobs 100 years ago -- but what if it's not?
Some jobs will be transformed. Others will be eliminated. AI has the potential to give us all a life of ease (a la Star Trek, where all basic needs are met and money is unnecessary, freeing us to pursue exploration, art, or whatever). But there is no sure pathway to that outcome. It could be very dark indeed. It's definitely not the equivalent of carriage makers becoming car makers. This is a difference of kind, not degree.
Technically we already have the potential to provide more than enough food for everyone on Earth, yet hunger still exists. The problem is not that AI's potential is insufficient; the problem is that it's very unclear how all that potential would benefit everyone, rather than just a tiny group of super-rich robot factory owners.
That’s a very good observation. I’ll ponder it.
How did The United Federation of Planets handle it? That should be our guide.
The owners don’t get rich if nobody can pay for their products.
The way to think about it is: who gets a claim on scarce resources? Because that’s what money is.
The answer to this question changes dramatically when the scarcity of resources shrinks.
There are two interesting mechanisms here, and I have trouble figuring out how they interact. One is how much AI owners need other humans to buy their cheap robot goods; the other is how cheap AI goods affect people's purchasing power.
In the past, the way the benefits of technological progress got distributed depended on people being able to make some income and then spend it on goods that were now cheaper thanks to the new tech. We already had the problem that even a 90% price drop isn't much help when you lose 100% of your income (e.g. because your job is automated away), but this only affected a small percentage of the population. We have workarounds (unemployment benefits), but in their current form I don't think they could handle even 20% of the population being affected.
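A toy version of that arithmetic (numbers made up purely to illustrate the asymmetry):

```python
# Before automation: some income, goods at full price.
income_before, price_before = 40_000, 100.0
units_before = income_before / price_before      # 400 units of "stuff"

# After automation: goods are 90% cheaper, but this worker's income is gone.
price_after = price_before * 0.10
income_after = 0
units_after = income_after / price_after         # still 0 units

print(units_before, units_after)                 # 400.0 vs 0.0
```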
The availability of cheap, AI-provided goods and services can actively _hurt_ some people, those whose income depended on the old ways of production. If this scales up to most of the economy, then we have this weird paradox, where it's exactly the availability of cheap goods that makes everyone poor, as now nobody has anything to sell (the robots do it all better and cheaper).
(We can't even say that the 99% who don't own robot factories will just replicate the pre-AI economy. We couldn't replicate it, because whoever found a bit of money would spend it on cheaper robot goods. So still no one would buy human-made stuff, therefore no one could sell, so no one would have income. It's funny, because such an economy _could_ work -- we see it working today -- but the wide availability of cheap robot goods makes it dysfunctional. To some degree I think this is already happening.)
It all seems to point to UBI. Except that the only way to fund UBI would be high taxes on AI owners, and they wouldn't like the idea, and they are pretty good at getting politicians to do what they want. In general, for AI automation to be universally helpful, we'd need to rebuild our distribution systems pretty much from the ground up. I think _designing_ a system that would work isn't all that hard, but introducing it in practice... I don't see who would be able to organize that transition, and quickly enough, against the billionaires' will.
And then about AI owners needing someone to sell their products to...
In the past even the richest couldn't produce everything in-house. They needed to buy some stuff (products as well as labour), and they needed to sell stuff in order to be able to do so. Once they have robots that build everything for them and do all the work for them... not so much.
Even if they don't aim to do so, their robot factories can suck up all the purchasing power the rest of us have left. But once we're dry, they'd still be fine -- the robots can now give them everything they need, practically for free.
If you have enough robots, you don't even really need money; you don't need to be able to sell anything. If you don't have enough robots, _then_ you'd need money, then you'd need to be able to sell something -- but you're in competition with robots, so why would anyone buy from you?
I'm afraid that something like this is the worst-case scenario: AI owners doing fine even in a broken economy, so they don't need it fixed, but they still flood the market with cheap robot goods, preventing a parallel economy from working (and, practically, also preventing others from rising into the ranks of AI owners). Not even because they're evil -- it's just the default way to keep the AI factories running. Maybe they'd even think that offering cheap goods is helping the world.
I have been thinking along similar lines, but I would like to point out that goods and services produced by robots are unlikely to be cheap. Goods require raw materials that often need to be imported from other parts of the world. You need to 'feed' the robots with power and keep them maintained. In an increasingly energy-hungry world, that power is unlikely to be cheap.
What we have an abundance of in the world right now is humans. Human labour may well end up cheaper than robot labour for one-off or complex tasks. An everyday task like folding washing takes a robot much longer than a human, if it can do it at all.
I think a UBI will be necessary in developed countries in the future and, yes, the billionaires will resist it. I see a lot of similarities today to the situation before the French Revolution. A country in debt and a wealthy elite partying while the people starved. The billionaires will probably manage to keep their heads. They can fly off in their private jets to their secret bunkers, but their influence will disappear.
Are new clothes from the factory not currently folded by machines? My impression is they're mostly still sewn by humans (operating sewing machines), but folding is easier than sewing.
I used to think so, too, especially sheets, until I saw footage of a robot attempting to fold a handkerchief. It was painfully slow. Folding a fitted sheet is one of the most complex tasks undertaken by humans and as far as I know, it is always done by humans. Same with ironing.
"The owners don't get rich if nobody can pay for their products." After the Second World War, there was the greatest expansion of wealth in history (until China's boom), driven by middle class consumption -- it was great for capitalism, but it wasn't designed by capitalists. If it hadn't been for strong unions, capitalists would have had much fewer sales because their workers wouldn't have had the income. But the capitalists fought it all the way. So it makes logical sense that the benefits be distributed. But I don't see the lords of digi-capital's being inclined to spread that wealth without being pushed. Elon is on track to a trillion.
Good question.
Your articles on geography often bring up interesting, highly informative observations. This one strikes me more as vapid boosterism. I’ve been reading about LLMs for a few years now and I’ve yet to read a compelling reason why anything genuinely transformative will come out of these systems, once the mania has cooled. Consider:
1. LLMs (and AI more generally) don’t come with any clear social accountability mechanisms. If you screw up at your job, you can be fired or even prosecuted. OpenAI doesn’t want any fiduciary or criminal exposure, despite marketing which implies its products will be taking over jobs or corporate functions that carry these liabilities in one form or another. They’re already defending cases of people who committed suicide and were in some way prompted by ChatGPT, and their legal theory will be that they bear no responsibility for the babbling of an insentient robot when prompted by a sick individual. The persistence of hallucinations means that chatbots are pretty useless for any result that requires trust.
2. Every time I hear someone describe something they’ve done with ChatGPT, it is an activity that has low marginal value or it helps to create a product that has low marginal value (“slop”). You could make the argument that workers often have to do stuff that has low marginal value in their jobs, so this will increase their productivity, but I’ve yet to read anything that actually demonstrates that this is happening at a broad scale across the economy. Anecdotally, I work in the construction industry, and I can’t think of anything important in my job where a ChatGPT output would make me more productive.
3. Most jobs aren’t “knowledge” jobs where the main job function is to produce ChatGPT-like outputs to open-ended questions. If you’ve only ever worked as an Academic and now make your money writing articles on Substack, it might be hard to understand this, but most of the economy isn’t structured, functionally, like a ChatGPT prompt.
4. Machine learning applications that are tailored for very specific functional tasks are not new and they usually don’t require the insane amount of specialized, centralized data centers that are being devoted to LLMs.
5. There is SO MUCH MONEY going into this, and I don’t see how they will ever turn a profit. It’s not just the capex: these data centers consume a huge amount of electricity just to operate, and, from what I’ve read, the computers degrade and fail at a rapid clip relative to the scale of the initial investment (5-10 years). If these companies can’t make any money, either they are going to drag down the profitability of the tech giants (they’re not generating more ad revenue), or they’re going to go bust.
6. I see a lot of potential for fascist governments to use LLMs to surveil, impoverish and brutalize their citizens. Fascists ultimately don’t *want* things to work well for normal folks, so the hallucination problem may not be a problem at all. The best bet for OpenAI financially would be to ally itself with a MAGA government who has successfully turned the US into Russia or North Korea. I’m not particularly keen on a technology whose ideal partner institutions belong to fascist dictatorships.
But you would admit that the heads of these companies have an incentive to be optimistic about how soon this will happen, right? And that Musk has tweeted presumably thousands of false things at this point.
If this is true, why does Meta keep offering multi-million dollar salaries to poach AI researchers? Surely they must know they will be redundant before their employment contracts expire?
Also, why does Elon Musk demand a trillion dollars to meet targets which he claims can be met within about two years by AGI/superintelligence? Why aren’t his investors demanding that he fire himself to meet his fiduciary obligations to them?
Sorry – this just looks like incoherent nonsense to me.
Sometimes I think that if an alien watched a child growing up, it would think: "Look how quickly he is learning the whole of human knowledge now that he is 16; by the time he is 30 his capabilities will have grown exponentially and he will be more intelligent than the whole of humankind combined."
All I know is that the utility of LLMs really tends to break down whenever I apply one to a task that’s on the edge of its training distribution.
For coding and debugging, it’s excellent at common tasks. But I’m doing research, so I’m trying to do things for which there are not many examples out there on the internet.
Here it reliably breaks down. I run into some problem and it tends to go through this mode where it hallucinates pseudo-solutions, claiming to have the answer each time the last attempt is proven to be incorrect.
These models are excellent and exhibit some real intelligence in domains where they have lots of training data, but I question whether this is the approach that will get us to systems making novel contributions, which is what the assumption of self-improving models relies on.
Doing a quick AI search, I'm reminded that practical and widespread use of quantum computing is still 5-20 years away, and that event horizon has remained consistently in the "nearly there" category for a remarkably long time.
AI also says we are years, possibly decades, away from fully autonomous (Level 5) self-driving cars being a common reality. Both of these breakthroughs were widely expected, by the market and researchers alike, to have been achieved by this date. Anytime the word "God" is bandied about, we must make certain hubris is nearby.
"The data issue" will remain even if AI can comb through it with great speed because, as my father used to remind me, "You can't make chicken salad from chicken shit," and I would add here, “even if you do it very quickly from all the world’s collected chicken shit.”
I do, however, understand that there are many programming areas, like genome research, chemistry, medicine, weather prediction, and traffic management, where AI will shine.
Tomas, I would like to read an article on the "intense preparations" one should be making for the arrival of AGI (especially as a parent of three high schoolers).
I like your texts on broader stuff a lot, but I think that, on AGI, you're "naive" in thinking that it will necessarily be good for us. Lots of unemployed people to boost the ego of Sam Altman is not a good deal, tbh.
Exactly. The AI race isn’t about intelligence. It’s about faith disguised as funding.
Hyperscalers don’t believe they’re coding God, they’re coding certainty.
And that’s what every empire has always chased.
Imagine not only *trying* to create something smarter than humans, but deluding yourself that this is a good thing.
Ask yourself what becomes of humans when you create something better than humans. Hint: think of how many ants die underfoot per year.
Horrible. Anyone involved should be imprisoned and their careers destroyed. But it's too late for that. All we can do is cower and hope for the best.
Don’t know that we will ever make God, if there is such a monotheistic deity, but yes, AI/AGI/ASI, call it what you will, will be a great danger to humanity at large. Why? Because to err is human, but to really screw things up big time requires a computer, and the bigger the computer, the bigger the f😱ck-up.
But seriously, AI/AGI/ASI, whatever you call it, is only a product of noughts and ones. It has the same kind of power as fiat currency or electricity; outside of that it has no self-creating physical attributes. It’s just a massive, energy-consuming, pattern-recognition data bank exchanging noughts and ones within itself.
Coming from an engineering background: when we had a new machine or process to build, it started off with some new idea, or with existing designs copied and developed. All production engineers will know this: no sooner is the design in development than amendments and improvements start, some based on engineering input, others to meet clients’ wishes. But you have to draw a line and say this is what we are going to produce, be it a prototype or the end product or process, by this date, and treat the rest as upgrades as required.
That’s a very simplistic description of the process. The speed at which products or processes are manufactured and constructed depends on the speed at which humans or machines can physically operate in the real world, not inside some powerful thinking machine. Even if thoughts could be turned out a million times faster than a product or process designer could manage, they still come up against physical constraints, along with scale and complexity.
So only in sci-fi does it become scary, a creative hallucination, and boy have we got some extremely rich sci-fi nerds trying to turn their childhood dreams into reality. So let’s just get real: it’s a BUBBLE waiting for reality to pop it. And of course there’s an elephant in the room called FREE FINITE Flammable Fossils, which nearly everyone ignores, as no one wants to contemplate a time when they can no longer ignore it, it being FINITE🤔
E&OE
AlphaChip mentioned here sounds like AI accelerating AI researchers. https://blog.google/products/google-cloud/ironwood-google-tpu-things-to-know/
I don't see AGI until one massive problem is solved: current models can NOT learn on the fly; they need to be trained first and only know the data from their training.
When you have a conversation with one, a new conversation will know nothing about the first unless you literally send it the messages from the earlier one (see the sketch below).
I feel like this is not discussed enough.
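To make that concrete, here's a minimal sketch of what statelessness looks like with a typical chat API (using the OpenAI Python client; the model name and wording are just placeholders): each request only "knows" the messages you send with it, and nothing in the model's weights changes between calls.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
history = [{"role": "user", "content": "My name is Ana and I work on fusion reactors."}]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# A *fresh* request without the history: the model has no idea who Ana is.
fresh = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is my name and what do I work on?"}],
)

# The only way to get "memory" is to resend the transcript yourself:
history.append({"role": "user", "content": "What is my name and what do I work on?"})
followup = client.chat.completions.create(model="gpt-4o-mini", messages=history)
# In both cases the weights stay frozen -- no learning on the fly, just replayed context.
```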
Really enjoyed this, thank you, but I have quite a significant caveat (which you may have covered in something I've missed, sorry). The language is of 'intelligence', but what is really meant is 'cognitive reasoning capacity', i.e. the ability to follow logical steps. I have no doubt that AI will surpass humans on that, and soon - but that is only one aspect of human intelligence, not the sum. It's captured in the distinction between someone being intelligent and merely being clever (but stupid) - AI will be immensely clever, but it will still follow the paths laid down by intelligent human beings. In just the same way that a forklift truck is stronger than any human but still requires a human driver, so too AI will be stronger in cognitive reasoning than any human - but it will always (on this pathway) be dependent upon a human intelligence to guide it. Just my two pennies.