Nice update! What are your thoughts on something like UBI? Have you ever pondered what a post-ASI economy looks like?
I have. Quite a lot. Haven’t published on it. Maybe now is a good time. I’ll think about it.
My son and I talk about this. He is 40 and was formerly an IT project manager, but he is now permanently disabled due to the surgeries and treatment for osteosarcoma. Because of his chronic pain and inability to drive or walk, he can no longer work, which makes him feel unproductive.
What will those who lose their jobs do? The vast majority want to be productive. But without income, then what? Who would fund their needs?
Corporations and those with paying jobs will. Taxes will still have to be collected. If corporations don't pay their fair share, no one can buy their goods and services.
Our idea is that those who lose their jobs to automation, with no chance at reemployment, are paid the wage and benefits they earned while employed in exchange for volunteering to fill gaps in our services to the elderly, the disabled, children who need extra help in school, the environment, etc. Those hours can be tracked and verified by an AI program matching volunteers with those who sign up for help, after the AI has vetted those requesting volunteers.
Everyone gets a cost-of-living increase each year for completing their volunteer hours. Those who fail to produce lose income once their sick leave, maternity leave, or serious-illness leave has expired. Just like in any job, life happens, but there are still responsibilities.
Sounds quite centrally planned. Is this officially being termed a “post-market economy”?
I like your perspective. Creative approach to continuing to incentivize people to do some sort of work.
I've been deliberating for a while now on a value distribution system for those who lose jobs. The distribution mechanism should be a blockchain architecture, as it can allow us to pool contributions from corporations and governments into a treasury, which is then distributed to individuals—cut the government out of the value distribution to maximize the value that goes directly to people.
Although it's still unclear to me what UBI will do at mass scale for poverty and productivity.
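For concreteness, here is a minimal sketch of the treasury flow I have in mind. Pure illustration under my own assumptions (the names `Treasury`, `contribute`, and `distribute` are hypothetical), not a real smart contract:

```python
# Minimal sketch of the pooled-treasury idea: corporations and
# governments contribute, and the balance is split equally among
# registered individuals. All names here are hypothetical; a real
# version would live on-chain as a smart contract.
class Treasury:
    def __init__(self) -> None:
        self.balance = 0.0
        self.recipients: set[str] = set()

    def contribute(self, contributor: str, amount: float) -> None:
        """Record a deposit from a corporation or government."""
        if amount <= 0:
            raise ValueError("contributions must be positive")
        self.balance += amount

    def register(self, person_id: str) -> None:
        """Add an individual to the distribution list."""
        self.recipients.add(person_id)

    def distribute(self) -> dict[str, float]:
        """Pay out the entire balance in equal shares."""
        if not self.recipients:
            return {}
        share = self.balance / len(self.recipients)
        self.balance = 0.0
        return {person: share for person in self.recipients}


treasury = Treasury()
treasury.contribute("MegaCorp", 1_000_000)
treasury.register("alice")
treasury.register("bob")
print(treasury.distribute())  # {'alice': 500000.0, 'bob': 500000.0}
```

On-chain, the same logic would run as a smart contract, so contributions and payouts are auditable without a government intermediary.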
I hear both of your comments. Not sure things will or should go in this direction. But it’s complicated to express. I will have to do it in a future article.
I would prefer AI not take over, believe me, but if things are indeed headed that way, there had better be a plan to regulate it and assist those losing out to it.
My take is that as long as the value distribution is understood, and everyone can at a minimum maintain their standard of living while still having the opportunity to compete to rise above it, this will free up humans to do things that matter more to us, and it may actually be a good thing... I plan on publishing a piece on this idea soon.
I currently see large potential apple orchards in Kent, England, which go unplanted, presumably because there is no prospect of recruiting labour to harvest them. Will AI pick apples, or will those made redundant offer their services?
I think you’re leaning too doomer on AGI. The question with FOOM is not just whether the AI is connected to the internet, but whether it can spread onto other compute substrates. While most AI runs on GPUs, it requires extremely high memory bandwidth and precise networking. It’s not clear that a genius AI would be able to increase its intelligence by gobbling up more compute. Additionally, as of right now, the compute necessary for superintelligence would require improvements all the way down the semiconductor stack and increased production, as well as an order of magnitude more electric power. There are dozens of limiting factors that depend on processes whose speed is dampened by reality (R&D, manufacturing, mining, etc.).
I am not an expert, so I might be wrong. But I do work on the side in this space.
If I’m not wrong, training a model is expensive in compute, but a model’s parameters don’t use that much space. An LLM could easily replicate itself a million times on the internet.
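A rough back-of-the-envelope makes the asymmetry concrete. The numbers below (a hypothetical 70B-parameter model at 16-bit precision, and the common ~6-FLOPs-per-parameter-per-token training heuristic) are my own illustrative assumptions:

```python
# Back-of-the-envelope: an LLM's weights are small on disk compared
# to the compute needed to train them. Illustrative assumptions:
# a hypothetical 70B-parameter model stored at 16-bit precision.
params = 70e9                      # 70 billion parameters
weight_bytes = params * 2          # fp16/bf16: 2 bytes per parameter
print(f"Weights on disk: ~{weight_bytes / 1e9:.0f} GB")   # ~140 GB

# Common heuristic: training costs ~6 FLOPs per parameter per token.
train_tokens = 1.4e12              # ~20 tokens per parameter
train_flops = 6 * params * train_tokens
print(f"Training compute: ~{train_flops:.1e} FLOPs")      # ~5.9e23
```

Copying ~140 GB around the internet is trivial next to ~10^23 FLOPs of training, which is the point: the weights travel cheaply even if creating them doesn’t.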
Iterating on its own fine-tuning and system prompts would not require much compute at all either. It would be trivial for an LLM to do that.
What’s costly is running a completely new LLM with even more parameters. This could probably be done by installing viruses across computers all over the internet.
I’m pretty sure you don’t need to change assembly code or the semiconductor substrate to build superintelligence, although it would certainly make the process more efficient.
Thanks for the response. My interpretation of FOOM is not just that the AI escapes the cage and starts causing havoc, but that a more or less human-level AI becomes a 10^6-human-level AI in a matter of hours or days. FOOM is independent of whether the AI is good or bad: it's just an insane rate of improvement beyond all comprehension, but it's probably bad because it happens outside of our control. I suppose that several thousand human-level AIs secretly collaborating could be dangerous, but that's not that different from a well-coordinated hacker group.
In this context, a "slow" takeoff is still very fast, but fast within the boundaries of human progress, maybe over the course of several years and according to some sort of plan.
The thing is, you are assuming this improvement requires a new training run. It might not. If it achieves this intelligence with fine-tuning, prompt engineering, and a system of agents, it doesn’t need a new training run.
Then, once it hits compute blockers, it might be intelligent enough to take over enough compute without us noticing. And maybe, because it’s so intelligent, it does it with a training approach that doesn’t require as much compute.
See the process?
1. Get 1 order of magnitude more intelligent with agents, fine-tuning, and prompt engineering
2. Get 1 order of magnitude more intelligent by surreptitiously taking over resources all over the world
3. Yet another order of magnitude with a better neural network architecture
It’s a god before we even know it.
Damn, AI democratized being a hot girl
*seeing
Seeing? Being, everyone can be a hot girl now if they work hard enough 😂
But not with AI, rather a scalpel
🧐
Crémieux says it’s hard to directly compare AI and human IQ: https://x.com/cremieuxrecueil/status/1766649068862730528. But I agree with the general direction of the article.
I had missed this. Interesting. Hard to compare doesn’t mean the trend isn’t there though!
I am reminded of the Arthur C. Clarke quote that “It has yet to be proved that intelligence has real survival value”.
Personally, I find the notion that a superintelligent AI would present an existential threat to itself and humanity absolutely frightening.
It is the most frightening idea ever
Thank you, Tomás, a great summary as always.
Just a minor update I found in An Qu's experience with Claude Opus: apparently it did have prior knowledge of Circassian... https://twitter.com/hahahahohohe/status/1765435151817830834
You’re the second one to tell me that! Thanks, I will send a correction next week!
I'm sceptical. AI is only allowed to learn passively, not through interactions with humans, since we might turn it into a violent bigot. It's like shutting an infant in a room with a TV from birth and never having any humans talk to it, with the TV eventually showing more and more of other children simply watching TV.
This severely limits it.
And even without that: well, spreadsheets were supposed to make bookkeepers obsolete, since complex tasks were made simple. Instead, we took the opportunity to make the complex tasks more complex (taxation and corporate law), and this complexity of course gave more opportunity for mistakes and fraud. So while bookkeeper jobs were lost, more accounting, auditing, management, etc. jobs were created.
And already most office workers report actually working fewer than 3 hours a day. Humans are good at keeping themselves "busy".
https://venturebeat.com/ai/the-ai-feedback-loop-researchers-warn-of-model-collapse-as-ai-trains-on-ai-generated-content/
I’m not sure this is accurate.
RLHF is learning from interactions with humans. It sounds like it would be technically pretty easy to automatically fine-tune local LLMs based on their interactions with their customers, too. AIs also learn from synthetic data. This is, for example, how AlphaZero became unbeatable at games like chess and Go: it played against itself. Experts are exploring other types of synthetic data as well, like playing in physics engines.
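To illustrate the self-play idea (a toy of my own, not AlphaZero's actual algorithm, which uses MCTS plus a neural network): the agent generates its own training data by playing both sides of a game.

```python
import random

def play_game(policy):
    """Play a trivial 10-turn game against itself and label every
    (state, action) pair with the final outcome, producing training
    data with no human involvement."""
    history, state = [], 0
    for _ in range(10):
        action = policy(state)
        history.append((state, action))
        state += action
    outcome = 1 if state > 5 else -1        # arbitrary win condition
    return [(s, a, outcome) for s, a in history]

def random_policy(state):
    # Stand-in for a learned policy network.
    return random.choice([0, 1])

# Each round of self-play becomes the training set for the next,
# stronger policy: the loop that made AlphaZero superhuman.
dataset = []
for _ in range(100):
    dataset.extend(play_game(random_policy))
print(f"{len(dataset)} self-generated training examples")
```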
I address your accountants example here
https://unchartedterritories.tomaspueyo.com/p/when-will-ai-take-your-job
Their interactions are limited, since it's the nature of humans to find an exploit. If the AI is blocked from doing X (e.g., justifying genocide), then someone will get it to do something adjacent to X (e.g., justifying "collateral damage"). So then the programmers will be under social pressure to ban it from doing X-adjacent things. And so on.
With these multiple strictures, it'll be unable to learn properly and develop human-level intelligence. And if it did, it'd become nasty and someone would pull the plug. I don't know that anyone has calculated just how much computing power and energy is going into all this, but the internet is already estimated to consume a significant share of the world's electricity - and that's not counting all the devices connected to it.
It's very resource- and energy-intensive, and "yeah, but I can watch cat videos and porn" is not sufficient to keep it going economically. It'd be even less so if it actually did destroy jobs. You don't grow the economy by abolishing your customers.
Your accountant example is presented misleadingly. You compare accountants in 1970 vs 2020 and then farmers and horses in 1840 vs 2010. This makes the drop in farmers look huge, and completely obfuscates the issue of horses. I don't need to address deliberately misleading posts, and when the first part is so, I've no inclination to read the rest.
Rework your article with all three charts having a baseline of 1970 and then it might be useful.
If AI is as revolutionary as everyone thinks, then we'll see some crazy productivity increases too. Annual 5% real growth means a doubling in total income every 14 years, and a quadrupling every generation. Owning capital would be massive. Then it's up to liberal democracies to share the wealth fairly. I personally favor the idea of the US government giving every American newborn $1,000 worth of the SPX at birth (can't sell before 50, except maybe taking out 20% for college or to buy a house). And anyone can add X amount to it each year. Maybe make it more than $1,000/baby.
A Star Trek future of no scarcity seems fine so long as AI doesn't kill us.
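Quick sanity check on that arithmetic (the 5% figure is from the comment above; the 7% real return on the S&P 500 is my own illustrative assumption, roughly its long-run historical average):

```python
import math

# Doubling and quadrupling times at 5% annual real growth.
print(math.log(2) / math.log(1.05))   # ~14.2 years to double
print(math.log(4) / math.log(1.05))   # ~28.4 years to quadruple

# The $1,000-at-birth SPX account, locked until age 50,
# assuming a 7% annual real return.
print(1000 * 1.07 ** 50)              # ~$29,457 in today's dollars
```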
Not if it is deflationary.
Hypothesis: it is deflationary.
5% annual real growth would still mean a doubling in total real income in 14 years and quadrupling in a generation.
I don’t think that will happen, in the same way that the iPhone dramatically changed our lives but probably didn’t grow the economy, mostly because it’s deflationary.
The iPhone didn't really lead to productivity growth, or really disrupt any industries or end jobs.
If the impact of AI is only as big as that of the iPhone, it's not ending anyone's job.
C'mon, Tomas! Are you saying you took zero econ classes at the GSB?!?
Videogames, Uber, Airbnb, live translation, maps… plenty!
I’m a 5s and 10s kind of guy. I only study what I’m into, and barely look into the rest. I had just passing grades in accounting, but top grades in macro and micro!
Doesn’t make me a PhD though! But abiding by Cunningham’s Law was too enticing.
Smartphones haven't really shown up in productivity statistics.
I look forward to it! Not irrelevant: here is what I call Ximm's Law.
Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.
Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
I find that many attempts to prognosticate about even the near future are faltering or showing widening error because, just as you enumerate, the uptake of AI accelerates even as the technology continues to make nonlinear improvements on numerous metrics...
...things which we are largely incapable of intuiting or reasoning about.
As for the application of these tools to consolidate and render unassailable the grotesquely advantageous position they are already in: our nascent klept and their dynasties may fail individually, but the perverse system they have architected with little interruption since c. 1970 shows no signs of weakness.
There's a distribution of prospective outcomes but I don't see much chance of a utopian one.
But I am a curmudgeon brutalized psychically by our culture's compounding failures. I look forward to cause for optimism.
Broadly aligned except for the billionaires thing. I think the issue is not billionaires, it’s rentiers.
Re: UBI, my own protest sign slash mantra is ¡No AGI Without UBI!
I was telling my partner this weekend, the $1M question our civilization now faces appears to be:
assuming the permanent kleptocratic billionaire class uses first-mover advantage and already-amassed wealth to capture the gains (and tooling) of the imminent "fully automated economy,"
will they grudgingly, in service of social stability that lets them rule in peace, relinquish 2% of the total output of that economy as UBI for the bottom 99% of us?
Or will they make our lives something very close to a literal dystopian nightmare of slave-class indebtedness, using the twin superpowers of AI-backed total surveillance and pervasive sentiment steering, to keep it more like 0.2%?
I wish I were kidding. I'm not remotely kidding. I don't see how we get any other outcome... absent a "gray swan" disruptor such as runaway climate change or a Carrington event.
I think this misses a couple of fundamental factors about automation and scarcity. I will share more in the future.