Without commenting on the wider discussion, I wanted to point out that at least certain people/teams within OpenAI think there's a good chance we'll get to AGI by the end of this decade or shortly after: https://openai.com/blog/introducing-superalignment
Thank you, that's fair!
Bostrom and Tegmark have been great at introducing me to vivid illustrations of these problems. Yudkowsky is the one who seems to have thought of everything.
Agreed that the consciousness & free will debate doesn't lead anywhere for me. I said what I had to say on the topic, and that's it.
https://unchartedterritories.tomaspueyo.com/p/does-free-will-exist
Agreed AGI is not imminent, but it could be a matter of a handful of years, so attention is warranted, albeit not the crazy urgency we had with COVID.
Agreed that DeepMind needs a lot of scrutiny. Funnily enough, I believe Yudkowsky is more concerned about Demis Hassabis than about Altman.
Mrs Davis looks good! I don't have Peacock tho...
The probability of aliens is much lower than that of AI!
The biggest issue with misalignment is instrumental convergence: no matter what an AI desires, odds are we will be in its way, because we don't want to lose control and it would know it.
For me, a 1% risk of total human extinction is unacceptable.
Thank you Tomas and Jerry for this very interesting exchange.
Tomas, you got me really scared, as I remember your COVID letter was a very useful wake-up call for me at the time.
I don't have the knowledge to understand all the specifics, but your arguments are compelling, and it was quite grounding to read Jerry's reply after 24 hours of mild, queasy worry.
Obviously, the drama at OpenAI was for me the best proof that what you're saying is plausible. It seems the most convincing explanation for such extreme behavior from the OpenAI board members. We're talking about very smart people (or so we hope), so we'd better hope they had good reasons for acting like this.
Several things I'd like to understand about the board's motives:
- If China is as engaged in this AI race as the Silicon Valley guys, what good would it do humanity to damage one of the best contenders? Was the board's motive just to put the subject of alignment on the table and in the public eye?! (They did, big time!)
- Why wouldn't they have anticipated the backlash from key stakeholders (employees, MS, Ilya…)? Or, on the contrary, did they anticipate it, and was it just a stunt rather than simply resigning (which is the current outcome)?
Other food for thought from your article: is it only our brains that set us apart? Human intelligence is also emotional and intuitive. That can't easily be automated… or can it?
For my own peace of mind, I choose to believe Jerry is right and the OpenAI board was just clumsy… but I will nonetheless be reading your future newsletters with heightened interest…
Hi Marie! It's a great pleasure to see you around here again!
CHINA: Helen Toner, one of the ousted board members at OpenAI, actually has some great points about this. She is an expert in Chinese AI, and says (1) their data is a shitshow, and (2) the fact that they need to censor it so much slows it down and makes it less useful. She says China is a fast follower, not a leader. That's one of the reasons to keep advancing fast in the West.
ANTICIPATION: This kind of backlash doesn't happen! When has it ever happened? Jobs was ousted from Apple and nothing happened. CEOs are ousted all the time. They could have figured it out if they had played mental chess on the ramifications, but that's probably not what they were paying attention to. They had enough to think about when considering AI safety.
HUMAN BRAINS: I talk about it in today's article. The short version is: a human brain is a physical thing that can be replicated. As long as it follows the laws of physics, it can be replicated. Emotions and intuitions are simply decisions that don't reach consciousness ("System 1 vs System 2") to save processing power. Totally modelable.
Looking forward to your opinions!