
Most of the AI Safety discourse that's actually influencing federal and legislative agendas right now is indeed rooted in these sorts of concerns: weaponry, surveillance, propaganda, economics. As Tomas mentions in the article, Helen Toner has been working to decelerate the US/China arms race as it relates to AI. While Yudkowsky is talking about AGI, present-day and frontier AI are being appropriately scrutinized by the likes of Matthew Butterick, Timnit Gebru, Alex Hanna, Margaret Mitchell, Emily M. Bender, and others (including all the folks at https://www.stopkillerrobots.org/).

However, your concern that it would somehow give terrorist groups an advantage over nuclear powers is misguided. The nuclear powers can afford more & better AI than small states. If anything, AI tips the scales further in whatever direction they're already tipped.


Thanks for the link, Max. I want to be wrong, and I sure hope AI drone terrorism does not become a reality, whether through ideological groups, rogue states, or proxies covertly armed by major powers in collusion with them. This is not a question of keeping the balance of power, but of one side gaining enough power to inflict severe destruction on others, even if the major powers hold more than enough for total mutual assured destruction. I don't know enough about all this, and am following developments on how Ukraine, with Western support, is managing to even things out with Russia through drone warfare, even as Russia receives such support from Iran and China.
