Discussion about this post

EB

Your question reminds me of an interesting passage in The Maltese Falcon. Sam Spade tells Brigid that he was once hired by a woman to find her husband, who had disappeared. Sam finds him and asks why he left. The man tells Sam that one day, while walking to work, a girder fell from a crane and hit the sidewalk right in front of him. Other than a scratch from a chip of concrete that hit his cheek, he was unscathed. But the shock of the close call made him realize that he could die at any moment, and that if that was true, he didn't want to spend his last days going to a boring job and coming home every evening to have the same conversation and do the same chores. So he tossed all that, started wandering the world, worked on a freight ship, etc. When Sam finds him, however, the man has a new family, lives in a house not far from his old one, and goes to a boring job every day. Brigid is confused and asks why he went back to the same routines. Sam says, "When he thought that he could die at any moment, he changed his entire life. But when he realized over time that he wasn't about to die, he went back to what he was familiar with."

And that's my long-winded answer to your question. Until I have solid evidence that AI, an asteroid, or Trump's election is going to end my life, I'll keep doing the same. Going by how many stupid mistakes ChatGPT makes, I'm not worried about it destroying humanity.

Ammon Haggerty

I try to keep an open mind about the progress and value of LLMs and GenAI, so I'm actively reading, following, and speaking with "experts" across the AI ideological spectrum. Tomas, I count you as one of those experts, but lately you've been drifting toward the hype/doom end of the spectrum (aka Leopold Aschenbrenner-ism), and I'm not sure that's your intention. I recommend reading some Gary Marcus (https://garymarcus.substack.com/) as a counterbalance.

You did such a wonderful job taking an unknown existential threat (COVID) and grounding it in science and actionable next steps. I'd love to see the same Tomas apply that rational, grounded approach to AI hype, fears, and uncertainties; the situations are highly analogous.

Here's my view (take it or leave it). LLMs started as research without a lot of hype. A big breakthrough (GPT-3.5) unlocked a step change in generative AI capabilities, which brought big money and big valuations. The elephant in the room was that these models were trained on global IP, and addressing data ownership would be an existential threat to the companies raising billions. So they went on the defensive, hyping AGI as an existential threat to humanity and the almost certain exponential growth of these models (https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362). This is a red herring to deflect regulatory and ethics probes into the AI companies' data practices.

Now the models have largely stalled out because you need exponential data to get linear model growth (https://forum.effectivealtruism.org/posts/wY6aBzcXtSprmDhFN/exponential-ai-takeoff-is-a-myth). The only way to push the models forward is to access personal data, which is now the focus of all the foundation model companies. This has the VCs and the big tech firms that have poured billions into AI freaked out, trying to extend the promise of AGI as long as possible so the bubble won't pop.
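
To make the shape of that claim concrete: a minimal sketch, assuming a Chinchilla-style power law where loss scales as L(D) = A * D^(-alpha). Under that assumption, each fixed step of improvement requires multiplying the dataset by a constant factor, i.e., exponentially more data for linear gains. The constants and function below are illustrative only, not taken from the linked post.

```python
# Hedged illustration: under an ASSUMED power-law scaling L(D) = A * D**(-alpha),
# each fixed drop in loss demands multiplying the data by a constant factor.
# A and alpha are hypothetical values, not from the linked forum post.

A = 10.0      # hypothetical scale constant
alpha = 0.1   # hypothetical data-scaling exponent (small, as in published scaling laws)

def data_needed(target_loss: float) -> float:
    """Invert L(D) = A * D**(-alpha) to get the dataset size D for a target loss."""
    return (A / target_loss) ** (1.0 / alpha)

# Linear improvements in loss...
for loss in (2.0, 1.8, 1.6, 1.4):
    print(f"loss={loss:.1f} -> data ~ {data_needed(loss):.2e}")
# ...require multiplicatively (i.e., exponentially) more data at every step.
```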

My hope is that we can change the narrative to this: we've created an incredible new set of smart tools, and there's a ton of work to be done applying this capability to the myriad problems it fits. That work needs engineers, designers, researchers, and storytellers. In addition, we should address the elephant in the room and stop allowing AI companies to steal IP without attribution and compensation. They say it isn't possible, but it is (https://www.sureel.ai/). We need to reframe AI as a tool for empowerment rather than replacement.

Most of the lost jobs of the past couple of years have not been due to AI replacement; they're the result of fear and uncertainty (driven by speculative hype like this), and without clear forecasting, the only option is staff reduction. Let's promote a growth narrative and a vision for healthy adoption of AI capabilities.
