Beware the Squirrel
Why worries about AI killing us all are distracting from the real problems
When I tell people I’m writing a book about AI (forgive the early plug) they usually reply with something along the lines of, “ooh, good timing!” The implication being that AI is currently the hot topic and thus the audience for said book should be bigger than it was, say, a year or even six months ago, i.e. Before ChatGPT. Being of polite disposition I mostly smile and nod at this, while inside I’m thinking, “Really? You think AI is hot now?”
Because, try becoming Global Head of Policy for DeepMind (now Google DeepMind, which I left a couple of years ago) in April 2016, a month after their AlphaGo program triumphed over the Go world champion, Lee Sedol, across five tense matches in Seoul that were streamed to a bigger audience than the Super Bowl. Or in 2017, when Xi Jinping announced his intention that China should be the leading AI superpower by 2030 and Putin glowered that “whoever becomes leader in [AI] will be ruler of the world.” The multitude of think pieces, reports, government roundtables, and books about the dangers of an optimising paperclip machine during the first big AI hype cycle of 2015-7 was dizzying.
No, AI has been exhaustingly ‘hot’ for quite some time. Admittedly the hype had somewhat quietened, as popular fashions turned instead to, variously, cryptocurrency, NFTs, and the metaverse. But since the end of the so-called ‘AI winter(s)’ and the start of the latest AI explosion in the 2010s, driven in part by growing access to data and computing power, there has been a steady stream of funding and an insatiable appetite for anything called ‘AI’.
To be clear, this reality check is not intended as some kind of admonition. To start with, if you escaped the 2015-7 AI hype cycle, then lucky you. You did not have to read endless articles illustrated by a robot hand holding a human hand, or deal with the Effective Altruists (notorious proponents of AI ‘long-termism’ and dominant in AI safety circles) before they were chastened by Sam Bankman-Fried. You skipped Microsoft’s racist chatbot ‘Tay’ and jumped straight to its slightly-creepy chatbot ‘Sydney’. No, far from feeling rebuked you should feel relieved.
AI is, however, reaching a new peak of mania in audiences who have not found themselves so dazzled before. The reason, of course, is ChatGPT: the most viral of all viral AI breakthroughs and the first time that most people could actually get their hands on something tangible to make sense of what ‘AI’ might really mean for them. Because the spam filters in our inboxes, or the algorithms that determine what we watch and listen to, whose tweets we view or what ads we are shown are not really marketed as AI anymore. They are too ubiquitous. But give someone a large language model to play with, and the chance to write a haiku about The Antiques Roadshow, and – boom. Suddenly we’re all having to talk again about whether AI will cause human extinction.
It’s admittedly a big jump from ChatGPT to the end of the world, but those of us in AI for a long time have always had to contend with the Skynet references. I thought the worst had passed, the fearmongers largely confined to the fringe. Now, however, because AI has a human-ish voice, if not face (as Suresh Venkatasubramanian and Alondra Nelson have noted, making ChatGPT a human-like interface that feels like texting was a deliberate design choice), it seems that this notion of an out-of-control system that will somehow kill us all has gained new traction, and in doing so has scared a whole new bunch of people.
It would be much easier for me to write a post that scares the hell out of you, too. I’ve been watching people do this since at least 2016, and it’s certainly an easier and more dramatic story to write. But I just don’t think that it’s true.