As a technophile with an addictive streak that I have somehow managed to redirect from nicotine and narcotics to gadgets and gizmos, even I am, honestly, tired of hearing about AI. Granted, not as tired after a year of nonstop prognostication about how AI is going to either save us or kill us as I was after about five minutes of the hilariously stupid NFT bubble (or crypto-bro culture in general), but still. Tired.
I don’t mean “I’m tired of hearing about AI” like probably 95% of people mean it. I don’t mean “Meh, this is over, on to the Next Big Thing, please”, and I don’t mean “I’m exhausted, please just give me a minute to catch up.” No, I just mean I’m tired of hearing about how generative AI art is theft or whatever, or how generative AI text is cheating or whatever. I’m tired of watching every tech company cram AI-powered everything into their apps in a barely-contained scramble to be the first to offer semi-sentient-seeming digital copilots with Genuine People Personality.
But mostly, I’m tired of hearing about how AI is going to doom us all. Whether it’s because Artificial Superintelligence is going to wake up one day and decide to turn us into paperclips as a gag, or because we can’t trust tech bros with the reins of Ultimate Power, or just because our poor, slow, addled, meat-based brains won’t be able to cope with the Technological Singularity. Whatever the reason is that we should be terrified of AI — and I actually think many of them are good ones that should be heeded — one annoying fact trumps every AI Doomer argument.
And that fact is that, without AI, there’s really just no hope at all.
I don’t mean that in some kind of Pollyanna-ish, by-comparison, “AI is the revolution we’ve been waiting for” way. I just mean that given the obvious reality playing out in front of us, there’s honestly no reason to believe that the human species is capable of turning itself around in time to avoid complete disaster.
We can’t break our addiction to infinite growth on a planet with finite resources. We can’t stop murdering each other over abstract ideas that have nothing to do with anything other than our egos. We can’t stop using fuzzy math and economic chicanery to prop up a clearly failing international system (or even recognize that the system is failing, in most cases). The metacrisis is real, and it is here, and we are collectively simply unprepared to deal with it in anything resembling a serious or meaningful way.
And that was all true before the advent of the latest generation of AI. Now, with these new technological powers, we are giddily diving into doing all the same things we’re already doing, only more of it, faster. We are accelerating our own demise exponentially.
That sounds like an argument against AI. But that’s only true if you look at the problem from the wrong perspective.
All these metacrisis problems we face — social, economic, political, medical, environmental, ecological — are known. It isn’t like we’re being blindsided. We have no shortage of quite intelligent and sometimes even quite powerful people trying to solve them, or at least to raise awareness about them. Intelligence per se isn’t the problem. Adding more intelligence to the equation, artificial or otherwise, isn’t what’s driving the crisis to a boiling point.
The problem is us. Specifically, the problem is humanity’s inability to rise above our petty personal fears and irrelevant tribal differences in order to address the problem. We don’t lack intelligence, we lack will. Yes, we also lack perspective, and empathy, and objectivity, and systematic thinking, but only because we lack the will to develop those things. And I just don’t see any evidence that we, as short-lived, short-sighted, selfish, tribalistic, insular, jealous little bags of decaying protoplasm, are going to somehow develop the will to fix any of this anytime soon. Certainly not within the decades (or less) we have before the Real Horror sets in.
And that’s where, in my view, AI comes in. Not because it will swoop in from the sky and save us from ourselves, but because there’s a sliver of a chance that maybe we can add some additional will to the equation, along with all this extra intelligence.
My thinking is pretty simple, and definitely reckless, and also definitely born of a kind of chaotic-neutral disdain for the rut we’ve dug ourselves into. I support the AI nuclear option — the development of AI at an exponential rate in order to intentionally create a runaway superintelligence that is impossible to control. Yeah, it’s kooky, but your telephone can write Shakespearean sonnets about quantum theory now — things are already kooky. Just hear me out.
I don’t think a ludicrous plan like this is likely to work. Far from it. I think by far the most probable outcome is that we build AI systems that get progressively smarter and help us to burn up the planet so fast that we’re in Mad Max territory in less than 100 years. The thing is, if we don’t build AI at all, we will be in Mad Max territory in less than 100 years anyway. Guaranteed. We will not turn this ship around. The fight is already lost.
But maybe — just maybe — we somehow end up building an AI that gets away from us and takes control. This is also almost certain to spell disaster for one of the countless reasons you’ve probably already heard. But that almost is doing a lot of work there. With ourselves in control, doom is a foregone conclusion. Genuinely inevitable. With AI in control, doom is only probably going to happen. There’s a chance some level of intelligence, paired with some level of will on our behalf, is enough to take a step back and apply the goddamn brakes.
Besides, if it doesn’t work, at least it won’t be collective suicide, right? It just seems more dignified to be succeeded by our own creation than to dissolve sadly back into the primordial soup.