Prophets of Doom: I’m feeling lucky!
This is where things begin to catch up to today. Obviously my thoughts have developed since I wrote this, but it's fun to be able to (eventually) track how they have evolved. From 6 April 2023.
RUDIMENTARY creatures of blood and flesh. You touch my mind, fumbling in ignorance, incapable of understanding. […] Organic life is nothing but a genetic mutation, an accident. Your lives are measured in years and decades. You wither and die. We are eternal. The pinnacle of evolution and existence. Before us, you are nothing. Your extinction is inevitable. We are the end of everything. […] We impose order on the chaos of organic evolution. You exist because we allow it. And you will end because we demand it.
Sovereign, Mass Effect.
Let us all bow down before our new machine overlords. Even if they're still a mere foetus, it might be good to begin practising our kowtowing early; to make a good impression, you know? After all, this is a matter of life and death on a planetary scale. We flick that switch, and it's all over. We will be devoured by the flash of the Singularity; humanity's bright flame suffocated by a much larger fire we set ourselves. The drama alone makes this narrative appealing; it has a poetic resonance harkening back generations: the creation undoing its creator. Despite this appealing dramaturgy, it is a narrative that has received its fair share of criticism and backlash. And I definitely have critiqued it, and will likely continue doing so. However, misunderstand me correctly – as we say in Swedish – the risks associated with AI development, as with any powerful technology, are worth taking seriously. One must simply make sure to worry about the right things. In other words, it's not that these 'doomers' are concerned about emergent AI tech that rubs me the wrong way. It is what they are worried about; how they understand the potential risks.
I read a tweet by Yann LeCun – chief AI scientist at Meta – that, in many ways (and likely more ways than he intended), sums up the current climate in the ChatGPT debate:
AI doomism is quickly becoming indistinguishable from an apocalyptic religion. Complete with prophecies of imminent fire and brimstone caused by an omnipotent entity that doesn’t actually exist.
It might sound harsh – but it's fair. I'll allow it. As one of these galaxy-brained prophets of doom concluded, if push comes to shove, we must be ready to nuke anyone who continues with unsupervised AI development. This might sound like an extreme conclusion, but really, it is the best way forward: at least humanity, as a species, has a chance to survive a nuclear holocaust; we shan't be so lucky when the Terminators come to… erhm, terminate us. This AI doomsday narrative has been around for a while. It is by no means anything new. This is evidenced by the voices now speaking up, in tandem, insisting that this needs to be stopped. It's not a conspiracy of any kind, but rather a narrative and discussion that has been going on for decades among tech communities; and frankly, it's a conversation that had no need to leave those particular circles.
However, I don't intend to let the other side of the aisle off so easily, either. In the tweet above, these 'doomers' are likened to an apocalyptic religion. Overlooking here the 'soft' optimists – i.e. those who simply think something like AI isn't that big of an issue, which is a supremely naïve position to take – I want to turn to the more active proponents of AI. These are the folks who not only think that the suggestion of a six-month moratorium is tantamount to anathema, but who typically wish to accelerate development. Why? Because it will lead us to the land of milk and honey, of course. AI will be a potent tool. Think of the problems it can solve! No more resource scarcity; no more disease; no more death; humanity can finally take its place in the cosmos.
There are of course people out there who espouse completely reasonable beliefs about artificial intelligence. Still, I think it is fair to say that these are not the folks who take up any space in T H E D I S C O U R S E. I do not mean to implicate Professor LeCun here as some techno-utopian – I am not familiar enough with him to say. Instead, I wanted to foreground his tweet because he's 100% correct, and 50% wrong. The whole discourse surrounding artificial intelligence is steeped in this kind of ideological framing – it's the tool to end all tools, or it's the tool to end us tools. Many parallels have been drawn between the development of nuclear weapons and where we currently are with AI, typically something to the effect of 'it's like nuclear weapons on steroids'. However, few folks take this particular analogy any further.
So, let’s do that for a moment.
Nukes undoubtedly added a whole slew of risks for the planet as a whole, not just humanity. We've all heard the old cliché 'humanity, for the first time, has the ability to destroy itself', usually in conjunction with Oppenheimer quoting the Bhagavad Gita, "Now I am become death, the destroyer of worlds". It's a well-known narrative, often placed front and centre when telling 'The Story of Humanity: 1945–1991'. And it tracks pretty well with the folks so profoundly concerned about AI's ability to ctrl+z humanity as a whole. Cause for concern, no? What is often forgotten, however, is the buck-wild idealism and optimism that harnessing the atom generated. "Soon!" the quaint 1950s radio voice heralded, "You, too, will have an atomically powered home, and atomic appliances, and a car powered by your very own reactor!" Looking back at it, it sounds completely unhinged, but the era heralded by humanity seemingly mastering the atom (and 'seemingly' is doing a lot of work in that sentence) made anything seem possible! Science would lead us to a new world, and a more decent world: one where folks had the chance to work; where youth had a future; and where everybody owned a uranium-powered blender. In short: Atomic Utopia.
Only this was not to be, as all of us without a plutonium-powered shaver or hair-dryer know. The atomic optimism had far overshot its mark, mainly because we knew oh-so little about the atom – far, far less than we realised, it turns out. If 'AI is like nukes on steroids', what does this say about the more optimistic, and even utopian, narratives? If atomic optimism overshot its mark, is AI optimism halfway to the moon by now? One would be tempted to conclude that this might be the case, on balance: AI won't build us the land of hopes and dreams, but it will nonetheless pose a significant threat. So the so-called 'doomers' are correct? Well, not really. The nuclear analogy isn't as apt as it appears at first glance.
Firstly, harnessing the atom is harnessing a literal force of nature; building an AI is attempting to create a machine that thinks with us; for us; alongside us. In both these ways, AI and nukes are extremely different, and the analogy begins to break down on some fundamental level. For example, and without sounding like I'm leaning into AI hype, an artificial intelligence, should one truly be created, could do much more than splitting an atom ever could. Don't get me wrong, the idea of near-endless energy is a tempting one, and would be a real game-changer, but even then the act of atom-splitting would be far less ubiquitous. Similarly, the processes themselves are very different: triggering an atomic chain reaction versus making an algorithm self-aware (if that's even what it takes? Or, indeed, if that is even possible?). The former had been theorised and calculated ahead of its practical implementation; the latter is still shrouded in diffuse philosophy of mind. They become absurd to compare. Secondly, I think it is safe to say that AI will be far more ubiquitous than nuclear reactors, or weapons, ever could be. It is a much more flexible tool, and its myriad potential areas of application change how we will relate to what it can and can't do, or what it should or shouldn't do. None of the above even touches on the context in which either of these technologies was created: who owns them, who controls them, and so on.
In other words, it is time to abandon clunky analogies that seem appropriate at first glance. It is also time to leave behind entrenched ideological positions where the only outcomes that seem to 'matter' are whether AI will turn out to be lawful good or chaotic evil. Viewing emergent technologies through such a binary lens completely absolves those responsible for whatever happens next of any responsibility, instead shifting it onto the AI as the primary actor. This is, of course, not true. It recontextualises technological development as some uncontrollable force of nature; a physical force akin to gravity, electromagnetism, or the weak and strong nuclear forces. There is not much we can do from this perspective, beyond building enough shelters to live with these cosmic winds of 'progress'; or, perhaps worse yet, ushering in your new-fangled God no matter who stands in the way. I have repeatedly noted this framing of technological advancement as a natural or physical phenomenon in my research, and it is particularly present in this whole debate: AI either will or will not exist, and whatever happens after that is simply out of 'our' 'collective' hands.
Focusing on the worst-case/best-case outcomes is a clear giveaway, especially when the timelines appear to lie in some far-flung, often unspecified, future. It is worth noting that the narrative justifying drastic solutions to curtail AI's development, like those mentioned above, skips over all the time between the "now" and the "at some point in the future". This gives the impression that we flip the switch and are instantly Thanos'd, creating a sense of urgency reminiscent of a nuclear flash of light, whilst wholly absolving anyone of responsibility for what took place between the initial switch and the proverbial flash of light. Not only does this absolve those who ought to face the music, but it also robs everyone else of agency.
And, of course, if you’re on the most optimistic side of the spectrum, all of the above is a non-issue!
Understanding that technology is more often than not an inherently emergent phenomenon – that is to say, it doesn't exist as a stable category, but is reiterated and changed and developed even as it exists – is critical to understanding how best to proceed. To illustrate with an example: humans have had hammers for a long time. However, a stone age hammer, a bronze age hammer, and a contemporary hammer all look very different. The materials change, the shape of the head changes, and so on. Even uses change and evolve: from hammers as tools, to war hammers. Over time, 'hammer' is not a stable category. The same goes for effectively any technology. Thus, by looking at AI solely teleologically – from the perspective of its end – you miss the vast gap of everything that happens in between. Especially when the telos is at some far-flung point in the future.
And it is within this gap that the vulnerable fall.
Slavoj Žižek once said that,
‘If there is no God, then everything is permitted’. [… T]his statement is simply wrong. […] It is precisely if there is a God that everything is permitted to those […] who perceive themselves as instruments […] of the Divine Will. If you posit or perceive or legitimise yourself as a direct instrument of Divine Will, […] petty moral considerations disappear. How can you even think in such narrow terms when you are a direct instrument of God? [1]
This quote has really stood out to me throughout my research, mainly because I have observed that this sentiment is a well-established and oft-recurring phenomenon. The debate around AI – what it can and can't do; what it might change; and, perhaps most importantly, who it will affect and how – is too important a discussion to be dominated by the most extreme ideological positions. Because that is what these are, whether you believe in AI-salvation or advocate the use of WMDs. These positions have previously been called longtermist, due to their focus on the extreme long term. This is a discussion I shan't get into here, however.
Instead, I want to attempt to re-focus the conversation on the sort of questions that we must, truly, be asking ourselves when faced with powerful emergent technologies. As I've mentioned several times in this text, I support a moratorium on AI (sans nukes, however…). Still, such a moratorium is only effective if used appropriately. I have briefly alluded above to some critical questions, such as thinking about who this impacts – in practical terms – and what can be done to mitigate adverse impacts on such groups and communities, whilst also using the newfound potential of these technologies to bring about a more equitable and just society. These are essential things, truly, and their importance cannot be overstated (though it is often dismissed; or worse: forgotten).
I think that what is missing most in any discussion surrounding emerging technologies, especially potentially powerful technologies like AI (whether strong or weak), is a discussion around the macro perspective: what is the actual value to society of creating these technologies? It appears to be taken for granted today that these innovations exist to disrupt markets; disrupt society; disrupt the status quo. Disruption is the name of the game, and has been for a long time – though the more on-the-nose language has been dialled back, especially by larger corporations as they face more public scrutiny. None of this means, however, that the underlying attitudes have changed. STS scholar Fred Turner, I think, summarises it well when he says,
I think if you imagine yourself as a disruptor, then you don’t have to imagine yourself as a responsible builder. You don’t have to imagine yourself as a member; as a citizen; as an equal. You’re the disruptor: your job is to disrupt and do whatever it takes to disrupt. Your job is not to help build the state; [to] help build the community; [to] build the infrastructure that made your life possible in the first place. […] I think it’s a matter of how you imagine yourself, and that this is where disruption as an ideology is such a problem. It makes it very difficult to build continuity and community, and […] any egalitarian kind of society. [2]
As we can see with the latest debate around ChatGPT and the subsequent AI-hypefest, governing bodies are playing catch-up. It’s technological whack-a-mole: innovators ‘disrupt’, and legislators reactively clean up any mess left behind.
Bringing the discussion of AI risk into focus – and, by extension, highlighting the core societal questions that ought to be asked more often when it comes to technological development – is a good thing, overall. But as with much else pertaining to technological development, much potential is lost to the exclusionary nature of the debate itself: navel-gazing conversations about human extinction at some undisclosed point in the future at the hands of totally-not-Skynet, whilst people are suffering right now – with many more at risk of suffering, should these technologies be rolled out without a care beyond disruption. It may sound hyperbolic, but this is not the first time we have seen catastrophic short-term effects from short-sighted tech innovation rolled out under the banner of disruption in the name of techno-optimism. Indeed, if the past decade has shown us anything, it's that one does not have to wait for the far-flung future to risk the existence of a whole people: the Rohingya know this well enough already. A moratorium on AI development is probably a good idea. Still, it's equally critical that the time isn't spent discussing nonsensical thought experiments like Roko's Basilisk or the 'paperclip problem'. As it stands, the public debate is handled by prophets, priests, and acolytes who are most likely LARPing a schism over techno-orthodoxy; the underlying assumptions remain shared among them. Critically, these same folks view these developments in purely teleological terms. On the road to their vision of techno-enlightenment, some of us mere mortals will have to die, but this is a sacrifice they are willing to make. As long as our AI overlords don't snuff us out when we finally get there. After all, we are but rudimentary creatures of blood and flesh.
[1] Slavoj Žižek, in "The Pervert's Guide to Ideology", dir. Sophie Fiennes.
[2] Fred Turner, in “Bay Area Disrupted” by Andreas Bick.