Utopian and dystopian scenarios of AI do not lead to concrete regulations

Viewing AI through a lens of the distant future undermines the need for political action in the present to tackle the challenges posed by systems like ChatGPT.

OpenAI’s Large Language Models (LLMs) ChatGPT, released in November 2022, and its successor GPT-4, released in March 2023, have attracted a great deal of attention from the general public, generating both promise and concern.

In view of this development, the Future of Life Institute published an open letter, “Pause Giant AI Experiments”, in late March 2023. The authors call for a six-month moratorium on the training of AI systems more powerful than GPT-4, invoking the scenario of a “superintelligence” leading to the extinction of humanity – also known as existential risk or x-risk. According to Future of Life Institute co-founder Jaan Tallinn, rogue AI poses a greater danger than the climate crisis.

The letter asks: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

The letter has drawn considerable criticism, most notably from the Distributed AI Research Institute (DAIR), founded by Timnit Gebru. DAIR argues that the letter’s framing is meant to frighten people while simultaneously marketing supposedly “too powerful” tools that need to be tamed, a narrative that fuels the AI hype.

The ideology underlying the concerns expressed by the Future of Life Institute is so-called longtermism. Its goal is to maximise human well-being in the decades, if not centuries or millennia, to come, at the expense of the present. Prominent proponents of longtermism include the now-disgraced former FTX CEO Sam Bankman-Fried, Twitter and SpaceX CEO Elon Musk, controversial entrepreneur Peter Thiel and transhumanist philosopher Nick Bostrom, who writes: “the expected value of reducing existential risk by a mere one millionth of one percentage point is at least ten times the value of a billion human lives”.

The racist backdrop of longtermism takes the form of what Abeba Birhane calls “digital colonialism”: centuries of oppression are repeated for the benefit of an elite of tech billionaires, whose vision of “the good for humanity” includes colonising space and transcending our mortality.

This techno-utopianism, in which “safe AI” is a prerequisite for the longed-for singularity, distracts from the pressing issues of the present.

Hidden costs of ‘superintelligence’

While these systems appear to be “autonomous” and “intelligent”, they still rely on extensive human labour. As Kate Crawford shows, this starts with the extraction of minerals and the manufacturing of hardware. Next, data, which is often scraped without any consent, needs to be labelled in order to give it meaning, and offensive, sexual or violent content needs to be flagged.

This exploitative, psychologically distressing and underpaid work takes place invisibly in the background. Far from “automating away all the jobs”, the result is a worsening of social inequalities and a centralisation of power.

Another problem with the idea of a “superintelligence” is that it gives the false impression that LLMs are sentient-like entities that understand and perhaps even have feelings or empathy. As a result, people tend to rely too heavily on their output, as in the tragic case of a man who took his own life after interacting with a chatbot for several weeks. A medical chatbot built on GPT-3 likewise suggested committing suicide, or taking up recycling, as ways to overcome sadness. The latter sounds nonsensical, but LLMs merely stitch together words that sound plausible, which can result in absurd, inaccurate, harmful and misleading output, such as an article extolling the benefits of eating crushed glass.

Given this, one might ask what justifies the use of GPT at all: what problem are LLMs actually trying to solve? They also consume energy at an “eye-watering” rate: training a single model like GPT-3 is estimated to consume as much electricity as 120 US households use in a year and to produce carbon dioxide emissions equivalent to those of 110 cars over the same period.

Need for transparency and accountability

The underlying idea of pausing the further training of LLMs until governance catches up therefore seems appealing. The open letter, however, does not state who would be affected by such a pause or how it could be enforced. It is naive to think that every company, university, research institute or individual making use of the various open-source alternatives will simply stop.

At the same time, the LLMs already deployed will continue to operate, along with all their consequences. And yet Microsoft, which has made a multi-billion-dollar investment in OpenAI, and Twitter CEO Elon Musk, who donated $10 million to the Future of Life Institute and sits on its board, have both fired their ethics teams.

As a first response, Italy banned ChatGPT a few days ago, and other European countries are considering doing the same. Yet it is unclear how such bans will affect other applications built on LLMs like ChatGPT or GPT-4.

In view of these various downstream effects, the vague and apocalyptic far-future scenarios portrayed by the Future of Life Institute do not lead to the concrete political measures and regulations that are currently required, and certainly not within the proposed timeframe of six months.

Rather, if the political agenda is driven by the idea of a “superintelligence” controlling humanity, there is a danger that current risks, as well as current solutions, will be ignored. And although LLMs do not pose an existential risk to “our” civilisation, they do pose one to a large, and especially to an already marginalised, part of it.

Even if we want to entertain the idea of a “superintelligence”, it should not be the dominant narrative now. Portraying these models as overly powerful and ascribing some kind of agency to them shifts responsibility away from the companies that develop them.

To hold companies accountable, transparency is needed about how these LLMs were developed and what data they were trained on. Instead, OpenAI, which contrary to its name now operates closed-source, states in its so-called “technical report” on GPT-4 that “this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar”.

This secrecy hinders democratic decision-making, and thus regulation, of the conditions under which LLMs should be developed and deployed. There is no such thing as “the one good AI”; we should therefore not entrust the question of how to build “safe” AI to a comparatively small and privileged group of people who believe that a “superintelligence” is inevitable, which is not the case.

Instead, we need to start by bringing different people, especially those who are already being harmed, into the conversation in order to change the narrative and the power relations.
