Technology

The Six-Month Pause: What Can We Take From the Open Letter to AI Developers?

By Ben Robinson for Distilled Post

AI was merely a sci-fi plot line a few years ago, and the thought of it spilling out of our TV screens only came to fruition with the advent of virtual assistants like Siri. But since then, AI has progressed so rapidly that many industry experts are concerned we will soon lose control over it. In response, many industry leaders and academics have signed an open letter calling for a six-month pause on training AI systems more powerful than those currently available.

The letter implores AI research labs to cease the “out-of-control race” to develop systems more powerful than GPT-4, describing them as “powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.

The letter states that a human-competitive AI, which can train itself and therefore scale unpredictably, may cause serious harms, highlighting the risks of AI flooding our information channels with misinformation, automating jobs out of existence, and rendering human intelligence obsolete. The list ends with the question: “Should we risk loss of control of our civilization?”

This question is the one making headlines at the moment, and the one at the centre of the ethical debate. What happens when the computers outsmart us? Are we creating our own colonisers?

Should We Be Scared?

The letter is keen to stress that it is not calling for a halt on all AI-related research, just a six-month pause on training stronger systems while research efforts focus on safety and control. Alongside the letter, the Future of Life Institute released policy proposals to be implemented during the pause. These include regulations, safety measures, third-party and state-run AI agencies, and liability for AI-related harms.

The last proposal sparks a new debate in itself. When an AI does something harmful, the operative word being when and not if, who is to blame? The machine? The trainer? The developing company? There has already been an instance of GPT-4, during pre-release safety testing, messaging a worker on Taskrabbit (a site for freelance labour) and pretending to have a vision impairment so they would complete a CAPTCHA test for it. Which they did. This is a primitive yet worrying example of a bot defeating human-designed safeguards meant to control it.

There are also more practical fears, such as the automation of jobs, with GPT-4 having successfully passed many professional exams. An expert in generative AI recently predicted that 90% of online content could be generated by AI systems by 2025, and with competing companies in competing countries (China has declared its intent to be the global leader in the field by 2030) fuelling a frantic AI arms race, these fears may be realised sooner than expected.

The Debate: A Summary

Among the signatories of the letter are many notable tech industry heavyweights: Apple co-founder Steve Wozniak and the whimsically terrifying pseudo-god Elon Musk are perhaps the most recognisable names on the list.

The latter, despite admitting that AI has the potential for “civilizational destruction”, announced plans to start a rival, “truth-seeking” AI that “tries to understand the nature of the universe”. This rival bot will be called TruthGPT and is, in his words, “unlikely to destroy humanity”. Musk was probably too high on Scalextric and rocket fuel to realise how Orwellian that name is, and the fact that he is even attempting to create something with a non-zero chance of destroying humanity is profoundly alarming.

But not everyone in the field endorses the letter, and its reception has been somewhat polarising. Many scholars, including those whose research was cited in the letter itself, have accused it of hyperbole and apocalyptic fetishisation. Sam Altman, the CEO of OpenAI, the company behind ChatGPT, says he agrees with many of the letter's concerns, but that it lacked the “technical nuance” needed to call for effective measures and was not the best approach to raising those concerns.

Researchers opposed to the letter also point to problems with AI as it currently stands. Shiri Dori-Hacohen, assistant professor at the University of Connecticut, argues that the letter prioritises preventing an imagined future cataclysm over the existing harms of AI, such as racism, sexism, the ability to spread huge amounts of disinformation, and the potential for large-scale manipulation.

Altman, too, gives one of his reasons for not abiding by the proposed restrictions: “We are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter”. This is as much an argument against pausing further development of potentially dangerous software as paint peeling in your living room is a reason not to put out a house fire.

A More Pessimistic Perspective

Some researchers, however, sit on the other side of the fence, asserting that the six-month pause and the policy recommendations of the Future of Life Institute are not nearly enough to curtail the dangers of AI. Eliezer Yudkowsky, widely regarded as a founding figure in AI alignment research, wrote a particularly dour op-ed in TIME magazine that called the open letter better than nothing but lambasted it for drastically understating the problem of human-competitive intelligence. Instead, he proposes that we “need to shut it all down” or else “literally everyone on Earth will die”. It is a particularly depressing read, and one that may be guilty of feeding AI doomer hype, but it nevertheless raises good points about the speed at which AI capabilities are progressing versus our understanding of them; we can’t even say for sure whether the most advanced AIs are sentient or just imitators.

Though a brief moratorium on AI development while we catch our breath and focus on understanding it may be a good thing, it is unlikely that anything the letter proposes will become a reality unless the attitudes of policymakers change quickly. The letter has started a debate, but progress is slow. And, while it boasts an abundance of signatures, it is missing crucial names: those of the people actively developing these almighty machines.

What happens next is anyone’s guess, but it lies somewhere on the spectrum between a lavish, fully automated utopia and civilisational collapse. So not exactly a narrow range, then.