"Shut it all down": Machine learning researcher Eliezer Yudkowsky sounds alarm on AI evolution

Apr 11, 2023 | Posted by MadalineDunn

As AI continues its rapid evolution, machine learning researcher Eliezer Yudkowsky is sounding the alarm, arguing that AI development should not merely be paused, as recommended by the Future of Life Institute, but stopped entirely. Yudkowsky has been warning about the dangers of AI for over two decades, and says that without immediate intervention, the dystopian future he has long predicted will become a reality.

The Future of Life Institute's recent open letter proposes a six-month moratorium on training AI systems more powerful than GPT-4, citing concerns around "human-competitive" AI, and has been signed by the likes of SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, and Andrew Yang. Yudkowsky, however, while expressing respect for the signatories, said he refrained from signing because the letter does not go far enough. "I think the letter is understating the seriousness of the situation and asking for too little to solve it," he wrote in an opinion piece for Time. Further, Yudkowsky argued that the key issue is not "human-competitive" intelligence but what happens when AI surpasses human intelligence.

In a chilling statement, Yudkowsky said: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die." He added that surviving AI smarter than humans would require precision, preparation, and new scientific insights, and "probably not having AI systems composed of giant inscrutable arrays of fractional numbers." Yudkowsky also warned that it is "imaginable" that a research lab could "cross critical lines" and create this kind of AI without even noticing.

Absent that preparation, Yudkowsky makes some deeply disconcerting projections for the evolution of AI, suggesting that a sufficiently intelligent AI won't "stay confined to computers for long." He cites the ability to email DNA strings to laboratories as a method through which, hypothetically, an AI could produce proteins on demand, allowing it to reach into the physical world. Regarding AI sentience, Yudkowsky says that considering "how little insight" we have into these systems' internals, "we do not actually know," and he highlighted the moral implications of this uncertainty.

Yudkowsky's solution to the dystopian future of "hostile" superhuman AGI is to "just shut it all down," by any means necessary. Six months without a plan for how to deal with AI is not enough, he says.

Yudkowsky instead suggests that all the large GPU clusters should be shut down, along with all the large training runs. "Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms," he said, adding that there should be "no exceptions for governments and militaries." He continued: "Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data center by airstrike."

His stark warning is that we are not ready: "We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong."

Yudkowsky has long been a critic of AI development, labelled by some an AI doomer, and with such dramatic statements, it's easy to react with disbelief. When it comes to the AI containment problem, however, he is not alone in his grim predictions, and the risks of having no international regulatory framework around AI are becoming increasingly clear. Not that this has stopped the likes of Microsoft from pressing ahead with GPT-4.

On top of this, in recent weeks, ex-Google CEO Eric Schmidt, who had previously raised concerns about AI's development, said he didn't support even a six-month moratorium because it would give countries like China an edge. Others, such as Alphabet CEO Sundar Pichai, have said they support regulation over a moratorium, while Microsoft co-founder Bill Gates has also rejected the call to stop AI development, citing its "exciting" potential to address some of the world's "worst inequities." Overall, though, the general consensus is that the pause many believe is necessary is unlikely to happen; the train has already left the station. Only time will tell.


