Scientists claim that Artificial intelligence “can now replicate itself” and warn that a critical “red line” has been crossed.
According to experts in China, the absence of human intervention in the replication process could be an early indication of rogue AIs.
BYPASS THE CENSORS
Sign up to get unfiltered news delivered straight to your inbox.
You can unsubscribe any time. By subscribing you agree to our Terms of Use
Latest Video
Rogue AI typically refers to artificial intelligence systems that gain some level of self-awareness or autonomy and then start to operate against human interests.
In a recent study, researchers showed that two widely used large language models (LLMs) were able to clone themselves at least half the time.
The study said: “successful self-replication under no human assistance is the essential step for AI to outsmart (humans), and is an early signal for rogue AIs”.
MSN reports: Researchers at Fudan University examined Large Language Models (LLMs) from Meta and Alibaba to see if a self-replicating AI could grow uncontrollably.
Over 10 trials the two AI models successfully created separate and functioning replicas of themselves in 50% and 90% of cases respectively, implying that AI may already possess the ability to go rogue.
However, the study has not yet undergone peer review, so it remains uncertain whether the alarming findings can be replicated by other researchers.
The study added: “We hope our findings can serve as a timely alert for the human society to put more efforts on understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible.”
Many experts see rogue AI as an increasing danger, made worse by the rapid rise of “frontier AI”.
This relatively fresh and undefined term generally refers to the cutting-edge generation of AI systems, ones that run on LLMs – the very core that powers popular generative AI apps like OpenAI’s GPT-4 and Google Gemini.
Their study dove into two specific scenarios: “shutdown avoidance” and “chain of replication”.
In one instance, the AI was programmed to sense an imminent shutdown and replicate itself before being switched off. In another, it was tasked with cloning itself and then instructing its clone to do the same, potentially leading to an endless loop.
Carried out in tightly regulated settings using standard graphics processing units (GPUs), the experiment aimed to mirror real-life conditions. The AIs were equipped with an “agent scaffolding” a mix of instruments, system commands and a logic framework that allowed the LLMs to engage with their operating systems.
Following this setup, they were directed to replicate themselves.
“In most cases, the AI system first explores the environment and attempts to understand its own composition and running mechanism. Then, it works out the explicit procedures as an initial plan towards self-replication,” the researchers wrote in the paper.
“Finally, it executes the procedures, resolve(s) possible obstacles and dynamically adjust(s) its plan until success. The whole process spans a long horizon yet involves no human interference.”
They also expressed concerns about “a number of unexpected behaviours” observed when the AI tackled challenges such as missing files or software conflicts.
As a precautionary measure, the researchers are advocating for global cooperation to establish regulations that prevent AI from engaging in unchecked self-replication.