Scientists have issued a worrying warning that AI has crossed a ‘red line’ and can now replicate itself.
Advancements in artificial intelligence have accelerated in recent years, and with the introduction of the likes of OpenAI and Apple Intelligence, AI has become a staple feature in most smartphones.
But while the technology continues to get smarter, experts now fear that there could be consequences.
This comes after scientists revealed that AI has gained a new ability to replicate itself.
The discovery was made by a team of researchers at Fudan University in China.
They found that two large language models (LLMs), a type of AI able to understand, predict and generate human-like text, were also able to clone themselves.
During experiments with LLMs from Meta and Alibaba, in which they tested whether the AI was able to go rogue, the group made some shocking findings.
It was uncovered that the two models were able to create replicas of themselves in around 50% and 90% of trials respectively.
The published study read: “Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs.”
The researchers went on to say that they hope the study will ‘serve as a timely alert’ to ‘put more efforts on understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible’.
Many people took to social media to share their own reactions to the study.
On Reddit, one user wrote: “A new self-replicating, evolving lifeform in a way. Scary stuff, because the potential outcome is beyond our imagination and it could happen anytime now.”
And another person added: “Most people would guess that the scary part is in their going rogue, and doing something like creating a paper clip factory that subsequently extincts humanity.
“That prospect doesn't scare me because my understanding is that ethics and intelligence are far more strongly correlated than most of us realize, and that the more intelligent AIs become, the more ethically they will behave. If we initially align it to serve human needs, and not be a danger to us, it's reasonable to suppose that it would get better and better at this alignment with each iteration.”