
As the human race leans on artificial intelligence more than ever, it feels like we're nudging closer to a supposed technological apocalypse. We've seen enough Terminator movies to know what happens when AI goes wrong, and while Skynet is thankfully just a work of fiction, there are plenty of real-life companies that could take up its legacy.
You know things are bad when the so-called 'Godfathers of AI' are warning us about what it could do, and with artificial intelligence supposedly capable of wiping out the human race in the next two years, we're right to be worried.
AI has even been pitched as the catalyst that could trigger World War 3, and it might already be too late, thanks to warnings that AI could try to kill humans who want to shut it down.
Still, here we are as the likes of Elon Musk's xAI and Sam Altman's OpenAI continue to boom. Unlike Musk's controversial Grok, ChatGPT is seen as the tamer option.
However, Altman's grand plans to release 'AI agents' could be pushing us closer to an apocalyptic 'AI 2027' prophecy.
Away from Nostradamus and Baba Vanga predicting our end of days, 'AI 2027' is a theory put forward by researchers and experts, claiming that artificial superintelligence will emerge in the next two years.

This could be the beginning of the end, with artificial superintelligence models potentially able to work toward their own goals that are 'misaligned' with the human race. After all, if AI realized humanity is the biggest threat to the planet, what's to stop it from eliminating us for the good of the world?
OpenAI has hyped ChatGPT Agent as being able to work for you by using its own computer, and for those worried about giving AI its own tools, you aren't alone.
Agents are machine learning tools that can perform multi-step tasks, currently able to do the likes of calendar planning and creating financial presentations.
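To illustrate the idea of an agent carrying out a multi-step task, here is a minimal toy sketch in Python. Everything in it is hypothetical: the tool names, the plan format, and the dispatch loop are illustrative stand-ins, not OpenAI's actual Agent API, and a real agent would have a language model deciding the steps rather than a hard-coded plan.

```python
# Toy sketch of a multi-step agent loop. All names are illustrative,
# not part of any real OpenAI API. A real agent would generate the
# plan with a model; here the plan is hard-coded.

def add_calendar_event(state, step):
    """Tool: record an event in the shared state."""
    state["calendar"].append(step["event"])
    return state

def build_slide(state, step):
    """Tool: add a slide title to a presentation."""
    state["slides"].append(step["title"])
    return state

# Registry mapping tool names to functions the agent may call.
TOOLS = {"calendar": add_calendar_event, "slides": build_slide}

def run_agent(plan):
    """Execute a multi-step plan by dispatching each step to its tool."""
    state = {"calendar": [], "slides": []}
    for step in plan:
        state = TOOLS[step["tool"]](state, step)
    return state

plan = [
    {"tool": "calendar", "event": "Budget review, Friday 10:00"},
    {"tool": "slides", "title": "Q3 financial summary"},
]
result = run_agent(plan)
print(result["calendar"])  # the scheduled event ends up in state
```

The point of the sketch is the loop itself: the agent holds state across steps and picks a tool per step, which is what separates an agent from a single one-shot chatbot reply.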
As reported by Wired, ChatGPT Agent lead Lisa Fulford used it to order very specific cupcakes, with the task taking about an hour. She praised it, saying she simply couldn't be bothered to do it herself.
Altman has warned that ChatGPT Agent shouldn't be relied on for complex tasks. Posting on X, the OpenAI CEO mused: "I would explain this to my own family as cutting edge and experimental; a chance to try the future but not something I’d yet use for high-stakes uses or with a lot of personal information until we have a chance to study and improve it in the wild."
Even more concerning, Altman admitted: "Bad actors may try to 'trick' users' AI agents into giving private information they shouldn't and take actions they shouldn't, in ways we can't predict."
Over on Reddit, people were obviously doomsaying about where things could go next. Suggesting that Agent will be bigger than the much-hyped GPT-5, one person said: "I also think the same, they just released agent 0 lets see if AI 2027 will prove itself and give us agent 1 by the end of 2025."
Another added: "It’s a good example of the old saying (I’m paraphrasing): people wanted a faster horse until you showed them a car."
A third concerned Redditor said: "Like others have pointed out I believe it’s Agent 0, but it’s notable, & a feather in their cap, that ai2027’s first forecast of 'stumbling agents' by mid-2025 are here."
Despite Altman saying there are strict protocols and barriers currently in place, he concludes that researchers "can't anticipate everything." If that doesn't sound like trying to distance yourself from a potential AI apocalypse, nothing does.