
One CEO backed by Jeff Bezos has offered his own frightening prediction for the future of artificial intelligence, as his AI research company has forecast major changes in store for as early as 2027.
Artificial intelligence has undergone rapid developments in the past few years, as even the best AI models from a few years ago pale in comparison to what is available right now.
Naturally, this has led to some optimistic predictions for the future of AI tech; if it continues to scale at a similar pace, who knows what it could achieve?
One major goal that's desired by large parts of the industry is what's known as 'artificial general intelligence' (AGI), which marks the point at which AI can match and even potentially exceed the intellect and performance of humans.
This has been a dream for major industry players such as OpenAI's Sam Altman, who previously described his efforts in the AI world as creating a 'Manhattan Project' of sorts.

The CEO of Google's AI lab DeepMind has theorized that this could be achieved in the next five to ten years, but Anthropic CEO Dario Amodei has now suggested it could arrive as early as 2027.
Backed by major figures such as Jeff Bezos, Amodei has outlined a vision of a "powerful AI" that could exceed the intellect of a Nobel Prize winner, as reported by AutoBlogging.
One thing that perhaps separates Amodei's optimism from many other key figures in the AI world is his insistence on maintaining ethical practices in its development and utilization.
"We must recognize that the future is not determined by inevitability but shaped by our actions," he proposes, adding that "these powerful tools have capabilities that frequently exceed our expectations. The challenge is managing their integration responsibly."
It's certainly easy to become frightened when faced with technology that can outsmart us, and even Geoffrey Hinton - otherwise known as the 'godfather of AI' - has argued that AGI-like technology could lead to humanity's eradication in the next few decades.

He emphasizes that while AGI could unlock significant advances in medicine - similar to what Altman's US government-backed project proposes in relation to cancer treatment - safely controlling the improvements AI makes to itself is vital to avoid creating dangerous or unstable technologies.
"If AI can independently conduct R&D, that is when we must elevate our safety protocols to new levels," argues Amodei - and it's certainly unsettling to think what could happen if AI drives its own development in an unregulated or mishandled way.