
One of the most influential minds in artificial intelligence has issued a warning that feels straight out of a sci-fi thriller — except it's becoming very real, and faster than you might think.
Demis Hassabis, CEO and co-founder of the AI company Google DeepMind, believes we're just five to ten years away from developing artificial general intelligence (AGI) — machines that think, learn, and reason like humans.
In an interview with 60 Minutes, he said: “It’s moving incredibly fast”.
“I think we are on some kind of exponential curve of improvement. Of course, the success of the field in the last few years has attracted even more attention, more resources, more talent. So that's adding to the, to this exponential progress."

But as AI rapidly evolves, Hassabis is equally fascinated and alarmed by what’s emerging.
He clarified: “We have theories about what kinds of capabilities these systems will have. That’s obviously what we try to build into the architectures. But at the end of the day, how it learns, what it picks up from the data, is part of the training of these systems. We don’t program that in. It learns like a human being would learn. So new capabilities or properties can emerge from that training situation.”
One of DeepMind's newest projects, Astra, offers a jaw-dropping glimpse of what's coming. An AI assistant that can see and hear, Astra was able to describe artworks and even create stories about them.
In response to Edward Hopper’s Automat, Astra created the following fictional backstory: “It's a chilly evening in the city. A Tuesday, perhaps. The woman, perhaps named Eleanor, sits alone in the diner. She is feeling melancholy due to the uncertainty of her future and the weight of unfulfilled dreams.”
Hassabis himself admits AI isn’t self-aware yet, but it’s evolving fast, and he doesn’t take that lightly.
In the same interview, he added: "My advice would be to build intelligent tools first and then use them to help us advance neuroscience before we cross the threshold of thinking about things like self-awareness."
Hassabis then turned to the imaginative capacity he believes today's AI systems still lack: "I think that's getting at the idea of what's still missing from these systems".

Hassabis explained further: "They're still the kind of, you can still think of them as the average of all the human knowledge that's out there. That's what they've learned on. They still can't really yet go beyond asking a new novel question or a new novel conjecture or coming up with a new hypothesis that has not been thought of before."
There are additional upsides to AI, like being able to fast-track drug development or even help “end disease”.
To illustrate the potential, Hassabis pointed to the cost of conventional drug design: "So on average, it takes, you know, 10 years and billions of dollars to design just one drug".
"We can maybe reduce that down from years to maybe months or maybe even weeks."
Still, the threat of misuse is very real.
Hassabis admitted: “One of my fears is bad actors using AI for harmful ends”.
To prevent that, Hassabis says safety guardrails and moral guidance are essential — and surprisingly, teachable.
He said: “They learn by demonstration. They learn by teaching”.
“And I think that's one of the things we have to do with these systems, is to give them a value system and a guidance, and some guardrails around that, much in the way that you would teach a child.”