Depending on how optimistic a person you are, the boom in artificial intelligence (AI) is either the best thing ever to happen to us - or the beginning of humanity's eventual downfall.
Academic Eliezer Yudkowsky firmly falls in the latter camp, and he predicts that the AI apocalypse might come sooner than you think.
"If you put me to a wall and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10," he recently told The Guardian.
California-based Yudkowsky is a researcher with an established history of speaking out against the rise of AI (occasionally controversially).
Yudkowsky seems to feel we're barrelling towards the kind of risks made famous by books and films - including the point of no return where AI becomes self-sufficient and decides that humanity isn't necessary any more.
In the here and now, though, Yudkowsky and other experts who spoke to The Guardian are really concerned with the job losses and downturns in quality that AI might bring with it.
The core question they want to ask is why people (and businesses) couldn't simply choose not to pursue AI, if one of the outcomes could be the loss of jobs.

In fact, Yudkowsky wants things to go further than that, since it would otherwise rely on businesses making ethical choices.
He said: "You could say that nobody’s allowed to train something more powerful than GPT-4. Humanity could decide not to die and it would not be that hard."
But considering how quickly AI is gathering pace, it's hard to see a world where that would happen - particularly since many current uses of AI are geared towards maximising margins and profits.
There are a heap of interesting arguments in the full piece from a variety of sources, including a moment where Yudkowsky clarifies one of his more polarising past viewpoints - that people should be prepared to hit rogue data centres with an air strike as part of an effort to halt the rise of AI.
Now, he says, he'd be more careful with his words - although that's noticeably not a total walking-back of his original point.