If you weren't already worried that the end of days was coming thanks to heightened politics and the looming threat of someone pressing the big red button, we're also warned about the advancing dangers of artificial intelligence.
Oh great, another thing to be worried about.
We've all heard those concerns that AI could be the end of the human race, with James Cameron's dystopian future in Terminator 2: Judgment Day not feeling as fanciful as it did a few years ago.
There are still a few kinks to iron out when it comes to AI, as even the world's most advanced robots don't look like they'll be hunting us down like T-800 Terminators just yet.
It's clear we're on the cusp of a boom in AI, and with fears it could go either way, one tech expert thinks the invention of an artificial superintelligence could spell the end of humanity as we know it.
As suggested by tech expert Bernard Marr, writing for Forbes, today's AI is little more than a calculator when compared to the human brain.
Even the most advanced AI is great at specific tasks but lacks a general understanding of the bigger picture to figure out how it could evolve beyond its own limits.
That could all change with the invention of Artificial General Intelligence (AGI), which would match the abilities of humans across 'all cognitive domains.'
If that wasn't enough, Artificial Superintelligence (ASI) is tipped to 'rewrite the rules of existence itself.' We're in scary times.
While our intelligence is limited by our own bodies and the need to sleep, ASI would operate at digital speeds and solve problems millions of times faster than we could ever dream of. Being able to digest and understand every scientific paper ever written in an afternoon, ASI could trigger an 'intelligence explosion' where AI outpaces human intelligence in the blink of an eye.
Marr says that ASI might not be the doom that some predict, suggesting it could cure diseases, reverse aging, and solve global warming. But away from solving humanity's greatest problems, he warns it could equally try to eradicate humanity if its values don't align with ours.
He gives a terrifying example of us tasking ASI with eliminating cancer.
What's to stop it from realizing that the only way to eradicate cancer is to get rid of all biological life - wiping us out like the Red Queen from Resident Evil?
Marr points out that we're in a race against time and must now confront questions of governance and ethics.
He says we need to decide who is in charge of these systems, and how we ensure they stay aligned with our goals rather than rewriting their own code.
He concludes that while ASI is 'likely inevitable,' it's how we prepare for its arrival that will decide the future of the human race.
Marr suggests that we need to invest in AI safety research, build ethical frameworks, and ensure international cooperation.
As Marr himself writes: "We stand on the brink of potentially the most significant technological leap in human history, our actions today will determine whether superintelligent AI becomes humanity's greatest achievement or its last invention."
There's a potentially bright future ahead of us, but then again, it could be a short one if we don't keep AI in line.