AI could explain why we're not meeting any aliens, a new study claims

One of the universe's bigger mysteries seemingly explained.

There are a whole heap of theories out there to explain why, despite the probes we've sent and the messages we've beamed out into space, we don't seem to have found any evidence of alien civilizations.

Now a new research paper published on ScienceDirect theorises that our ongoing experimentation with increasingly sophisticated artificial intelligence (AI) models could be a key part of that silence.

It all hinges on a theory known as the "Great Filter".

The theory posits that, to explain the lack of observable alien civilizations, there must be some natural point at which planetary civilizations hit a blocker to progress, or some destructive event, that prevents them from becoming truly spacefaring.

A new research paper on ScienceDirect explains it all, and it all hinges on a theory known as the "Great Filter" / Apostoli Rossella / Chris Clor / Getty

This means that other civilizations do exist or have existed, but some set of circumstances has stopped them in their tracks, and that drastically fewer civilizations make it past this so-called filter.

According to older reasoning, and depending on the great fear of the day, that filter might have been planetary nuclear war, plagues, asteroid strikes or any number of other potential extinction-level events.

Now, though, a paper from Michael Garrett, of the Department of Physics and Astronomy at the University of Manchester in the UK, has asked whether artificial superintelligence (ASI) should be added to that list, given the risk that it could destroy humanity.

Compared to the AI models we have now, ASI would far outstrip human computational ability, and indeed human intelligence. Some AI researchers think it is far closer than you might assume, potentially arriving in the next few years.

Garrett argues that we desperately need "regulatory frameworks for AI development on Earth and the advancement of a multi-planetary society to mitigate against such existential threats".

This argument chimes with statements we've heard from the likes of Elon Musk, whose obsession with populating Mars rests on the idea that humanity cannot remain tied to Earth alone if it wants to survive potentially lethal events.

Of course, given that Musk owns xAI and is actively pursuing AI development himself, it's not quite as simple as all that, but the logic still stands.

As Garrett puts it, "a multi-planetary biological species could take advantage of independent experiences on different planets, diversifying their survival strategies and possibly avoiding the single-point failure that a planetary-bound civilization faces".

Of course, whether this paper persuades anyone in a position of power to actually put a framework in place is anyone's guess.

It can often feel as though the cogs of power turn too slowly to put quick, reactive measures in place when technology is developing at the breakneck pace the AI sector has exhibited recently.



Featured Image Credit: Getty/MR.Cole_Photographer/David Wall