Google removes major AI policy that could have serious implications for global war

The tech giant has been committed to responsible AI practices since 2018

Google has made some serious amendments to its AI policy, which could have severe implications around the world.

If you weren't already worried about headlines saying the end is nigh, Google has been accused of taking the training wheels off artificial intelligence by removing a major policy keeping the ever-advancing tech in line.

There are plenty of warnings out there from so-called 'Godfathers' of AI suggesting that the potential dangers of AI's applications outweigh the good. Yoshua Bengio has just warned that a military arms race could exploit AI, while Geoffrey Hinton has doubled his earlier estimate that there's a 10% chance AI could wipe us out, now putting it at 20%.

While potential what-if scenarios were enough to worry about, Google is arguably making it easier than ever for AI to be used to eradicate the human race.

As spotted by Bloomberg, Google has quietly removed the policy that's been in place since 2018 - promising to keep AI away from 'harmful' applications.

Artificial intelligence is tipped to lead the next world war (mikkelwilliam / Getty)

Google's AI Principles page previously said it wouldn't pursue AI applications including "technologies that cause or are likely to cause overall harm." This covered weapons and surveillance tech that violate "internationally accepted norms." With that language no longer on the page, there are obvious concerns.

When Bloomberg reached out to Google for a response, it pointed the outlet to a blog post shared on February 4.

The 'Responsible AI' post explains: "There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape.”

Google Senior Vice President James Manyika and Demis Hassabis, who leads the AI lab Google DeepMind, wrote: "We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights.

"And we believe that companies, governments and organizations sharing these values should work together to create AI that protects people, promotes global growth and supports national security."

Margaret Mitchell, who co-led Google's ethical AI team and is now Chief Ethics Scientist at AI startup Hugging Face, told Bloomberg what the specific removal of the 'harm' clause could mean: "Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically it means Google will probably now work on deploying technology directly that can kill people."

There are concerns about Google changing its AI principles (Bloomberg / Contributor / Getty)

The recent emergence of China's DeepSeek has sparked concerns that OpenAI will rapidly accelerate its own AI without necessarily thinking of the long-term consequences, while others think Google altering its responsible AI principles should be setting off alarm bells.

Tracy Pizzo Frey, who oversaw Responsible AI at Google Cloud from 2017 to 2022, shared her thoughts on the original principles: "They asked us to deeply interrogate the work we were doing across each of them.

"And I fundamentally believe this made our products better. Responsible AI is a trust creator. And trust is necessary for success."

When the AI principles were first put in place in 2018, thousands of Google employees insisted the company "should not be in the business of war," but jump forward to 2025 and it feels like we're nudging closer to the Skynet future of the Terminator movies.

Featured Image Credit: SOPA Images / Contributor / Getty