Phishing emails used to be fairly easy to spot - an unfamiliar sender address, a scattering of typos, and the gut feeling that something was just off.
But now, scam emails are slipping past spam filters and getting harder to detect.
The National Cyber Security Centre (NCSC) warns that scam emails are set to become even more convincing as AI continues to advance.
The NCSC, part of the GCHQ spy agency, says the growing sophistication of AI tools will make it harder for people to spot phishing messages, in which individuals are tricked into handing over their passwords and personal information.
More specifically, generative AI is now widely and easily available to the public through platforms like ChatGPT. It can produce realistic text, voice and images, meaning almost anyone could use it to trick unsuspecting people.
'Highly capable state actors are almost certainly best placed among cyber threat actors to harness the potential of AI in advanced cyber operations,' the NCSC report said.
The sophistication of AI 'lowers the barrier' to entry for amateur cybercriminals and hackers, enabling attacks that go as far as extracting sensitive data and demanding a cryptocurrency ransom, the report added.
The NCSC also predicts that AI will 'almost certainly' increase the volume and impact of cyber-attacks over the next couple of years. As a result, ransomware attacks like those that hit the British Library and Royal Mail over the past year are expected to become more common.
The Centre's report reads: 'To 2025, generative AI and large language models will make it difficult for everyone, regardless of their level of cybersecurity understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts.'
Tech giant Microsoft claimed: 'Scammers see AI tech as a gold mine for phishing schemes.
'ChatGPT understands about 20 languages, so cyber criminals can create more in-depth, grammatically correct emails in a variety of languages that are harder for both spam filters and the average individual to catch. And email is just the beginning.'
Despite the increasing risk, the NCSC noted that AI is a double-edged sword and can be put to beneficial use as well. As a defensive tool, the technology can help detect potential attacks and design more secure systems.
'It can look for and analyse message context and identify anomalies that signal phishing attacks,' Microsoft concluded.
Furthermore, the UK government has introduced the 'Cyber Governance Code of Practice', which places information security on the same level of importance as financial and legal management.