To make sure you never miss out on your favourite NEW stories, we're happy to send you some reminders

Click 'OK' then 'Allow' to enable notifications

Hacker plants false memories in ChatGPT to prove how easy it is to steal user data

Home> News> AI

Hacker plants false memories in ChatGPT to prove how easy it is to steal user data

Exploit in OpenAI's chat software could cause troubling circumstances

ChatGPT and other AI models have been accused of plagiarizing content since their popularity boom, but you might now need to be worried about them stealing your data.

Since its launch in 2022, OpenAI's ChatGPT has become synonymous with AI and machine learning, allowing users to generate text, translate information, and even build a conversation with the software.

It's inevitable that the service has expanded and improved over time, and it's even got to the point where it can rather creepily message users first.

However, one dedicated hacker has revealed an exploit in ChatGPT's new 'Memory' technology that not only allows you to implant false information into its storage, but also export that to an exterior source, effectively 'stealing' user data.

Exploit in ChatGPT could let hackers steal your data (Sebastien Bozon/AFP via Getty Images)
Exploit in ChatGPT could let hackers steal your data (Sebastien Bozon/AFP via Getty Images)

As reported by Ars Technica, cybersecurity researcher Johann Rehberger initially reported a vulnerability with ChatGPT's 'Memory' feature that was widely introduced in September 2024.

The feature in question allows ChatGPT to story and effectively 'remember' key personal information between conversations that has been discussed by the user. This can include their age, gender, philosophical beliefs, and much more.

OpenAI claim that this "makes future chats more helpful," as it means that you don't have to repeat the same information and context every time you start a new conversation as the software can intelligently 'remember' who you are.

The issue with this is that Rehberger realized that you could create and permanently store new fake memories within ChatGPT through a prompt injection exploit.

He managed to get ChatGPT to believe that he was 102 years old and lived in the Matrix, alongside having the chatbot convinced that the earth is flat - something even flat earthers aren't even good at!

The troubling aspect beyond this is that Rehberger, in an extensive proof of concept, was able to export these fake memories to an external website, effectively stealing the data that would otherwise remain private.

While OpenAI initially dismissed Rehberger's report showing the ability to create false memories, the company has since issued a patch that prevents ChatGPT from moving information from it's server. The ability to create false memories, however, still remains.

This issue raises continued concerns about the security of AI software like ChatGPT, and that sentiment is shared across social media too.

A recent post on the r/ChatGPT subreddit expresses these worries too.

The poster posits the question of whether anyone else is "concerned about how much ChatGPT (and more importantly, OpenAI) know about you," and this recent security flaw certain emboldens these fears.

Some are willing to gloss over any issues though, with one commenter claiming that "it crosses my mind on occasion, but given the internet already knows so much about me, I think the good ship Privacy has already sailed."

Considering there's worries that even our air fryers are selling our data, perhaps ChatGPT isn't the only place we should be looking.

Featured Image Credit: SEBASTIEN BOZON / Contributor / Chad Baker / Getty