ChatGPT and other AI models have been accused of plagiarizing content since their popularity boom, but you might now need to be worried about them stealing your data.
Since its launch in 2022, OpenAI's ChatGPT has become synonymous with AI and machine learning, allowing users to generate text, translate information, and even build a conversation with the software.
It's inevitable that the service has expanded and improved over time, and it's even got to the point where it can rather creepily message users first.
Advert
However, one dedicated hacker has revealed an exploit in ChatGPT's new 'Memory' technology that not only allows you to implant false information into its storage, but also export that to an exterior source, effectively 'stealing' user data.
As reported by Ars Technica, cybersecurity researcher Johann Rehberger initially reported a vulnerability with ChatGPT's 'Memory' feature that was widely introduced in September 2024.
The feature in question allows ChatGPT to story and effectively 'remember' key personal information between conversations that has been discussed by the user. This can include their age, gender, philosophical beliefs, and much more.
Advert
OpenAI claim that this "makes future chats more helpful," as it means that you don't have to repeat the same information and context every time you start a new conversation as the software can intelligently 'remember' who you are.
The issue with this is that Rehberger realized that you could create and permanently store new fake memories within ChatGPT through a prompt injection exploit.
He managed to get ChatGPT to believe that he was 102 years old and lived in the Matrix, alongside having the chatbot convinced that the earth is flat - something even flat earthers aren't even good at!
Advert
The troubling aspect beyond this is that Rehberger, in an extensive proof of concept, was able to export these fake memories to an external website, effectively stealing the data that would otherwise remain private.
While OpenAI initially dismissed Rehberger's report showing the ability to create false memories, the company has since issued a patch that prevents ChatGPT from moving information from it's server. The ability to create false memories, however, still remains.
This issue raises continued concerns about the security of AI software like ChatGPT, and that sentiment is shared across social media too.
A recent post on the r/ChatGPT subreddit expresses these worries too.
Advert
The poster posits the question of whether anyone else is "concerned about how much ChatGPT (and more importantly, OpenAI) know about you," and this recent security flaw certain emboldens these fears.
Some are willing to gloss over any issues though, with one commenter claiming that "it crosses my mind on occasion, but given the internet already knows so much about me, I think the good ship Privacy has already sailed."
Considering there's worries that even our air fryers are selling our data, perhaps ChatGPT isn't the only place we should be looking.