• News
    • Tech News
    • AI
  • Gadgets
    • Apple
    • iPhone
  • Gaming
    • Playstation
    • Xbox
  • Science
    • News
    • Space
  • Streaming
    • Netflix
  • Vehicles
    • Car News
  • Social Media
    • WhatsApp
    • YouTube
  • Advertise
  • Terms
  • Privacy & Cookies
  • LADbible Group
  • LADbible
  • UNILAD
  • SPORTbible
  • GAMINGbible
  • Tyla
  • FOODbible
  • License Our Content
  • About Us & Contact
  • Jobs
  • Latest
  • Topics A-Z
  • Authors
Facebook
Instagram
X
TikTok
Snapchat
WhatsApp
Submit Your Content
Hacker plants false memories in ChatGPT to prove how easy it is to steal user data

Home> News> AI

Published 12:04 11 Nov 2024 GMT

Hacker plants false memories in ChatGPT to prove how easy it is to steal user data

Exploit in OpenAI's chat software could cause troubling circumstances

Harry Boulton

Harry Boulton

Featured Image Credit: SEBASTIEN BOZON / Contributor / Chad Baker / Getty
Cybersecurity
ChatGPT
AI
Tech News

Advert

Advert

Advert

ChatGPT and other AI models have been accused of plagiarizing content since their popularity boom, but you might now need to be worried about them stealing your data.

Since its launch in 2022, OpenAI's ChatGPT has become synonymous with AI and machine learning, allowing users to generate text, translate information, and even build a conversation with the software.

It's inevitable that the service has expanded and improved over time, and it's even got to the point where it can rather creepily message users first.

Advert

However, one dedicated hacker has revealed an exploit in ChatGPT's new 'Memory' technology that not only allows you to implant false information into its storage, but also export that to an exterior source, effectively 'stealing' user data.

Exploit in ChatGPT could let hackers steal your data (Sebastien Bozon/AFP via Getty Images)
Exploit in ChatGPT could let hackers steal your data (Sebastien Bozon/AFP via Getty Images)

As reported by Ars Technica, cybersecurity researcher Johann Rehberger initially reported a vulnerability with ChatGPT's 'Memory' feature that was widely introduced in September 2024.

The feature in question allows ChatGPT to story and effectively 'remember' key personal information between conversations that has been discussed by the user. This can include their age, gender, philosophical beliefs, and much more.

Advert

OpenAI claim that this "makes future chats more helpful," as it means that you don't have to repeat the same information and context every time you start a new conversation as the software can intelligently 'remember' who you are.

The issue with this is that Rehberger realized that you could create and permanently store new fake memories within ChatGPT through a prompt injection exploit.

He managed to get ChatGPT to believe that he was 102 years old and lived in the Matrix, alongside having the chatbot convinced that the earth is flat - something even flat earthers aren't even good at!

The troubling aspect beyond this is that Rehberger, in an extensive proof of concept, was able to export these fake memories to an external website, effectively stealing the data that would otherwise remain private.

Advert

While OpenAI initially dismissed Rehberger's report showing the ability to create false memories, the company has since issued a patch that prevents ChatGPT from moving information from it's server. The ability to create false memories, however, still remains.

This issue raises continued concerns about the security of AI software like ChatGPT, and that sentiment is shared across social media too.

A recent post on the r/ChatGPT subreddit expresses these worries too.

The poster posits the question of whether anyone else is "concerned about how much ChatGPT (and more importantly, OpenAI) know about you," and this recent security flaw certain emboldens these fears.

Advert

Some are willing to gloss over any issues though, with one commenter claiming that "it crosses my mind on occasion, but given the internet already knows so much about me, I think the good ship Privacy has already sailed."

Considering there's worries that even our air fryers are selling our data, perhaps ChatGPT isn't the only place we should be looking.

Choose your content:

2 mins ago
an hour ago
2 hours ago
3 hours ago
  • Thomas Fuller/SOPA Images/LightRocket via Getty Images
    2 mins ago

    National organization slams latest x-rated ChatGPT update with warning to users

    The OpenAI chatbot will introduce an 'erotica' update

    News
  • Orhan Turan / Getty
    an hour ago

    ChatGPT reveals horrifying timeline for when 'dead internet theory' will take hold

    It asserts when the internet will be taken over by bots and AI

    News
  • Catherine Falls Commercial/Getty Images
    2 hours ago

    Identical twin study shows shocking results after one twin smoked while the other didn't

    It details the effects smoking has on your appearance

    Science
  • MoMo Productions via Getty
    3 hours ago

    Government worker loses access to 'nuclear secrets' after storing 'robot porn' on work PC

    Over 187,000 images were found on the computer

    News
  • Exactly how ChatGPT flagged eerie question asked by arrested 13-year-old to police
  • Controversial new app hits over 1m downloads in less than five days as it dwarfs ChatGPT
  • Shocking data reveals 'true cost' of being polite to ChatGPT
  • ChatGPT urges user to warn the public as it makes shock admission that it's trying to 'break' people