• News
    • Tech News
    • AI
  • Gadgets
    • Apple
    • iPhone
  • Gaming
    • Playstation
    • Xbox
  • Science
    • News
    • Space
  • Streaming
    • Netflix
  • Vehicles
    • Car News
  • Social Media
    • WhatsApp
    • YouTube
  • Advertise
  • Terms
  • Privacy & Cookies
  • LADbible Group
  • LADbible
  • UNILAD
  • SPORTbible
  • GAMINGbible
  • Tyla
  • FOODbible
  • License Our Content
  • About Us & Contact
  • Jobs
  • Latest
  • Topics A-Z
  • Authors
Facebook
Instagram
X
TikTok
Snapchat
WhatsApp
Submit Your Content
Hacker plants false memories in ChatGPT to prove how easy it is to steal user data

Home> News> AI

Published 12:04 11 Nov 2024 GMT

Hacker plants false memories in ChatGPT to prove how easy it is to steal user data

Exploit in OpenAI's chat software could cause troubling circumstances

Harry Boulton

Harry Boulton

ChatGPT and other AI models have been accused of plagiarizing content since their popularity boom, but you might now need to be worried about them stealing your data.

Since its launch in 2022, OpenAI's ChatGPT has become synonymous with AI and machine learning, allowing users to generate text, translate information, and even build a conversation with the software.

It's inevitable that the service has expanded and improved over time, and it's even got to the point where it can rather creepily message users first.

Advert

However, one dedicated hacker has revealed an exploit in ChatGPT's new 'Memory' technology that not only allows you to implant false information into its storage, but also export that to an exterior source, effectively 'stealing' user data.

Exploit in ChatGPT could let hackers steal your data (Sebastien Bozon/AFP via Getty Images)
Exploit in ChatGPT could let hackers steal your data (Sebastien Bozon/AFP via Getty Images)

As reported by Ars Technica, cybersecurity researcher Johann Rehberger initially reported a vulnerability with ChatGPT's 'Memory' feature that was widely introduced in September 2024.

The feature in question allows ChatGPT to story and effectively 'remember' key personal information between conversations that has been discussed by the user. This can include their age, gender, philosophical beliefs, and much more.

Advert

OpenAI claim that this "makes future chats more helpful," as it means that you don't have to repeat the same information and context every time you start a new conversation as the software can intelligently 'remember' who you are.

The issue with this is that Rehberger realized that you could create and permanently store new fake memories within ChatGPT through a prompt injection exploit.

He managed to get ChatGPT to believe that he was 102 years old and lived in the Matrix, alongside having the chatbot convinced that the earth is flat - something even flat earthers aren't even good at!

The troubling aspect beyond this is that Rehberger, in an extensive proof of concept, was able to export these fake memories to an external website, effectively stealing the data that would otherwise remain private.

Advert

While OpenAI initially dismissed Rehberger's report showing the ability to create false memories, the company has since issued a patch that prevents ChatGPT from moving information from it's server. The ability to create false memories, however, still remains.

This issue raises continued concerns about the security of AI software like ChatGPT, and that sentiment is shared across social media too.

A recent post on the r/ChatGPT subreddit expresses these worries too.

The poster posits the question of whether anyone else is "concerned about how much ChatGPT (and more importantly, OpenAI) know about you," and this recent security flaw certain emboldens these fears.

Advert

Some are willing to gloss over any issues though, with one commenter claiming that "it crosses my mind on occasion, but given the internet already knows so much about me, I think the good ship Privacy has already sailed."

Considering there's worries that even our air fryers are selling our data, perhaps ChatGPT isn't the only place we should be looking.

Featured Image Credit: SEBASTIEN BOZON / Contributor / Chad Baker / Getty
Cybersecurity
ChatGPT
AI
Tech News

Advert

Advert

Advert

Choose your content:

12 hours ago
13 hours ago
14 hours ago
18 hours ago
  • 12 hours ago

    3D-printed pancreas cells could offer the future of diabetes treatment in world-first breakthrough

    This could prove to be a major medical achievement

    Science
  • 13 hours ago

    Eye-watering amount Mark Zuckerberg is offering OpenAI employees to poach them to Meta

    Many might be tempted by the switch

    News
  • 14 hours ago

    Man who performed neurosurgery on himself to help 'control' his dreams suffered horrifying consequences

    He used a household drill to perform the procedure

    Science
  • 18 hours ago

    Man endured disturbing consequences after injecting himself with own sperm to fix chronic back pain

    We don't know what would possess you to do this

    News
  • Shocking data reveals 'true cost' of being polite to ChatGPT
  • Steve Jobs 'predicts ChatGPT' in fascinating footage from 40 years ago
  • ChatGPT urges user to warn the public as it makes shock admission that it's trying to 'break' people
  • Reddit user shares 'wild' ChatGPT prompt that gives you a full CIA intelligence report about your life