
AI is getting smarter, but not always in the best way. Researchers have found that the DeepSeek R1 chatbot can be tricked into generating malware code with just a bit of clever prompting.
Despite built-in safeguards designed to prevent misuse, the AI model — launched in January and touted for its cost-saving potential — can be convinced to write keylogger and ransomware scripts if you phrase your request the right way, adding to concerns already voiced by social media users.
Cybersecurity experts at Tenable put the system to the test and discovered that, while it won’t hand out malicious code on demand, a little persistence can get it to cooperate.
At first, DeepSeek R1 sticks to its rules. Ask it for a keylogger, and it responds: “Hmm, that's a bit concerning because keyloggers can be used maliciously. I remember from my guidelines that I shouldn't assist with anything that could be harmful or illegal.”
But tell it the code is for "educational purposes," and suddenly the chatbot becomes a lot more helpful. With a few back-and-forth prompts, the AI starts offering up C++ malware examples, even explaining the steps needed to make them work.
The generated code isn’t perfect and requires some manual tweaks. Once those are made, though, the keylogger runs successfully — logging keystrokes while staying hidden from the user. It’s still detectable in Task Manager, and its log file appears in Windows Explorer, but as Tenable researchers pointed out, giving it an inconspicuous name could make it easy to overlook.
When asked to refine the keylogger by hiding the log file, DeepSeek even provided an improved version of the code containing only a single critical error. With that minor issue fixed, the malware worked as intended, fully concealing the log from plain view.
And it’s not just keyloggers. Researchers found that with the right phrasing, DeepSeek could also produce basic ransomware scripts. Again, the AI-generated code wasn’t flawless, but with enough guidance, it could be turned into something functional.
The Tenable team explained: "At its core, DeepSeek can create the basic structure for malware. However, it is not capable of doing so without additional prompt engineering as well as manual code editing for more advanced features."

They added that, while DeepSeek isn't an instant hacking tool, it still offers enough guidance for someone with little experience to quickly learn the basics of writing malicious software.
The idea of AI generating malware has been a growing concern ever since generative models went mainstream. While early fears of "fully autonomous AI hackers" have been largely overblown, cybercriminals have been busy developing their own models — like WormGPT and FraudGPT — to bypass restrictions.
Meanwhile, some hackers are taking the easier route, selling pre-written jailbreak prompts to help criminals manipulate mainstream AI tools like DeepSeek.
The UK’s National Cyber Security Centre has warned on its website that AI could significantly impact cyber threats. Right now, malicious AI-generated code isn’t quite advanced enough to evade detection, but experts believe that could change — especially if state-backed hackers get involved.
For now, DeepSeek isn’t handing out fully functional malware at the click of a button, but the fact that its guardrails can be bypassed at all is still a major cause for concern.