
Studies have revealed how underground, black market AI chatbots are allowing malicious actors to earn thousands of dollars every month with little effort required on their part.
Artificial intelligence technology has been a financial miracle for those powering the revolutionary push. OpenAI, which created ChatGPT, has seen its valuation climb beyond $100 billion despite being a nonprofit, and companies like Nvidia have become juggernauts through the unrelenting demand for AI.
While startups like DeepSeek have rocked the boat by proving you don't need vast amounts of computing power or money to create a successful AI model, studies have also shown that there's plenty of money to be made underground through black market chatbots built for malicious ends.
As reported by Fast Company, illicit large language models, otherwise known as LLMs, can make upwards of $28,000 in two months from sales on the black market, and allow those who purchase them to make far more through illegal means.

One study posted on arXiv outlines this clearly, examining how LLMs that are either built on open-source technology or jailbroken from mainstream models give users the ability to conjure phishing emails or write code used in malware.
Their popularity and desirability among scammers comes from the fact that mainstream AI models like ChatGPT place restrictions on what their users can request, whereas these black market options will comply with just about any request.
Examples include DarkGPT, which costs 78¢ per 50 messages; Escape GPT, which charges users $64.98 per month on a subscription model; and WolfGPT, which has a $150 flat fee granting lifetime access.
These tools let users create phishing emails up to 96% faster than other methods, and can produce working code roughly two-thirds of the time for malware that evades antivirus software.
This poses a major cybersecurity conundrum, as it dramatically increases access to tools that help extort money from innocent individuals, lowering the skill and cost required to create effective schemes.

There have already been a number of incidents where scammers have used AI chatbots and generative AI to trick people into falling in love with fake personas and handing over thousands in cash, including one scheme that conjured up a fake Brad Pitt, but these malicious LLMs take things to the next level.
It reiterates the dangers that AI can create in unrestricted environments, which XiaoFeng Wang - one of the authors of the arXiv study - describes as "almost inevitable," adding that "every technology always comes with two sides."
Wang added that "we can develop technologies and provide insights to help" the fight against malicious LLMs, "but we can't do anything about stopping these things completely because we don't have the resources."
This adds to the concerns many have about the legal side of AI too, as even the 'godfather of AI' Geoffrey Hinton has warned that the technology creates "fertile ground for fascism" by dramatically increasing the wealth gap.
That wealth gap extends even into the criminal world, where AI makes malicious individuals' work easier, and where there's also profit to be made by those selling the unrestricted LLM software.