LLMjacking Surge: Hackers Exploit Stolen DeepSeek Access to Bypass Costs and Restrictions
Sophisticated LLMjacking operations have successfully hijacked access to DeepSeek’s AI models just weeks after their public release, marking a new escalation in the unauthorized use of AI resources.

LLMjacking, a close cousin of cryptojacking and proxyjacking, sees cybercriminals exploit stolen credentials to access high-cost large language models (LLMs) such as those from OpenAI and Anthropic, letting them generate images, bypass regional restrictions, and more while offloading the expense onto unsuspecting victims.
Recent findings from cybersecurity firm Sysdig reveal that attackers had integrated access to DeepSeek-V3 into their operations within days of its Dec. 26 release, and picked up DeepSeek-R1 just 24 hours after its Jan. 20 launch.
"This isn't just a passing trend anymore," says Crystal Morin, a cybersecurity strategist at Sysdig. "LLMjacking has evolved far beyond where it was when we first detected it last May."
How LLMjacking Works
Running LLMs at scale can be prohibitively expensive. For instance, continuous use of GPT-4 could cost an account holder over $500,000 annually, according to Sysdig's estimates—though DeepSeek’s models are significantly cheaper.
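The arithmetic behind that figure is easy to reproduce. As a rough, hedged illustration, assume GPT-4-class pricing of about $30 per million input tokens and $60 per million output tokens (illustrative rates, not quoted prices) and a proxy kept busy around the clock:

```python
# Back-of-envelope annual cost of round-the-clock LLM usage.
# All rates and volumes below are illustrative assumptions.
INPUT_PRICE_PER_M = 30.0   # USD per 1M input tokens (assumed)
OUTPUT_PRICE_PER_M = 60.0  # USD per 1M output tokens (assumed)

# Assume ~20 requests per minute, 24/7, each averaging
# 1,000 input and 500 output tokens.
requests_per_year = 20 * 60 * 24 * 365          # ~10.5M requests
input_tokens = requests_per_year * 1_000
output_tokens = requests_per_year * 500

annual_cost = (input_tokens / 1e6) * INPUT_PRICE_PER_M + \
              (output_tokens / 1e6) * OUTPUT_PRICE_PER_M
print(f"~${annual_cost:,.0f} per year")         # ~$630,720 with these numbers
```

Under those assumptions the bill lands around $630,000 a year, comfortably past Sysdig's $500,000 mark.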
To sidestep these costs, hackers steal cloud service credentials or API keys linked to AI platforms. They then run scripts to verify that the stolen credentials grant access to the models they want before incorporating them into OAI reverse proxies (ORPs), tools that obscure the illicit LLM activity.
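That verification step is nothing exotic; it is the same smoke test a developer runs against their own key. A minimal sketch, assuming an OpenAI API key and that platform's public model-listing endpoint (the function name and the requests dependency are illustrative):

```python
import requests

def key_grants_access(api_key: str, model: str = "gpt-4") -> bool:
    """Check whether an API key is valid and can see a given model.

    Uses OpenAI's public /v1/models listing endpoint, the same
    pattern a developer uses to smoke-test their own key.
    """
    resp = requests.get(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    if resp.status_code != 200:   # 401 means an invalid or revoked key
        return False
    models = {m["id"] for m in resp.json().get("data", [])}
    return model in models
```

Defensively, the same check is useful for confirming that a revoked key really is dead.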
ORPs have evolved since their initial creation in April 2023, adding stealth features such as password protection, logging obfuscation, and Cloudflare tunnels that generate temporary domains to hide their true locations. These tools are actively shared in underground forums, including 4chan and Discord communities, where users leverage illicit AI access for NSFW content, malicious scripts, or even school assignments. Additionally, users in countries like China, Iran, and Russia use ORPs to circumvent national bans on ChatGPT.
The Hidden Cost of LLMjacking
ORP developers try to spread usage across many compromised accounts precisely to avoid tripping detection, yet victims can still suffer severe financial consequences.
One ORP monitored by Sysdig leveraged 55 stolen DeepSeek API keys, along with credentials from other AI services, to distribute the load.
Morin recounts an incident in which an AWS user who typically paid around $2 per month for email services saw his bill skyrocket to $730 within a few hours of being LLMjacked. Before he could intervene, the charges had passed $10,000 and might well have climbed past $20,000 had AWS not stepped in to reverse them.
"You can imagine the damage this kind of attack could inflict at an enterprise level," Morin warns. "If one individual can rack up tens of thousands of dollars in unauthorized AI usage, companies need to be extremely vigilant."
With LLMjacking becoming more sophisticated, businesses and individuals are urged to secure their API keys, enable cost monitoring alerts, and implement stronger authentication measures to prevent unauthorized AI access.
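On AWS, that cost-monitoring advice can be as small as a CloudWatch alarm on the account's estimated charges, the kind of guardrail that would have flagged the $730 spike above within hours. A minimal sketch with boto3; the SNS topic ARN and the $50 threshold are placeholders, and billing metrics must first be enabled in the account's billing preferences and queried from us-east-1:

```python
import boto3

# Alarm on AWS's EstimatedCharges billing metric, which is
# published in us-east-1 roughly every six hours.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="billing-spike-guard",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                     # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=50.0,                   # alert once charges pass $50 (placeholder)
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder SNS topic; substitute a real ARN wired to email or paging.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```

Pairing an alarm like this with short-lived, least-privilege API keys closes much of the gap the attacks described above exploit.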