LLM Hijackers Quickly Incorporate DeepSeek API Keys

Following the recent public release of DeepSeek models, sophisticated "LLMjacking" operations have already gained unauthorized access to them.

LLMjacking is a form of cybercrime akin to proxyjacking and cryptojacking: attackers exploit someone else's computing resources, in this case access to large language models (LLMs) from companies such as OpenAI and Anthropic, without authorization, passing the costs on to the legitimate account holders.

Researchers at Sysdig have observed active LLMjacking operations targeting models developed by DeepSeek. Within days of the release of DeepSeek-V3 and DeepSeek-R1, attackers had obtained unauthorized access to both models.

Sysdig cybersecurity strategist Crystal Morin describes the escalation as concerning, noting that LLMjacking activity has increased significantly since the company first identified it in May.

How LLMjacking Works

Using LLMs at scale can be costly. Constant usage of GPT-4, for instance, can run up significant bills, though services like DeepSeek offer more cost-effective alternatives.


To avoid paying these costs themselves, attackers steal credentials for cloud service accounts or API keys associated with LLM applications. They then run scripts to verify that the stolen credentials grant access to the models they want.
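A verification script of this kind can be very simple. The sketch below is illustrative only, not Sysdig's reporting or any real attacker tooling: it probes an OpenAI-compatible API with a cheap, read-only request that succeeds only when the key is valid. The function names and endpoint path are assumptions.

```python
import urllib.error
import urllib.request


def build_probe(api_key: str, base_url: str) -> urllib.request.Request:
    """Build a read-only request that succeeds only with a valid key."""
    return urllib.request.Request(
        f"{base_url}/v1/models",  # listing models is cheap but requires auth
        headers={"Authorization": f"Bearer {api_key}"},
    )


def key_is_valid(api_key: str, base_url: str = "https://api.openai.com") -> bool:
    """Return True if the endpoint accepts the key (HTTP 200)."""
    try:
        with urllib.request.urlopen(build_probe(api_key, base_url), timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        # A 401/403 response, or an unreachable host, means the key is unusable
        return False
```

Run against a list of stolen keys, a loop over `key_is_valid` quickly sorts working credentials from dead ones, which is why exposed API keys are typically abused within minutes of leaking.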

The stolen authentication details are then integrated into an "OAI" reverse proxy (ORP), which brokers connections between end users and the hijacked LLMs.

The original ORP codebase was published in April 2023. Since then, ORPs have gained additional security features, such as password protection and obfuscation mechanisms, to conceal illicit usage. Cloudflare tunnels add a further layer of cover, generating temporary domains that hide the proxies' actual server addresses.

Communities have sprung up around ORPs on platforms like 4chan and Discord, where users exploit illicit LLM access to generate content, including NSFW material and scripts, and to evade national bans on AI services in countries such as Russia, Iran, and China.


The Cost of LLMjacking to Account Holders

Ultimately, the expenses incurred by the unauthorized use of computing resources fall on the account holders.

ORP developers aim to keep per-account costs low so that the anomalous activity goes undetected, load-balancing requests across multiple sets of stolen credentials tied to different accounts.
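The load-balancing idea can be sketched in a few lines. The class and method names below are invented for illustration and are not taken from any real ORP codebase; the point is simply that round-robin rotation spreads usage thinly enough that no single account shows a suspicious spike.

```python
from itertools import cycle


class KeyRotator:
    """Round-robin rotation across a pool of (stolen) API credentials."""

    def __init__(self, keys: list[str]):
        if not keys:
            raise ValueError("need at least one key")
        self._pool = cycle(keys)  # endless iterator over the key pool

    def next_key(self) -> str:
        """Hand out the next credential in round-robin order."""
        return next(self._pool)
```

With, say, dozens of compromised accounts in the pool, each victim absorbs only a fraction of the total traffic, which is exactly what makes the abuse harder to spot in any one billing dashboard.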

Even so, unauthorized usage can saddle victims with substantial bills, as in one case where an individual's AWS bill spiked dramatically due to LLMjacking.

AWS ultimately intervened to assist that victim, but the incident highlights the potential consequences of similar attacks at larger scale.

A tweet from someone whose AWS bill jumped 40,000% in just hours due to LLMjacking