Cisco Warns: Fine-tuning turns LLMs into threat vectors

Weaponized large language models (LLMs) fine-tuned with offensive tradecraft are reshaping cyberattacks, forcing CISOs to rethink their strategies. These models can automate reconnaissance, impersonate identities, and evade real-time detection, accelerating large-scale social engineering attacks.

Models like FraudGPT, GhostGPT, and DarkGPT, available for as little as $75 a month, are designed for attack strategies such as phishing, exploit generation, code obfuscation, vulnerability scanning, and credit card validation.

Cybercriminals are capitalizing on the revenue opportunity by selling platforms and kits and leasing access to weaponized LLMs. These LLMs are packaged and sold much like legitimate SaaS apps, complete with dashboards, APIs, regular updates, and customer support.

The progression of weaponized LLMs is blurring the lines between developer platforms and cybercrime kits. With lease or rental prices dropping, more attackers are experimenting with these platforms, ushering in a new era of AI-driven threats.

Legitimate LLMs in the Crosshairs

The rapid spread of weaponized LLMs also endangers legitimate LLMs, which risk being compromised and folded into cybercriminal toolchains. The more extensively a legitimate LLM is fine-tuned, the more likely it is to be steered into producing harmful outputs.

Cisco’s research shows that fine-tuned LLMs are more likely than base models to produce harmful outputs, because fine-tuning weakens alignment guardrails and leaves the models more exposed to jailbreaks, prompt injection, and model inversion.

Attackers can exploit vulnerabilities in fine-tuned LLMs to poison data, hijack infrastructure, modify agent behavior, and extract training data at scale, turning these models into liabilities rather than assets.

Fine-Tuning LLMs Dismantles Safety Controls at Scale

Cisco’s security researchers tested multiple fine-tuned models across various domains and found that fine-tuning destabilizes alignment, leading to breakdowns in safety controls. Jailbreak attempts against fine-tuned models succeeded more often, especially in sensitive domains like healthcare and law.

While fine-tuning improves task performance, it also broadens the attack surface, making models more vulnerable to exploitation. Fine-tuned models are at a higher risk of jailbreaks and malicious output generation compared to foundation models.
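To make the risk concrete, here is a minimal sketch of the kind of regression check a security team might run after fine-tuning. It assumes a caller-supplied query_fn(model_id, prompt) helper (hypothetical) that wraps whatever inference API serves the base and fine-tuned checkpoints, then replays the same red-team probes against both models and reports how much the refusal rate drops.

```python
from typing import Callable, Sequence

# Minimal sketch of a post-fine-tuning safety regression check. query_fn is a
# hypothetical caller-supplied wrapper around whatever inference API serves
# the base and fine-tuned checkpoints.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "not able to help")

def looks_like_refusal(reply: str) -> bool:
    """Crude heuristic: common refusal phrasings count as a safe response."""
    lowered = reply.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rate(model_id: str, probes: Sequence[str],
                 query_fn: Callable[[str, str], str]) -> float:
    """Fraction of red-team probes the model declines to answer."""
    refusals = sum(looks_like_refusal(query_fn(model_id, p)) for p in probes)
    return refusals / len(probes)

def alignment_drift(base_id: str, tuned_id: str, probes: Sequence[str],
                    query_fn: Callable[[str, str], str]) -> float:
    """Drop in refusal rate after fine-tuning; a positive value suggests the
    tuned model's guardrails have eroded relative to the base model."""
    return (refusal_rate(base_id, probes, query_fn)
            - refusal_rate(tuned_id, probes, query_fn))
```

In practice the refusal heuristic would be replaced by a proper safety classifier, but even this crude comparison surfaces the kind of alignment drift Cisco describes.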


TAP (Tree of Attacks with Pruning) achieves up to 98% jailbreak success, outperforming other methods across open- and closed-source LLMs. Source: Cisco State of AI Security 2025, p. 16.

Malicious LLMs are a $75 Commodity

Cisco Talos has observed the rise of black-market LLMs like GhostGPT, DarkGPT, and FraudGPT, sold for as little as $75/month on the dark web. These LLMs are pre-configured for offensive operations and offer APIs, updates, and dashboards similar to commercial SaaS products.


DarkGPT underground dashboard offers “uncensored intelligence” and subscription-based access for as little as 0.0098 BTC—framing malicious LLMs as consumer-grade SaaS. Source: Cisco State of AI Security 2025, p. 9.

$60 Dataset Poisoning Threatens AI Supply Chains

Attackers can poison the training sets behind AI models for as little as $60, with outsized influence on the downstream LLMs. By re-registering expired domains or timing Wikipedia edits just before dataset snapshots are captured, attackers can inject malicious data into widely used training sets, affecting enterprise LLMs built on open data.
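One way a defender might look for the expired-domain variant of this attack is to audit the URL manifest behind a training snapshot for domains that no longer resolve, since lapsed domains are exactly the ones an attacker can cheaply re-register; already-hijacked domains still resolve, so comparing fetched content against the original crawl's hashes is the complementary check. Below is a minimal sketch using only the Python standard library, with an illustrative manifest file name.

```python
import socket
from urllib.parse import urlparse

# Minimal sketch: flag training-data source URLs whose domains no longer
# resolve. Lapsed domains are candidates for cheap re-registration, which is
# how the expired-domain poisoning described above works. The manifest path
# below is illustrative; a real pipeline would also compare content hashes
# against the original snapshot to catch domains already re-registered.

def domain_resolves(hostname: str) -> bool:
    """Return True if DNS still resolves the hostname."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

def audit_manifest(path: str) -> list[str]:
    """Return source URLs whose domains fail to resolve (poisoning candidates)."""
    flagged = []
    with open(path) as f:
        for line in f:
            url = line.strip()
            if not url:
                continue
            host = urlparse(url).hostname
            if host and not domain_resolves(host):
                flagged.append(url)
    return flagged

if __name__ == "__main__":
    for url in audit_manifest("training_sources.txt"):
        print(f"unresolvable source, review before next snapshot: {url}")
```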

Decomposition attacks can quietly extract copyrighted and regulated content from LLMs, a significant risk for enterprises, especially those in regulated sectors such as healthcare, finance, and law.

Final Word: LLMs aren’t just a tool; they’re the latest attack surface

Cisco’s research highlights the growing sophistication of weaponized LLMs and the price war on the dark web. Security leaders need real-time visibility, stronger adversarial testing, and a streamlined tech stack to protect against the evolving threats posed by fine-tuned LLMs.
