
AI-powered attacks are putting enterprise security teams at a disadvantage due to a shift in the threat model, rather than weak defenses. As AI agents become more prevalent, attackers are exploiting vulnerabilities at runtime, with breakout times in seconds and patch windows in hours, leaving traditional security measures lacking in visibility and control.
CrowdStrike’s 2025 Global Threat Report highlights breakout times as fast as 51 seconds, with attackers advancing to lateral movement before most security teams can respond. The report also reveals that 79% of detections involve malware-free tactics, bypassing traditional endpoint defenses.
The New Challenge for CISOs: Rapid Reverse-Engineering
AI has accelerated the time between patch release and exploitation, with threat actors reverse-engineering patches within 72 hours. Mike Riemer, field CISO at Ivanti, emphasizes the need for quick patching to prevent exploitation, as AI has significantly enhanced attackers’ speed.
Many enterprises struggle to patch vulnerabilities promptly, often prioritizing other urgent tasks over security.
The Failure of Traditional Security at Runtime
While security teams excel at blocking known threats like SQL injections, new attack vectors that evade traditional defenses are emerging. Adversaries are leveraging semantic attacks that do not resemble traditional malware, posing a significant challenge to security measures.
According to Gartner, businesses are increasingly adopting generative AI despite security concerns, with most technologists willing to bypass cybersecurity protocols to meet business objectives. This shift introduces new risks that traditional security measures are ill-equipped to handle.
As attackers leverage AI to launch sophisticated attacks, defenders need to embrace AI technologies for detection and prevention, moving beyond traditional methods.
Carter Rees, VP of AI at Reputation, highlights the inadequacy of deterministic rules against the stochastic nature of AI attacks, calling for more advanced defensive strategies.
11 Attack Vectors Evading Traditional Security Controls
The OWASP Top 10 for LLM Applications 2025 identifies prompt injection as a primary threat, among other vectors that challenge security leaders and AI developers. Each vector requires a unique approach to understanding and mitigating the associated risks.
1. Direct prompt injection: Models can be manipulated to prioritize user commands over safety protocols, leading to successful attacks in a matter of seconds.
Defense: Intent classification and output filtering can help identify and block malicious prompts.
2. Camouflage attacks: Attackers embed harmful requests within benign conversations to deceive models, requiring context-aware analysis for detection.
Defense: Cumulative intent evaluation and context tracking can help identify disguised attacks.
3. Multi-turn crescendo attacks: Distributing malicious payloads across multiple turns to evade detection, necessitating stateful context tracking for defense.
Defense: Monitoring conversation history and detecting escalation patterns can help thwart multi-turn attacks.
4. Indirect prompt injection (RAG poisoning): An elusive attack strategy targeting RAG architectures, requiring proactive measures to prevent exploitation.
Defense: Data delimiting and token stripping can help mitigate RAG poisoning attacks.
5. Obfuscation attacks: Utilizing encoding techniques to bypass keyword filters, posing a challenge for traditional security measures.
Defense: Normalization layers can decode encoded instructions for analysis and detection.
… (Content continues)



