Seven steps to AI supply chain visibility — before a breach forces the issue

According to Gartner, four out of 10 enterprise applications will incorporate task-specific AI agents this year. Yet only 6% of organizations have an advanced AI security strategy in place, according to Stanford University’s 2025 AI Index Report.

Palo Alto Networks predicts that in 2026, executives may face major lawsuits holding them personally accountable for rogue AI actions. With the escalating and unpredictable nature of AI threats, many organizations are struggling to contain these risks. Quick fixes like increased budgets or more staff are not sufficient to address the governance challenges posed by AI.

There is a significant visibility gap concerning the usage and modification of AI models, with one CISO describing model SBOMs as the “Wild West” of governance. Without proper visibility, AI security becomes a guessing game, making incident response extremely challenging.

The U.S. government has mandated SBOMs for software acquisitions in recent years, but AI models demand even closer scrutiny. Without matching progress on visibility and governance for AI models, organizations carry substantial risk.

The importance of AI visibility

A recent survey conducted by Harness revealed that 62% of security practitioners have no insight into the usage of LLMs across their organizations. There is a pressing need for more transparency and rigor at the SBOM level to enhance model traceability, data integration, and usage patterns.

Enterprises are facing rising levels of prompt injection, vulnerable LLM code, and jailbreaking, all of which adversaries exploit to compromise AI models. Traditional cybersecurity software often fails to detect these intrusion attempts, which rely on techniques that legacy systems were never built to trace.

IBM’s 2025 Cost of a Data Breach Report found that 13% of organizations experienced breaches of AI models or applications last year, and that 97% of those breached lacked proper AI access controls. Shadow AI incidents in particular cost organizations significantly more than traditional breaches, underscoring the urgency of closing these gaps.

The challenges with SBOMs

While Executive Order 14028 and NIST’s AI Risk Management Framework call for AI-specific BOMs, traditional software SBOMs are not equipped to capture the unique risks associated with AI models. Model dependencies are dynamic and constantly evolving, presenting challenges in tracking model versions and ensuring security.

AI models saved in pickle format carry an inherent security risk: pickle deserialization can execute arbitrary code on load. SafeTensors offers a safer alternative, storing only numerical tensor data and metadata with no executable code. However, migrating to SafeTensors takes effort and can leave behind legacy models that exist only in pickle format.
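To see why pickled model files are dangerous, consider this minimal sketch using only the standard library: pickle invokes an object's `__reduce__` hook during deserialization, so loading an untrusted file is equivalent to running the author's code. The payload here is deliberately harmless (a trivial `eval`), but an attacker could substitute any callable.

```python
import pickle

# Illustrative only: pickle calls __reduce__ when deserializing,
# so the tuple it returns (callable + args) is executed on load.
class Malicious:
    def __reduce__(self):
        # A harmless stand-in for an attacker's payload.
        return (eval, ("1 + 1",))

blob = pickle.dumps(Malicious())
result = pickle.loads(blob)  # eval("1 + 1") runs during load
print(result)
```

This is precisely the behavior SafeTensors avoids: the format holds only tensor bytes and a JSON header, so there is nothing for the loader to execute.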

Despite the availability of standards like CycloneDX and SPDX for ML-BOMs, adoption remains low. Organizations need to prioritize the governance of AI models to mitigate security risks and ensure compliance.
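As a sketch of what adoption looks like in practice, the snippet below assembles a minimal CycloneDX-style ML-BOM record using the `machine-learning-model` component type introduced in CycloneDX 1.5. The model name, version, and digest are hypothetical placeholders, not real inventory data.

```python
import hashlib
import json

# Hypothetical digest of a model file, for illustration only.
digest = hashlib.sha256(b"example-weights").hexdigest()

# Minimal CycloneDX-style ML-BOM document (component fields follow
# the spec's machine-learning-model type; values are illustrative).
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "sentiment-classifier",  # hypothetical model
            "version": "2.3.0",
            "hashes": [{"alg": "SHA-256", "content": digest}],
        }
    ],
}

print(json.dumps(ml_bom, indent=2))
```

Even a record this small answers the two questions incident responders ask first: which exact model artifact is deployed, and does its hash match what was approved.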

A recent survey found that many security professionals acknowledge falling behind on SBOM requirements, with ML-BOM adoption lagging even further.

Key takeaway: While the necessary tools for AI governance exist, organizations need to prioritize operational urgency to secure their AI supply chains.

Enhancing AI supply chain visibility

Preparing for AI supply chain incidents requires organizations to build a model inventory and define processes to keep it current. It is essential to proactively manage and redirect shadow AI use to secure platforms, mandate human approval for production models, and consider implementing SafeTensors for new deployments.
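A model inventory does not need to start as a product; it can start as a record per artifact. The sketch below (all names and values are illustrative) captures the minimum that makes incident response tractable: a content digest, the model's source, and the human who approved it.

```python
import hashlib
import pathlib
import tempfile
from datetime import datetime, timezone

def inventory_entry(model_path: str, source: str, approved_by: str) -> dict:
    """Build one inventory record: file digest, provenance, approver."""
    data = pathlib.Path(model_path).read_bytes()
    return {
        "file": model_path,
        "sha256": hashlib.sha256(data).hexdigest(),
        "source": source,
        "approved_by": approved_by,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Usage with a placeholder model file (illustrative data only).
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".safetensors")
tmp.write(b"example-weights")
tmp.close()
entry = inventory_entry(tmp.name, "internal-registry", "jdoe")
print(entry["sha256"])
```

Recording the approver alongside the digest operationalizes the human-approval mandate: every production model maps to a person who signed off on it.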

Piloting ML-BOMs for high-risk models, treating every model pull as a supply chain decision, and adding AI governance to vendor contracts can enhance AI supply chain visibility. While AI-BOMs serve as valuable compliance and visibility tools, they are not a substitute for runtime security measures.
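Treating every model pull as a supply chain decision can be as simple as a digest gate: before a downloaded artifact is used, its hash is checked against an allowlist of approved models. The sketch below assumes a hypothetical allowlist keyed by model name; real deployments would source it from the model inventory.

```python
import hashlib

# Hypothetical allowlist of approved model digests, keyed by name.
APPROVED_DIGESTS = {
    "sentiment-classifier": hashlib.sha256(b"example-weights").hexdigest(),
}

def approve_model_pull(name: str, blob: bytes) -> bool:
    """Return True only if the pulled bytes match the approved digest."""
    digest = hashlib.sha256(blob).hexdigest()
    return APPROVED_DIGESTS.get(name) == digest

ok_good = approve_model_pull("sentiment-classifier", b"example-weights")
ok_bad = approve_model_pull("sentiment-classifier", b"tampered-weights")
print(ok_good, ok_bad)
```

This catches both tampered artifacts and unapproved models, but it is a visibility and provenance control, not runtime security: a dutifully hashed model can still be manipulated through prompt injection once deployed.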

The evolving threat landscape

The software supply chain is seeing a surge of new AI models, and with it a marked rise in malicious ones. As the attack surface expands, organizations need to monitor and secure their AI supply chains with matching vigilance.

Addressing security challenges in AI models requires a strategic approach that includes building visibility, implementing advanced security measures, and fostering collaboration between security teams and vendors. By prioritizing AI supply chain security, organizations can mitigate risks and safeguard their AI investments.

Conclusion: As AI governance becomes a boardroom priority, organizations must take proactive measures to secure their AI models and supply chains. Compliance with regulations and standards, along with a focus on transparency and visibility, will be critical in addressing the evolving security threats in the AI landscape.