For many years, security leaders have treated artificial intelligence as an “emerging” technology: something to monitor, but not yet a top priority. A recent Enterprise AI and SaaS Data Security Report by LayerX, an AI & Browser Security company, shows how outdated that mindset has become. According to the report, AI is now the largest uncontrolled channel for corporate data exfiltration, surpassing shadow SaaS and unmanaged file sharing.
Based on real-world enterprise browsing telemetry, the report reveals an uncomfortable truth: AI risk in the enterprise is not a future concern but a present-day challenge embedded in everyday workflows. Sensitive data is already flowing into tools like ChatGPT, Claude, and Copilot at alarming rates, mostly through unmanaged personal accounts and invisible copy/paste channels. Traditional DLP tools, built for sanctioned, file-based environments, were never designed to see this traffic.
Transitioning from “Emerging” to Essential
In just two years, AI tools have achieved adoption levels that took email and online meetings decades to reach. Nearly half of enterprise employees (45%) are already using generative AI tools, with ChatGPT alone having a 43% penetration rate. AI usage accounts for 11% of all enterprise application activity, comparable to file-sharing and office productivity apps.
The problem is that governance has not kept pace with this growth. Two-thirds of AI usage (67%) occurs through unmanaged personal accounts, leaving Chief Information Security Officers (CISOs) with no visibility into who is using these tools or where corporate data is being sent.
The Prevalence of Sensitive Data Leakage
Among the report’s most alarming findings is how much sensitive data is already entering AI platforms: 40% of files uploaded to GenAI tools contain personally identifiable information (PII) or payment card industry (PCI) data, and nearly 40% of those uploads come from personal accounts.
The primary leakage channel, however, is not file upload but copy/paste: 77% of employees paste data into AI tools, and 82% of that activity originates from unmanaged accounts. On average, an employee performs 14 pastes per day through personal accounts, at least three of which contain sensitive data.
Addressing the Governance Gap
The report urges organizations to treat AI security as a first-order priority and to build governance that monitors uploads, prompts, and copy/paste flows. It also recommends shifting from file-centric to action-centric Data Loss Prevention (DLP), since data now leaves not only through file uploads but also through file-less actions such as pasting into a chat.
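To make the file-centric versus action-centric distinction concrete, here is a minimal sketch of what paste-level inspection could look like in a browser extension content script. It is illustrative only, not LayerX’s implementation: the GenAI domain list and the PII/PCI regex patterns below are assumptions chosen for the example, and a real DLP engine would use far richer detectors and policy logic.

```typescript
// Minimal sketch of action-centric DLP at the browser layer.
// Assumptions: the domain list and patterns below are illustrative.

// Hosts treated as GenAI destinations (hypothetical, incomplete list).
const GENAI_HOSTS = ["chatgpt.com", "claude.ai", "copilot.microsoft.com"];

// Simple sensitive-data indicators: an email address and a 13-16 digit
// card-like number. Production detectors add checksums and context.
const SENSITIVE_PATTERNS: RegExp[] = [
  /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/, // email address
  /\b(?:\d[ -]?){13,16}\b/,                          // card-like number
];

function isGenAiHost(host: string): boolean {
  return GENAI_HOSTS.some((d) => host === d || host.endsWith("." + d));
}

function looksSensitive(text: string): boolean {
  return SENSITIVE_PATTERNS.some((re) => re.test(text));
}

// Inspect the paste *action* itself, rather than files at rest: this is
// the shift from file-centric to action-centric DLP described above.
document.addEventListener("paste", (event: ClipboardEvent) => {
  if (!isGenAiHost(window.location.hostname)) return;
  const pasted = event.clipboardData?.getData("text") ?? "";
  if (looksSensitive(pasted)) {
    // Policy decision point: log, warn the user, or block outright.
    event.preventDefault(); // this sketch blocks the paste
    console.warn("Paste into GenAI tool blocked: possible PII/PCI detected");
  }
});
```

The design point is that the control hooks the user action at the moment it happens, in the browser, which is traffic a file-scanning DLP deployment never sees.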
Restricting unmanaged accounts and enforcing single sign-on (SSO) federation across platforms are equally important for regaining visibility and control over data flows. High-risk categories such as AI, chat, and file storage warrant the tightest controls, given both their adoption rates and the sensitivity of the data they handle.
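One way to approximate a “managed accounts only” rule, again as an illustrative sketch rather than a prescribed mechanism: gate GenAI sign-ins on whether the identity belongs to a corporate, federated domain. The domain list here is a placeholder assumption.

```typescript
// Sketch of an unmanaged-account gate: allow GenAI sign-ins only for
// identities on corporate, SSO-federated domains. The domains and the
// decision logic are illustrative assumptions.

const FEDERATED_DOMAINS = new Set(["example.com", "corp.example.com"]);

type Verdict = { allowed: boolean; reason: string };

function checkAccount(email: string): Verdict {
  const at = email.lastIndexOf("@");
  if (at < 0) return { allowed: false, reason: "not an email identity" };
  const domain = email.slice(at + 1).toLowerCase();
  return FEDERATED_DOMAINS.has(domain)
    ? { allowed: true, reason: "federated corporate domain" }
    : { allowed: false, reason: `unmanaged domain: ${domain}` };
}

// A personal sign-in is flagged; a federated one passes.
console.log(checkAccount("analyst@example.com")); // allowed
console.log(checkAccount("analyst@gmail.com"));   // blocked as unmanaged
```

In practice this decision would be enforced by an identity provider or browser security layer rather than application code, but the policy itself reduces to this kind of allow/deny check.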
Conclusion
The report’s findings underscore the need for security teams to rethink their approach to AI. Continuing to treat AI as merely “emerging” is no longer viable: it is already deeply embedded in workflows, carrying sensitive data, and serving as a primary avenue for corporate data loss.
As AI becomes still more prevalent across the enterprise, security leaders must adapt their strategies accordingly. By treating AI security as a core enterprise category and adopting action-centric DLP policies, organizations can mitigate the risks of AI-driven workflows and keep sensitive data from leaking into unmanaged tools.
For more insights and recommendations on securing AI and SaaS data within the enterprise, refer to the full report from LayerX. It offers a comprehensive analysis of data leakage trends, blind spots, and actionable steps for enhancing data security in the AI era.