Generative AI traffic surged 890% in 2024, marking its shift from novelty to necessity. Your teams now juggle an average of 66 GenAI apps, with six classified as high-risk.

Writing assistants dominate the landscape at 34% of all GenAI transactions. Grammarly alone accounts for 39% of that writing-assistant activity, highlighting how deeply these tools embed themselves in daily workflows.

The rise of shadow AI
Employees adopt GenAI tools without IT approval, creating dangerous blind spots. Each unsanctioned app becomes a potential breach point where sensitive data leaks into unknown systems.

Shadow AI multiplies your risk exponentially. Traditional security tools can’t track these unauthorised applications, leaving your intellectual property exposed to third-party platforms.
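
One practical starting point is mining the logs you already collect. The sketch below is a minimal illustration, assuming a simple CSV proxy log (timestamp, user, destination_host) and a hypothetical, non-exhaustive list of GenAI domains; it tallies which users are reaching which AI services so security teams can spot unsanctioned adoption.

```python
# Minimal sketch: surfacing shadow AI usage from web proxy logs.
# The domain list and log format are illustrative assumptions, not a
# complete inventory of GenAI services.
import csv
from collections import Counter

# Hypothetical sample of GenAI-related domains to watch for.
GENAI_DOMAINS = {
    "chat.openai.com",
    "grammarly.com",
    "tinywow.com",
    "claude.ai",
}

def find_shadow_ai(proxy_log_path: str) -> Counter:
    """Count requests to GenAI domains per user from a CSV proxy log
    with columns: timestamp, user, destination_host."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            # Match the domain itself or any of its subdomains.
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_shadow_ai("proxy.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

A report like this won't block anything on its own, but it turns an invisible problem into a ranked list you can act on.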

Data loss incidents more than double
GenAI-related data loss prevention (DLP) incidents increased 2.5 times in early 2025. These incidents now account for 14% of all DLP alerts across enterprise environments.

Free tools like TinyWow show the highest block rates at 36%. Their unrestricted accessibility attracts users, but they lack the security controls your organisation needs.

Your industry determines your exposure
Mining companies face the highest risk, with 10 high-risk apps on average. Insurance and professional services follow closely with 8.5 and 8.3, respectively.

Technology and manufacturing sectors lean heavily on AI coding tools. These industries account for 39% of all coding assistant transactions, accelerating development but amplifying security concerns.

The writing assistant vulnerability
Research reveals that over 70% of tested writing assistants fall victim to jailbreaking attacks. Compromised tools generate harmful content, from self-harm instructions to weapons manufacturing details.

Your employees paste confidential information into these platforms daily. Without proper safeguards, trade secrets and customer data flow directly to third-party servers.
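
A lightweight guardrail is to screen text before it leaves the corporate boundary. The sketch below is an illustrative pre-submission check, not a production DLP ruleset: the regular expressions and labels are assumptions chosen for the example, and they redact a few common sensitive patterns before text is submitted to an external tool.

```python
# Minimal sketch of a pre-submission DLP check: redact common sensitive
# patterns before text is pasted into an external GenAI tool. The
# patterns below are illustrative, not a production-grade ruleset.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    sample = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
    print(redact(sample))
```

Even a simple filter like this changes the default from "everything flows out" to "sensitive fields are stripped unless someone deliberately overrides it".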

Building your defence strategy
Palo Alto Networks’ AI Access Security provides real-time visibility into generative AI adoption patterns. The platform enforces access controls and prevents sensitive data from reaching unauthorised applications.

Zero-trust architectures become essential as AI agents gain autonomy. In part two, we explore how modern security frameworks can protect against emerging AI threats.