
Over a Quarter (26%) of Uploads to GenAI Tools Contain Sensitive Data – an Increase of More Than Four Percentage Points in Just Three Months

The average organization must also grapple with its employees using 27 distinct AI tools, while 12% of sensitive data exposures come from personal accounts

Organizations are leaking data at an accelerating rate, according to new analysis by Harmonic Security: 26.4% of all file uploads to GenAI tools contain sensitive data, up from the 22% tracked in Q2. The study covered more than three million prompts and file uploads across 300 generative and AI-embedded tools, spanning organizations in the United States and the United Kingdom between July and September 2025.

Over half (57%) of this sensitive data is business or legal in nature, much of it highly confidential: 35% of this category involves contract or policy drafting, and a further 35% consists of M&A and financial forecasting material. A quarter (25%) of all sensitive disclosure is technical data, 65% of which is proprietary source code copied into GenAI tools for debugging or refactoring; the remainder comprises credential and key leaks and security incident reports used to summarize post-mortems. Finally, 15% of the total involves personal or employee data, including everyday identifiers such as names and addresses as well as HR records and payroll details.

A key trend is that employees are blending personal and corporate AI usage: some 12% of all sensitive data exposures come from personal accounts, including free versions of ChatGPT, Gemini, Claude, and Meta AI. These accounts often retain history and context, meaning sensitive business information can persist in personal workspaces indefinitely, regardless of whether the employee still works for the company.

Adding to this challenge, employees at the average organization used 27 distinct AI tools in Q3. There are signs, however, that this may become easier to manage: the number of new GenAI tools introduced by employees fell from 23 in Q2 to 11 in Q3. This could indicate that AI adoption is maturing, with employees integrating AI into core workflows rather than merely testing and experimenting with multiple tools. Another finding from the research supports this: the average enterprise uploaded more than three times as much data to generative AI platforms as in the previous quarter (4.4GB in Q3 versus 1.32GB in Q2).

Alastair Paterson, CEO and co-founder at Harmonic Security, comments: “The challenge has shifted from adoption to control: managing the flow of sensitive information through an ecosystem that blurs the line between company and individual. BYOAI is still a big issue; 12% use of personal accounts is still too high and could proliferate further as newly introduced AI-native browsers take hold. Therefore, governance must occur where work happens, with browser-level controls enabling organizations to apply policy at the point of data loss, not retroactively.”

This report draws on anonymized enterprise data collected between July and September 2025 through Harmonic Protect, which tracks GenAI use in the browser. The data reflects real-world employee activity within security-conscious enterprises using Harmonic’s monitoring solutions. No personally identifiable information or proprietary file contents left customer environments, and all data was aggregated and sanitized before analysis.

About Harmonic

As every employee adopts AI in their work, organizations need control and visibility. Harmonic delivers AI Governance and Control (AIGC), the intelligent control layer that secures and enables the AI-First workforce. By understanding user intent and data context in real time, Harmonic gives security leaders all they need to help their companies innovate at pace.

For more information, visit https://www.harmonic.security/

Contacts