50% of workers use unapproved AI tools, according to a study on the rise of shadow AI.
An October 2024 study by Software AG found that half of all employees use shadow AI tools at work, and most would keep using them even if their employer banned them.

The issue arises because AI tools are easy to access and many workplaces encourage employees to work more efficiently with AI. As a result, employees seek out their own AI tools to boost their productivity and advance their careers.
Michael Marriott of Harmonic Security says that for many workers, using AI on the job feels natural. Tasks like taking meeting notes, drafting emails, reviewing code, or creating content go faster with AI. If official company tools are hard to access or overly restricted, employees turn to alternatives they can find quickly online.
Most of the time, workers aren't trying to break the rules; they simply want to do their jobs better. Because unapproved AI use may be against policy, many don't tell their managers. That silence leaves companies in the dark about how widespread shadow AI use is and what risks it introduces.
Harmonic examined 176,460 AI interactions from 8,000 users at client companies in early 2025. The sample covers only browser-based use, not all shadow AI activity, but it offers insight into employees' usage patterns.
ChatGPT is the most popular AI tool among workers. About 45% of data submissions come through personal accounts such as Gmail, which suggests employees prioritize convenience over company policy. Image files make up the majority of uploads to ChatGPT, at 68.3%.
The study's main focus is the risks of shadow AI use, not merely which tools employees reach for.
For instance, more employees are using AI models from China, such as DeepSeek and Baidu Chat; seven percent of workers have adopted these tools. Data submitted to Chinese AI services could potentially be accessed by the Chinese government for its own purposes.
From late 2024 to early 2025, the share of interactions involving sensitive data dipped slightly, from 8.5% to 6.7%, but the mix of risky data shifted. Exposure of customer data fell from 45.8% to 27.8%, employee information from 26.8% to 14.3%, and security-related data from 6.9% to 2.1%. By contrast, legal and financial data rose from 14.9% to 30.8%, and sensitive code from 5.6% to 10.1%. Harmonic began tracking personally identifiable information (PII) in early 2025, at a rate of 14.9%.
Most data goes to ChatGPT (79.1%), with 21% of it flowing through ChatGPT's free tier, where prompts can be stored and used for AI training. Google Gemini and Perplexity follow in popularity.
Harmonic's analysis for early 2025 concludes that companies should not just passively observe shadow AI use; they need to actively manage it. The goal is not to stifle employees' creativity with AI but to ensure, through proper training and guidance, that it is used securely and wisely.
"This is not a minor issue," says Marriott. "It’s common and growing. It’s taking place in nearly every company, regardless of whether there’s an official AI policy."