Shadow AI Is Everywhere: How to Stay Safe When Using AI at Work
If you haven’t seen a coworker paste a confidential spreadsheet into a free chatbot, you probably will soon. Unapproved AI tools are quietly spreading through offices, often without a word to IT. This phenomenon, sometimes called shadow AI, is the new shadow IT — and boards are already behind, according to a recent story from CX Today. The article notes that executives are only now beginning to mandate governance for AI tool usage, while employees have been experimenting for months.
What Happened
Shadow AI refers to the use of artificial intelligence tools inside a company without official approval or oversight. It mirrors the older problem of shadow IT, where employees adopted cloud apps like Dropbox or Slack without telling their IT department. But AI amplifies the risk because the tools are free, easy to start using, and often require only a web browser and an email address.
The scale is hard to ignore. A 2025 Gartner survey found that 60% of employees use AI tools without IT approval. Many of these workers are not being careless — they are trying to be productive. They might use a large language model to draft an email, summarize a meeting transcript, or clean up a dataset. The problem is that data entered into these free tools may be retained or used for model training, which can amount to a leak of proprietary information the company never intended to share.
Why It Matters
The privacy and security risks are real. When an employee pastes client contact lists, internal financials, or product roadmaps into a public AI service, that data can become part of the model's training set. Even when a tool promises not to store inputs, there have been reports of user data surfacing in later outputs shown to other users. Regulatory compliance also suffers: medical or legal information entered into a non-compliant tool can trigger violations of HIPAA, GDPR, or other rules.
The most surprising risk may be reputational. A few well-known examples have already made headlines: Samsung employees accidentally leaked source code and meeting notes through ChatGPT in 2023. Similar incidents are likely happening behind closed doors at many companies, unreported.
What You Can Do
Whether you are a remote worker, a team lead, or an IT professional, there are practical steps you can take.
Spot the Shadow AI in Your Organization
- Look for unusual browser tabs or bookmarks for free AI tools.
- Notice coworkers asking for help accessing a service that IT hasn’t officially approved.
- Check if your company has a clear policy on AI usage. If it doesn’t, that’s a strong sign shadow AI is active.
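For IT teams, the first step above can be partly automated. The sketch below scans web proxy log lines for requests to well-known consumer AI domains; the log format, field positions, and domain list are illustrative assumptions, not a vetted blocklist, so adapt them to your own proxy's output.

```python
# Illustrative sketch: flag proxy-log requests to known consumer AI domains.
# The domain list and the log line format are assumptions for this example.

AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai", "perplexity.ai"}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests that hit an AI domain."""
    hits = []
    for line in log_lines:
        # Assumed format: "<timestamp> <user> <domain> <path>"
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_DOMAINS:
            hits.append((parts[1], parts[2]))
    return hits

sample_log = [
    "2025-06-01T09:14 alice chat.openai.com /c/new",
    "2025-06-01T09:15 bob intranet.example.com /wiki",
    "2025-06-01T09:16 carol claude.ai /chat",
]
print(flag_shadow_ai(sample_log))
```

In practice this would run over exported logs on a schedule, and a hit should trigger a conversation about safer alternatives rather than a reprimand.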
Evaluate AI Tools Before You Use Them
- Ask: Does the tool keep your data private? Look for terms of service that say “we do not train on your inputs.”
- Check whether the tool offers enterprise-level controls like data deletion, access logs, and compliance certifications.
- If in doubt, ask your IT department for their recommended list. Many companies now have approved AI tools that meet security requirements.
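The evaluation checklist above can be captured as a simple gate before a tool is approved. This is only a sketch: the feature names and the set of required controls are assumptions chosen to mirror the bullets, not an industry standard.

```python
# Illustrative sketch: check a tool's declared controls against a required set.
# Feature names ("no_training_on_inputs", etc.) are assumptions for this example.

REQUIRED_CONTROLS = {"no_training_on_inputs", "data_deletion", "access_logs"}

def is_enterprise_ready(tool):
    """True only if the tool declares every required data-protection control."""
    declared = set(tool.get("features", []))
    return REQUIRED_CONTROLS <= declared

candidate = {
    "name": "ExampleChat",  # hypothetical tool, not a real product
    "features": ["no_training_on_inputs", "data_deletion", "access_logs"],
}
print(is_enterprise_ready(candidate))
```

A real evaluation would also weigh compliance certifications and contractual terms, which can't be reduced to a feature list — but even a crude gate like this keeps the checklist from being skipped.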
Advocate for a Clear Company Policy
- Suggest that your team or department draft a short list of acceptable uses for AI. Simple rules can go a long way.
- Propose a policy that allows experimentation in sandboxed environments, where sensitive data is never entered.
- Encourage IT to publish a single page of guidance rather than a dense document no one reads.
Balancing innovation with security is possible. The goal isn’t to ban AI tools — that approach usually just drives them further underground. Instead, create a culture where employees feel safe reporting what they use and where IT can offer safer alternatives.
Sources
- “Shadow AI Is the New Shadow IT – And Boards Are Already Behind,” CX Today, May 2026. (Referenced news story)
- Gartner survey, 2025: 60% of employees use AI tools without IT approval (as cited in the CX Today article and other industry reports).