How AI Tools Are Eroding Your Privacy — and What You Can Do About It
Intro
More apps and devices now include AI features that make life easier—voice assistants, auto-generated replies, smart photo organization, chatbot help. But each of these conveniences comes with a hidden trade-off: your personal data is being collected, analyzed, and often shared in ways that are not transparent. A recent piece on Heather Parry’s Substack highlighted how quickly this erosion of privacy is happening, and it’s a conversation every internet user should pay attention to. This article walks through what’s actually going on, why it matters, and how you can take back some control.
What happened
AI systems require large amounts of data to function. When you use a voice assistant like Siri or Alexa, your speech is recorded and sent to a server. When you chat with a customer service bot, the entire conversation is stored. When you use a “smart” photo app, it scans your images for faces, locations, and objects. Many companies also use your behavior—what you click, how long you pause, which settings you change—to train their AI models.
In 2023 and 2024, several reports revealed that employees and contractors at AI companies were reviewing user conversations, sometimes without explicit consent. Services like ChatGPT, Google's Gemini (formerly Bard), and Microsoft Copilot record your prompts and the context around them. Even apps that appear to work offline may send data to the cloud. The Heather Parry Substack article (published April 2026) notes that as AI becomes embedded in more everyday tools, the line between helpful and invasive blurs further. What's new is not that data is being collected; it's the scale of collection and the use of that data to build detailed profiles of users.
Why it matters
Privacy risks from AI are not abstract. Unexpected sharing occurs when your data is passed to third-party analytics, advertising partners, or even used to train a model that is later released publicly. Profiling is another concern: your conversations, voice patterns, and preferences can be used to infer sensitive details like political views, health conditions, or financial situation. Data leaks are also a real possibility; cloud-stored AI training data has been breached before.
Moreover, once your information is in a training set, it is almost impossible to remove, and opting out after the fact is rarely effective. Many privacy policies are also written in such vague terms that users do not grasp the extent of the collection until it has already happened. This matters because the choices you make today about which AI tools you allow into your life have lasting consequences for your digital footprint.
What readers can do
You don’t need to avoid AI entirely to protect your privacy. Here are concrete steps you can take right now:
Disable AI training in your settings. Most AI services let you opt out of having your data used for model training, though the exact setting names change often. In ChatGPT, go to Settings → Data Controls and turn off "Improve the model for everyone." For Google Assistant, you can delete voice recordings and disable audio collection in your Google Account's activity controls (the old "Voice & Audio Activity" setting has been folded into "Web & App Activity"). For Microsoft Copilot, review your Microsoft privacy dashboard and disable optional diagnostic data. Do this for every service you use.
Use local AI when possible. Some AI tools run entirely on your device and do not send data to the cloud. For example, Apple’s on-device Siri processing (available on recent iPhones), or local models like Llama or Mistral that you can run on your computer. For text processing, consider using offline OCR or keyboard apps that don’t require an internet connection.
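A related habit in the same local-first spirit: scrub obvious personal details from text on your own machine before pasting it into any cloud chatbot. The sketch below is a toy illustration using only Python's standard library; the two regex patterns are illustrative placeholders, not a complete PII detector.

```python
import re

# Illustrative patterns only -- a real redaction tool would cover far more
# formats (international phone numbers, addresses, account numbers, etc.).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a [LABEL] placeholder, entirely offline."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

note = "Call me at 555-867-5309 or email jane.doe@example.com."
print(redact(note))
# Call me at [PHONE] or email [EMAIL].
```

Nothing in this script touches the network, which is the point: whatever you send to a cloud service afterward no longer contains the redacted details.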
Check privacy policies before signing up. Before you install a new app or enable an AI feature, skim the privacy policy for phrases like “train our models,” “share with third parties,” or “retain your data indefinitely.” If the policy is unclear or too broad, consider alternatives. Websites like Terms of Service; Didn’t Read (tosdr.org) provide simplified ratings.
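If skimming a long policy feels tedious, a few lines of code can flag the worrying phrases for you. This is a minimal sketch; the phrase list is an assumption based on the examples above, and you should extend it with your own red flags.

```python
# Phrases worth flagging in a privacy policy; extend this list as needed.
RED_FLAGS = [
    "train our models",
    "share with third parties",
    "retain your data indefinitely",
]

def scan_policy(text: str) -> list[str]:
    """Return every red-flag phrase that appears in the policy text."""
    lowered = text.lower()
    return [phrase for phrase in RED_FLAGS if phrase in lowered]

policy = """We may use your conversations to train our models and
may share with third parties for analytics purposes."""
print(scan_policy(policy))
# ['train our models', 'share with third parties']
```

A substring match like this is crude (it misses reworded clauses), but it is a fast first pass before you decide whether the full policy deserves a careful read.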
Compare AI tools by privacy friendliness. Not all AI services are equal. For chat, DuckDuckGo's AI Chat anonymizes your requests before passing them to the underlying model, so they are not tied to your account. For writing assistance, consider an offline tool like LocalGPT. For photo organization, look for apps that process images on-device rather than in the cloud. A quick search for "privacy-focused AI" will surface options that are often good enough for everyday use.
Limit permissions. On your phone, go to app permissions and revoke microphone, camera, and file access for AI apps unless they absolutely need it. Use a browser extension like Privacy Badger to block tracking scripts from AI-powered widgets.
Sources
- Heather Parry, “AI’s erosion of privacy,” Substack (April 2026).
- Various official privacy settings documentation from OpenAI, Google, Apple, and Microsoft (accessed 2026).