How AI Is Quietly Eroding Your Privacy — and What You Can Do Right Now
Intro
Every time you ask a smart assistant for the weather, let your phone auto-suggest a reply, or upload a photo to a cloud service, an AI model is processing your data. Many people assume this is harmless — just a quick computation, nothing saved. The reality is more complicated. Voice recordings, facial images, and even the context of your conversations can be retained, analyzed, and used to train the very systems you rely on.
This isn’t a paranoid fantasy. Recent discussions, including Heather Parry’s Substack piece on AI’s erosion of privacy, have brought new attention to just how much personal information these tools quietly collect. And with companies like OpenAI, Meta, and Google updating their policies to feed more user data into their models, the issue is timely.
The good news: you can take concrete steps to limit what gets harvested without abandoning convenience entirely.
What happened
Heather Parry’s recent Substack article, AI’s erosion of privacy, highlights how generative AI tools — from chatbots to photo organizers — often blur the line between temporary processing and permanent data storage. The piece is part of a growing chorus of consumer advocates pointing out that many free AI services are funded by the data you provide.
Meanwhile, regulatory changes in 2025 and 2026 have forced some companies to be more transparent. For example, OpenAI now allows users to opt out of having their ChatGPT conversations used for training. Google Assistant and Meta’s smart glasses have introduced clearer data retention policies. Yet the default settings still tilt toward maximum collection, and most people never change them.
Why it matters
When an AI tool retains your voice commands, it doesn’t just keep the words — it may hold a unique vocal signature. Facial recognition in photo apps creates a biometric profile. Even your browsing habits, when fed into recommendation engines, build a detailed portrait of your preferences, health concerns, and political leanings.
This data can be leaked, sold, or subpoenaed. It can also be used to train models that later produce eerily accurate predictions about your life. The convenience of a smart reply or a personalized playlist comes with a quiet trade-off: a permanent digital record of intimate details you might never have chosen to share.
What readers can do
You don’t have to ditch every AI-powered tool. Instead, start with a few manageable adjustments.
Audit your voice assistants. On both iOS and Android, go to the settings for Siri, Google Assistant, or Alexa. Turn off “Improve Siri & Dictation,” delete your voice history, and disable the storage of audio recordings. On Google Assistant, look for “Voice & Audio Activity” in your Google Account settings and pause it.
Opt out of training data. In ChatGPT, navigate to Settings > Data Controls and turn off the model-improvement toggle (labeled "Chat history & training" in earlier versions and "Improve the model for everyone" in more recent ones). This prevents your conversations from being used to train future models. For Google's Gemini, you can disable "Gemini Apps Activity" in your Google Account.
Limit photo tagging and facial recognition. Google Photos and Apple Photos both offer face grouping. You can disable this feature in their respective settings. Apple’s feature runs on-device, whereas Google’s historically required cloud processing; check the latest policy.
Use local AI models. For tasks like writing assistance or image generation, consider tools that run entirely on your device rather than sending data to the cloud. Examples include llama.cpp for text generation and Apple Intelligence's on-device models in iOS.
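As a rough sketch of what "local" means in practice, here is how you might run a quantized model with llama.cpp's command-line tool. The model path is a hypothetical example; this assumes you have already built llama.cpp and downloaded a GGUF model file, and nothing in it sends data off your machine.

```shell
# Hypothetical path to a quantized model you downloaded yourself.
MODEL="$HOME/models/llama-3-8b-instruct.Q4_K_M.gguf"

if command -v llama-cli >/dev/null 2>&1; then
    # -m: local model file; -p: prompt; -n: max tokens to generate.
    # The prompt and the output never leave your device.
    llama-cli -m "$MODEL" -p "Draft a polite decline email." -n 128
else
    echo "llama-cli not found: build it from the llama.cpp repository first"
fi
```

The trade-off is speed and quality: a quantized 8B model on a laptop is noticeably weaker than a frontier cloud model, but for routine drafting tasks it keeps your text entirely on your own hardware.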
Review browser AI features. Chrome’s “Help me write” and Microsoft Edge’s Copilot often collect page content. Disable these in browser settings. Consider using a privacy-focused browser like Brave or Firefox with strict tracking protection.
Check app permissions. On your phone, review which apps have microphone, camera, and photo access. Revoke any that don't need it. Many apps quietly enable AI features that scan your photo library.
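If you have an Android phone and a computer, you can audit microphone access in bulk rather than tapping through settings. A minimal sketch using adb, assuming USB debugging is enabled (the package name queried is illustrative, and exact output varies by Android version):

```shell
# Locate adb; empty when Android platform-tools are not installed.
ADB_BIN=$(command -v adb || true)

if [ -n "$ADB_BIN" ]; then
    # List every app currently allowed the RECORD_AUDIO app-op:
    "$ADB_BIN" shell cmd appops query-op RECORD_AUDIO allow
    # Inspect one app's grants in detail (package name is an example):
    "$ADB_BIN" shell dumpsys package com.example.app | grep -i record_audio
else
    echo "adb not found; install Android platform-tools to run this audit"
fi
```

On iOS there is no equivalent command-line audit; use Settings > Privacy & Security and review the Microphone, Camera, and Photos lists by hand.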
Sources
- Heather Parry, "AI's erosion of privacy," Substack, 2026.
- OpenAI, “Data Controls FAQ,” help.openai.com.
- Google, “Manage your Gemini Apps activity,” myaccount.google.com.
- Apple, “Improve Siri & Dictation” privacy setting, iOS settings.
A full checklist of these steps is available in the original Substack piece, but the key takeaway is this: you can regain a significant degree of privacy by spending 15 minutes adjusting a handful of settings. The trade-off is a modest loss of convenience — but for many, that is a worthwhile bargain.