5 ways AI is quietly eroding your privacy (and how to fight back)

Every time you ask a chatbot a question, upload a photo to a face‑tagging app, or let an AI assistant scan your inbox, you’re handing over a piece of your private life. The trade‑off feels invisible—convenience in exchange for data you never see again. But as AI tools become embedded in everyday software, the privacy cost is getting harder to ignore.

A recent piece by Heather Parry on Substack captured what many of us sense: the tools we rely on are learning more about us than we realise, often without clear consent. The good news is that you don’t have to stop using AI. You just need to understand where the risks are and take a few deliberate steps to limit them.

What happened: the quiet data grab

The concern isn’t new, but it’s escalating. AI models are trained on vast amounts of data, and much of that data comes from real users. When you use a free chatbot or an AI‑powered photo editor, your inputs—your questions, images, voice recordings—may be stored, analysed, and used to improve the model. In many cases, that data can be reviewed by human trainers or shared with third‑party services.

Heather Parry’s Substack article highlighted how these practices often happen behind vague privacy policies. Companies like OpenAI, Google, and Meta have updated their terms in ways that allow wider data usage, but the average user rarely reads the fine print. The result: a slow, systematic erosion of privacy that most people only notice after something goes wrong—like a breach or an embarrassing data leak.

Why it matters to you

Once your data enters an AI system, you lose control over it. Even if you delete your account, the model may retain what it learned from your inputs. That can include sensitive details: health concerns, financial information, private conversations. And because AI systems can infer things you never directly stated—your approximate location, your political leanings, your emotional state—the profile built about you can be more revealing than you’d ever expect.

Beyond individual risk, there’s a broader societal cost. Widespread data collection fuels surveillance‑style advertising, predictive policing tools, and insurance scoring. The more data we hand over, the harder it becomes to opt out later.

What readers can do: five practical steps

You can reduce your exposure without sacrificing useful AI tools. Here are actions you can take today.

1. Review and limit app permissions

Check what data each AI app on your phone or browser can access. On iOS and Android, go to Settings > Apps and look for permissions like microphone, camera, contacts, and storage. Deny anything that isn’t essential for the app’s core function. A text chatbot doesn’t need your location.

2. Use privacy‑focused alternatives

Not all AI tools are equally data‑hungry. Consider running local models on your own device (e.g., Llama or Mistral via tools like Ollama) instead of sending data to a cloud server. For cloud services, look for providers that offer end‑to‑end encryption or that commit to not training on your data. Proton, for example, has begun adding AI features with stronger privacy guarantees.
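For readers comfortable with a little code, the local route is less work than it sounds. As a rough sketch, the snippet below queries a locally running Ollama server over its HTTP API at localhost:11434, so the prompt never leaves your machine. It assumes you've installed Ollama and pulled a model first (the model name llama3 is just an example).

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(prompt: str, model: str = "llama3") -> dict:
    # stream=False requests a single JSON reply instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt: str, model: str = "llama3") -> str:
    # POSTs the prompt to the local server; nothing is sent to a cloud service
    data = json.dumps(build_payload(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires Ollama running locally):
# print(ask_local("Summarize the privacy trade-offs of cloud chatbots."))
```

Because every request goes to localhost, you can confirm with a network monitor that your prompts generate no outbound traffic at all.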

3. Turn off chat history and opt out of model training

Most major AI platforms now let you disable chat history or opt out of having your conversations used for training. In ChatGPT, go to Settings > Data Controls and turn off “Improve the model for everyone.” In Google’s Gemini, you can pause the “Gemini Apps Activity” setting that saves your conversations and feeds them back into Google’s systems. Do this for every service you use regularly.

4. Never share sensitive personal information with AI chatbots

Assume that anything you type into a chatbot could be stored, reviewed, or accidentally exposed. Avoid giving out full names, addresses, phone numbers, medical details, or financial account numbers. If you need AI help with a sensitive topic, use a local model or a service that explicitly promises not to save inputs.
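When you do need to paste longer text into a cloud chatbot, a quick local scrubbing pass can catch the most obvious identifiers before anything leaves your machine. Here is a minimal, hypothetical Python sketch; the patterns are illustrative only and will not catch every format, so treat it as a first pass, not a guarantee.

```python
import re

# Hypothetical helper: redact obvious identifiers before sending text
# to a cloud chatbot. The patterns are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
# → Reach me at [EMAIL] or [PHONE].
```

Run something like this over an email or document before uploading it; anything the patterns miss, redact by hand.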

5. Regularly audit connected accounts and delete old data

Many AI tools link to your Google, Microsoft, or social media accounts. Review those connections in your account settings and remove any you no longer need. Also check the data download or export options—most services allow you to request a copy of your data and then delete it. Set a reminder every few months to clear out old chat logs, uploads, and account histories.

Staying informed

Privacy practices change quickly. What’s safe today may not be tomorrow. Follow writers like Heather Parry who track these shifts, and make it a habit to glance at a service’s privacy policy when you sign up—or better yet, before you sign up. The goal isn’t to avoid AI entirely; it’s to use it on your terms.

Sources

  • “AI’s erosion of privacy” – Heather Parry, Substack (April 2026)
  • OpenAI Privacy Policy (updated 2025)
  • Google Gemini Privacy Hub (2025)
  • Meta AI Data Practices (2025)