AI Tools Are Eroding Your Privacy: 5 Steps to Protect Your Data Now

Introduction

Every time you type a prompt into ChatGPT, upload a photo to an image generator, or ask a smart assistant a question, you’re handing over data that can be stored, analyzed, and sometimes reused. It’s easy to focus on the convenience and forget the trade‑off. But a growing number of reports—including a recent piece by Heather Parry on Substack—are drawing attention to how quickly our privacy is being chipped away as AI adoption surges. This article explains what’s happening, why it matters for everyday users, and what concrete steps you can take to reduce your exposure.

What happened

Heather Parry’s Substack article highlights the quiet erosion of privacy that comes with using popular AI tools. While the piece itself focuses on systemic concerns, the underlying issue is straightforward: many AI services collect and retain the inputs you provide—text, images, voice recordings, and even metadata like timestamps and device information—to improve their models. Companies like OpenAI, Anthropic, and Midjourney have publicly stated that they may review user conversations and uploaded content for training purposes, and some allow human reviewers to access that data. Privacy policies often mention these practices, but most users never read them.

The problem isn’t new, but the scale is. With hundreds of millions of people now using AI chatbots and generative tools, the volume of personal data being funneled into corporate servers is unprecedented. And unlike a search query, a conversation with an AI can include intimate details, medical questions, work‑related documents, or photos of your family. Once submitted, you have limited control over where that data ends up or how long it is kept.

Why it matters

For most consumers, the immediate risk isn’t a data breach—although that’s possible—but the slow normalization of surveillance‑like data collection. Every prompt you send becomes part of a training dataset that could be used in ways you didn’t consent to. Over time, these inputs can be used to build profiles, infer sensitive attributes, or even appear in outputs for other users (there have been instances where ChatGPT accidentally regurgitated personal information from its training data). Additionally, the retention of your data means that a future subpoena, merger, or policy change could expose it in ways you never anticipated.

The practical consequence is that you may be sharing more than you realize. A casual question about a health symptom, an uploaded photo of your home, or a request for advice on a legal issue all become part of a permanent record held by a third‑party company. For people in professions with confidentiality requirements—or anyone who values their digital autonomy—this is a serious concern.

What readers can do

You don’t have to stop using AI tools to protect your privacy. The following steps are practical and can be implemented today.

1. Review and adjust your privacy settings. Most AI services offer some control over data usage. In ChatGPT, for example, you can opt out of having your conversations used to improve the model under Settings → Data Controls (the exact menu labels change over time, so check the service’s current documentation). Some image generators offer similar training opt‑outs in their data or privacy settings. Take five minutes to check each service you use.

2. Avoid sharing sensitive personal information. Treat an AI chatbot like a stranger on the internet. Don’t enter your real name, address, phone number, financial details, or health information unless absolutely necessary. If you need help with a sensitive topic, consider using a local, offline model like Llama 3.2 running on your own computer through tools like Ollama.

3. Use temporary or disposable accounts when possible. A few services allow limited use without signing in; for those, use a browser in private mode. For services that do require an account, consider creating a separate email address and minimizing the personal details you provide.

4. Turn off voice recording and uploading features. Smart assistants and voice‑enabled chatbots often record and store your voice commands. Check your device settings to disable voice history or auto‑saving. On Amazon Echo, you can delete recordings daily via the Alexa app; on Google Assistant, you can turn off “Voice & Audio Activity.”

5. Choose privacy‑friendly alternatives. Several AI tools are designed with privacy as a priority. For chatbots, you can try Brave’s Leo (which discards conversations) or DuckDuckGo’s AI Chat (which anonymizes requests). For image generation, tools like Stable Diffusion can be run locally on your own hardware with no data leaving your machine. If you must use a cloud service, look for one that explicitly states they do not use your data for training (e.g., some enterprise editions).
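If you routinely paste text into cloud chatbots, one low‑effort complement to the steps above is running prompts through a local pre‑filter first, so the most obvious identifiers never leave your machine. The sketch below is a minimal, illustrative Python script: the regex patterns and placeholder labels are assumptions for demonstration only, and real PII detection requires far more robust tooling than a handful of patterns.

```python
import re

# Illustrative patterns for a few common identifiers. These are NOT
# exhaustive; real redaction needs dedicated tooling (names, addresses,
# and free-form identifiers won't be caught by simple regexes).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(prompt: str) -> str:
    """Replace obvious identifiers with placeholder tags before a prompt
    is pasted into a chatbot."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Email me at jane.doe@example.com or call 555-867-5309 about my claim."
    print(scrub(raw))
    # → Email me at [EMAIL] or call [PHONE] about my claim.
```

A filter like this is a seatbelt, not a guarantee: it reduces accidental leakage of the identifiers it knows about, but the safest option for genuinely sensitive material remains a local model, as noted in step 2.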

Sources

  • Heather Parry, “AI’s erosion of privacy,” Substack (April 2026).
  • OpenAI, “Privacy Policy” and “Data Controls FAQ,” 2025.
  • Anthropic, “Claude Privacy & Data Handling,” 2025.
  • Midjourney, “Terms of Service” (Section on Data Use).
  • The Verge, “ChatGPT chat history bug exposed user conversations,” 2023.
  • Mozilla Foundation, “Privacy Not Included: AI Chatbots,” 2025.