Stop Letting ChatGPT and Other AI Chatbots Train on Your Data – Here’s How

Every time you ask an AI chatbot a question, you may be giving away more than you intend. Many popular services, including ChatGPT, Google Gemini, and Microsoft Copilot, use your conversations to train and improve their language models by default. That means your personal questions, work documents, or even casual chats can become part of the system’s training data.

If that makes you uneasy, you’re not alone. A recent article from Inc. highlighted the issue and provided a clear guide to opting out. As of May 2026, all three major platforms offer settings that let you stop your data from being used for training. Here’s what you need to know and how to change those settings.

What Happened

The Inc. article, “Stop Letting ChatGPT and Other AI Chatbots Train on Your Data. Here’s Why—and How” (2026), brought attention to the fact that most users are unaware their conversations are being logged and analyzed for model improvement. The issue has been covered by Stanford HAI, the Transparency Coalition, and other outlets, emphasizing that the practice is widespread but often optional.

Earlier coverage, such as a Washington Post column in December 2025, also noted that ChatGPT’s year-end review feature could expose the depth of data collected. The message has been consistent: if you haven’t changed your settings, your chats are likely being used for training.

Why It Matters

The risk is not just theoretical. Conversations you have with a chatbot can include private information—medical symptoms, financial details, confidential work projects, or personal opinions. Although companies say they anonymize data, there have been incidents where user data was exposed. For instance, in 2023 a bug in an open-source library used by ChatGPT briefly exposed some users' chat titles to other users. No system is foolproof.

Moreover, the data you share may be used to train models that later generate content resembling your private inputs. Once trained, it is nearly impossible to remove your contribution from a model. The safest approach is to prevent your data from being collected in the first place.

What Readers Can Do

Fortunately, opting out is straightforward. Here are the steps for the three most commonly used AI chatbots.

ChatGPT (OpenAI)

  1. Click on your profile icon (top right) and select “Settings.”
  2. Navigate to the “Data Controls” tab.
  3. Toggle off “Improve the model for everyone.” This stops future conversations from being used for training. It also prevents your chat history from being reviewed by human trainers.
  4. You can also delete past conversations from the same menu under “History and Training.”

Note: This setting applies to both free and ChatGPT Plus accounts, though availability may vary by region. Enterprise and API customers fall under different policies; OpenAI states it does not train on their data by default.

Google Gemini

  1. Go to your Google Account settings (myaccount.google.com).
  2. Under “Data & privacy,” find “Your data & privacy options” and click on “Gemini Apps Activity.”
  3. Turn off “Gemini Apps Activity” (or in newer interfaces, you may see a toggle labeled “Improve our services with your conversations”).
  4. You can also delete existing activity from the same page.

Be aware that disabling this may also affect some personalization features, such as remembering your preferences across sessions.

Microsoft Copilot (Personal Accounts)

  1. Open Copilot (either on the web or in Windows).
  2. Click on the settings gear icon (top right).
  3. Under “Data & Privacy,” find “Improve Copilot with your data” and turn it off.

Note: For Microsoft 365 Copilot (enterprise) users, your organization’s policies may differ. Check with your administrator.

Other Chatbots (Claude, Perplexity, etc.)

Each service has its own privacy controls. Generally, look for a “Privacy” or “Data” section in your account settings. Claude by Anthropic, for example, offers an opt-out for training on your conversations. Perplexity allows you to disable “Use my data to improve Perplexity” in settings. Always check the privacy policy for the latest options.

Extra Steps for Better Privacy

  • Use “temporary chat” modes when available. ChatGPT, for example, offers a “Temporary Chat” feature for one-off conversations that are not saved.
  • Avoid sharing sensitive information even after opting out. No setting is absolute, and model training may still occur in aggregated or anonymized form.
  • Delete your chat history regularly. Most platforms allow bulk deletion.

Sources

  • Inc. (2026). “Stop Letting ChatGPT and Other AI Chatbots Train on Your Data. Here’s Why—and How.”
  • Stanford HAI (2025). “Be Careful What You Tell Your AI Chatbot.”
  • Transparency Coalition (2026). “TCAI Guide: How to stop your images and data from being used to train AI.”
  • The Washington Post (2025). “ChatGPT’s year-end review knows way too much. How to fix your privacy settings.”

This article is based on publicly available information as of May 2026. Settings may change; always verify within the service you use.