Protect Your Privacy in an AI World: A Practical Guide

The convenience of AI tools comes with a growing privacy cost. Every query you type into ChatGPT, Google Gemini (formerly Bard), or similar services generates data that companies can use for training, analytics, and sometimes sharing with third parties. A recent piece by Heather Parry on Substack lays out how this erosion of privacy is accelerating as AI becomes more embedded in daily life. Whether you use AI for work, study, or casual exploration, it’s worth understanding what’s happening and how to take back some control.

What happened

AI tools collect data in several ways. First, the prompts and responses you exchange are stored by the provider. Companies like OpenAI and Google have disclosed that they retain chat logs and associated metadata (IP addresses, timestamps, device info). This data is often used to improve models—unless you explicitly opt out. In some cases, companies have faced scrutiny after employees or contractors reviewed private conversations for safety or quality checks, raising questions about who has access.

A few high-profile incidents have made these risks tangible. In March 2023, a bug in an open-source library used by ChatGPT briefly exposed some users’ chat titles and partial payment details to other users. More recently, reports have emerged of AI chatbots inadvertently revealing sensitive data from other users due to design flaws. Companies patch these issues, but the underlying habit of hoarding vast amounts of personal input remains.

Why it matters

When you use an AI assistant, you’re handing over more than just a question. A medical inquiry, a personal story, a draft of a sensitive email—all become part of a corporate data set. Unlike a search engine, many AI tools are designed to maintain long conversations and retain context across sessions. That context can be mined for behavioral profiles, used to train future models, or accidentally exposed if security lapses occur.

Even if you trust the company today, the data trail doesn’t disappear when policies change. Acquisitions, leadership shifts, or new legal regimes could repurpose that information in ways you didn’t agree to. The erosion is slow but real: you begin to share more than you intended because the tool makes it so easy.

What readers can do

Here are practical steps to limit exposure without abandoning AI tools entirely.

Adjust privacy settings now. Most major AI platforms offer a way to prevent your conversations from being used for model training. In ChatGPT, go to Settings > Data Controls and turn off the training toggle (labeled “Chat history & training” in older versions, “Improve the model for everyone” in newer ones). In Google Gemini (formerly Bard), turn off Gemini Apps Activity in your Google account’s activity controls. These settings do not delete past conversations, but they stop future ones from feeding the training pipeline. Do this for every account you use.

Limit what you share in prompts. Treat each AI session as if it could be read later. Avoid entering passwords, full names, addresses, health details, or financial information. Use placeholders or vague descriptions when possible. If a tool asks for personal context to personalize responses, consider whether you truly need that feature.
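One way to make the placeholder habit automatic is to scrub obvious identifiers before a prompt ever leaves your machine. The sketch below is a minimal, illustrative redactor using standard-library regular expressions; the patterns are assumptions of mine, will miss plenty of real-world PII, and should be treated as a seatbelt rather than a guarantee.

```python
import re

# Rough patterns for a few common identifiers. Illustrative only --
# this is not a complete PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(
        r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"
    ),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches with placeholder tokens before the prompt is sent."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

For example, `redact("Email me at jane@example.com")` returns `"Email me at [EMAIL]"`. Running prompts through a filter like this, even an imperfect one, turns “be careful what you type” from a resolution into a default.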

Delete conversation history regularly. Most services let you clear your history manually; ChatGPT and Gemini both offer a clear-history option. Some also support auto-delete windows (Gemini, for instance, can auto-delete activity after 3, 18, or 36 months). Set a reminder to clear your history every month.

Review privacy policies critically. Look for what data is collected, how long it’s stored, whether it’s shared with third parties, and what happens if you delete your account. Companies have different definitions of “anonymized”—some still link data to a user identifier. Don’t assume “anonymized” means private.

Consider alternative tools. If you handle sensitive data frequently, look into local-first AI assistants that run on your own device, such as Ollama or GPT4All. These models run entirely on your hardware and do not send your prompts to a remote server. The trade-off is that local models may be slower or less capable, but for tasks like drafting or note summarization they work well.
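To make the local-first option concrete: Ollama exposes an HTTP API on your own machine (by default at localhost:11434), so prompts never cross the network. The sketch below assumes Ollama is installed and a model (here “llama3”, an assumed model name) has already been pulled; it uses only the Python standard library.

```python
import json
import urllib.request

# Ollama's default local endpoint; requests stay on your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Assemble the JSON body Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local Ollama server and return its reply."""
    body = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the endpoint is local, nothing in this flow touches a third-party data pipeline; there are no training opt-outs to hunt for, because there is no remote log in the first place.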

Use a separate email or account for AI services. Keep your primary login separate from your AI accounts. This reduces the cross‑linking of data between services like Google Workspace and Gemini.

Stay informed. Privacy practices change. Subscribe to updates from credible sources—the Electronic Frontier Foundation, Mozilla’s Privacy Not Included, or writers like Heather Parry who track the privacy implications of new technology. The landscape shifts fast, and what’s safe today may not be next month.

Sources

  • Heather Parry, “AI’s erosion of privacy,” Substack (2026).
  • OpenAI, “Data controls for ChatGPT,” help page (accessed 2025).
  • Google, “Gemini Apps Privacy,” support page (accessed 2025).
  • Electronic Frontier Foundation, “AI and Privacy,” ongoing coverage.

The goal isn’t to stop using AI—it’s to use it with open eyes and a few simple habits that keep your personal information yours. The settings exist. The knowledge is available. Taking ten minutes to configure your accounts will make a real difference.