OpenAI’s new privacy filter automatically detects your personal data in chats – here’s how to turn it on

If you’ve been using ChatGPT for work or personal tasks, you’ve probably wondered at some point: Am I accidentally sharing my phone number, address, or financial details with the model? That concern is legitimate — incidents of sensitive data appearing in AI conversations have been reported since the tools launched. To address it, OpenAI appears to have quietly rolled out a built-in privacy filter that can detect and redact personally identifiable information (PII) before it leaves your chat.

What follows is a practical guide based on a recent report from QUASA Connect (April 27, 2026) and general knowledge of how similar tools work. I’ll explain what the filter does, how to enable it, and — just as importantly — what it doesn’t protect.

What happened

According to the QUASA Connect article, OpenAI has released a privacy filter that uses natural language processing to automatically identify common types of PII within ChatGPT conversations. When the filter is active, information such as Social Security numbers, credit card numbers, phone numbers, email addresses, and physical addresses is detected and redacted in real time. The filter is embedded directly into the chat interface, meaning you don’t need third‑party tools or manual review.

The feature appears to be available on both free and paid tiers of ChatGPT, though OpenAI has not yet published a formal announcement or a detailed support page. The QUASA Connect piece is the primary source at this point, so some details — such as the exact list of PII types and configuration options — remain unconfirmed by the company itself.

Why it matters

For everyday users and small business owners, the filter addresses a real pain point. Many people use ChatGPT to draft emails, summarize documents, or answer customer inquiries — tasks that can involve real personal data. Without a filter, that data is sent to OpenAI’s servers and stored temporarily (or longer, depending on your settings). Even if you trust the company’s privacy policies, accidental exposure can still happen, for example through a shared workspace or someone glancing at your screen.

The filter reduces that risk by masking sensitive characters before the message is submitted. For example, if you type “my SSN is 123-45-6789,” the filter may replace the digits with asterisks or a placeholder. That means the sensitive numbers never reach the model’s memory or your chat history.
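OpenAI hasn’t published how the filter works internally. Purely as an illustration of the general technique, here’s a minimal Python sketch of pattern-based PII redaction; the patterns, labels, and placeholder format are my own simplifications, not OpenAI’s, and a production detector would rely on NLP models rather than a handful of regexes:

```python
import re

# Illustrative patterns only; real PII detection needs far more
# robust rules (context, checksums, international formats, etc.).
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[-\s]?){3}\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("my SSN is 123-45-6789 and my email is jane@example.com"))
# my SSN is [REDACTED SSN] and my email is [REDACTED EMAIL]
```

The key idea is the same as the reported behavior: matching spans are masked before the text goes anywhere, so the raw values never enter the conversation.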

This is a meaningful step beyond the existing “incognito mode” (which disables chat history) or data deletion tools. Those features limit retention and future training, but they don’t prevent the data from being processed in the moment. The filter stops the sensitive data from being submitted at all.

What you can do

Based on the report and how similar controls are typically implemented, here’s a general approach to enabling the filter:

  1. On the ChatGPT web app: Look for “Privacy” or “Data controls” under your account settings (usually via your profile icon). There may be a toggle labeled something like “Auto‑redact personal information” or “PII filter.” Turn it on.

  2. On mobile: The same option should appear in the app’s settings menu, likely under “Account” or “General.”

  3. For API users: The filter may be available as a parameter in API calls (e.g., pii_filter: true), but you’ll need to check OpenAI’s API documentation for the exact implementation.
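To make the API scenario concrete, here’s a hypothetical sketch of what a request body might look like if the pii_filter parameter from the report exists. To be clear: this flag is speculative, appears only in the QUASA Connect example, and is not in OpenAI’s published API reference at the time of writing, so verify against the official docs before relying on it:

```python
# Hypothetical: builds the JSON body you would POST to the
# chat completions endpoint IF the reported `pii_filter`
# parameter exists. The flag is unconfirmed by OpenAI.
def build_chat_request(message: str, pii_filter: bool = True) -> dict:
    return {
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": message}],
        # Speculative parameter taken from the QUASA Connect report;
        # not present in OpenAI's documented request schema.
        "pii_filter": pii_filter,
    }

payload = build_chat_request("Summarize this customer email.")
```

If the parameter turns out not to exist, the safe fallback is to redact PII client-side before the text ever reaches the API.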

But be aware of the limitations:

  • The filter is not perfect: it processes text only, so it may miss unusual formats, non‑English data, or information embedded in images or voice recordings.
  • Custom GPTs and third‑party plugins may not inherit this setting; check each one’s configuration separately.
  • The report does not clarify whether the filter is opt‑in or opt‑out by default. Until OpenAI clarifies, assume you need to enable it manually.
  • Redaction happens within the chat, but the original data might still be held temporarily by the filter system for processing. Check OpenAI’s privacy policy for retention specifics.

What the filter does not cover (yet)

  • Voice recordings (speech‑to‑text transcripts are not filtered)
  • Uploaded images or PDFs (unless the text is extracted and processed separately)
  • Data shared with third‑party GPTs or connected apps
  • Enterprise accounts, which may have different admin‑managed controls (check with your admin)

Best practices going forward

Even with the filter active, it’s wise to develop careful habits:

  • Do not type real PII unless absolutely necessary. The filter is a safety net, not a guarantee.
  • Review your chat history periodically and delete conversations that contain any sensitive information.
  • Keep an eye on OpenAI’s official documentation for updates — the company may add more PII categories or improve detection accuracy in future releases.

For now, the filter is a solid step forward in consumer AI privacy. It gives you a practical, built‑in layer of protection that wasn’t there before. Turn it on, but don’t let it replace your own caution.

Sources

  • QUASA Connect, “OpenAI Privacy Filter: The Quietly Released PII Guardian That Finally Solves Enterprise Data Leakage” (April 27, 2026). [Link to article]
  • OpenAI official documentation and support pages (referenced for general context; specific filter details still unconfirmed by the company at time of writing).