<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Enterprise Data Privacy on BriefArc</title><link>https://briefarc.com/tags/enterprise-data-privacy/</link><description>Recent content in Enterprise Data Privacy on BriefArc</description><image><title>BriefArc</title><url>https://briefarc.com/images/og-cover.png</url><link>https://briefarc.com/images/og-cover.png</link></image><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 27 Apr 2026 15:31:27 +0000</lastBuildDate><atom:link href="https://briefarc.com/tags/enterprise-data-privacy/index.xml" rel="self" type="application/rss+xml"/><item><title>Privacy risks to watch and simple ways to protect yourself</title><link>https://briefarc.com/posts/privacy-risks-to-watch-and-simple-ways-to-protect-yourself/</link><pubDate>Mon, 27 Apr 2026 15:31:27 +0000</pubDate><guid>https://briefarc.com/posts/privacy-risks-to-watch-and-simple-ways-to-protect-yourself/</guid><description>OpenAI quietly rolled out a privacy filter that automatically detects and redacts personally identifiable information (PII), such as Social Security numbers and credit card numbers, in ChatGPT conversations. This article explains what the filter covers, how to enable it, and its limitations, so you can use AI tools with greater confidence.</description></item></channel></rss>