Your AI Assistant Is Watching: How to Protect Your Privacy
If you use a voice assistant like Google Assistant or Siri, a chatbot like ChatGPT, or an image generator like Midjourney, you’re giving those companies a steady stream of personal data. Most of the time it’s not obvious how much they keep, how long they store it, or what they do with it after you’ve finished your session. This article explains the main privacy risks in today’s popular AI tools and gives you concrete steps you can take to reduce your exposure without quitting AI altogether.
What Happened
Recent reporting and policy changes have made it clearer how deeply AI tools collect data. OpenAI, the company behind ChatGPT, states in its privacy policy that conversations can be used to improve and train its models unless users actively opt out. Google stores voice recordings from Google Assistant and other interactions through its Voice & Audio Activity setting; users can delete that history, but it is not deleted by default. Midjourney retains the prompts you submit and the images generated from them, and by default those images are publicly visible unless you change your privacy settings.
The underlying pattern is the same: AI services are primarily data businesses. Your inputs—text, voice, images, location data, even how you interact with the interface—are logged, analyzed, and often used to refine the models. This is not necessarily malicious, but it creates privacy risks that many consumers don’t fully consider when they first start using these tools.
A recent article on Substack by Heather Parry titled “AI’s erosion of privacy” captured the growing unease among privacy-conscious users and researchers. The piece highlighted how the convenience of AI often comes with invisible costs to personal data integrity.
Why It Matters
There are several concrete reasons to care about this data collection:
- Data breaches. AI companies hold massive databases of user conversations and images. If those databases are compromised, your private conversations, photos, and personal information could leak.
- Model poisoning and unintended exposure. Because inputs are used to train models, your private data might inadvertently appear in someone else’s output. There have been cases where sensitive information leaked through chatbot responses.
- Lack of control. Most AI tools do not give you fine-grained control over which data is retained and for how long. The default settings usually favor the company, not you.
These risks are not theoretical. In the past year, several AI companies have faced scrutiny from regulators and journalists about their data practices. The trend is clear: if you use these tools, you are surrendering more information than you probably realize.
What Readers Can Do
You don’t have to stop using AI to protect your privacy. Here are practical steps you can take, organized from easiest to most involved.
1. Adjust Your Settings
- ChatGPT (OpenAI): Open Settings and look under Data Controls for the option to opt out of model training (the exact label has changed over time; in earlier versions it was tied to turning off chat history). You can also export and delete your data from the same menu.
- Google Assistant: Go to your Google Account, find “Voice & Audio Activity,” and turn it off or delete past recordings. You can also set recordings to auto-delete on a schedule (Google currently offers 3-, 18-, or 36-month options).
- Midjourney: Use the “Stealth Mode” if you’re on a paid plan, or avoid uploading images you’d rather not have stored. Note that free users’ generations are public by default.
- Siri and Alexa: Both let you delete voice recordings in their respective apps. On iPhone, you can also stop Siri from always listening by turning off Listen for “Hey Siri” under Settings > Siri & Search.
2. Use Incognito or Private Modes When Available
Some AI tools now offer temporary sessions that do not store your data. ChatGPT’s Temporary Chat feature, for example, keeps the conversation out of your saved history. Use these modes for sensitive queries.
3. Choose Local or Privacy-Focused Alternatives
The most effective way to limit data collection is to run AI models on your own device. Open-source models like Llama 2 (from Meta) can be run entirely offline using tools like Ollama or LM Studio. This means your data never leaves your computer. Other privacy-respecting options include browser-based AI that processes locally (e.g., some newer web AI features in Chrome or Edge, though check their data policies).
For image generation, consider Stable Diffusion, which you can run locally if you have a decent graphics card. There are also hosted services that claim not to store your prompts, but you have to read their privacy policies carefully.
4. Read Privacy Policies (Yes, Really)
This sounds tedious, but you don’t need to read every section. Search for terms like “collect,” “store,” “use for training,” “third party,” and “retention.” If a policy is vague about how long data is kept or whether you can delete it, that’s a red flag.
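If you’re comfortable with a little scripting, you can automate the keyword search described above. The sketch below is a hypothetical example (the term list and `find_red_flags` helper are illustrative, not from any official tool): it counts how often each red-flag phrase appears in a policy’s text, so you know which sections deserve a closer read.

```python
import re

# Terms worth flagging, mirroring the suggestions above; extend as needed.
RED_FLAG_TERMS = [
    "collect",
    "store",
    "use for training",
    "third party",
    "retention",
]

def find_red_flags(policy_text: str) -> dict:
    """Count case-insensitive occurrences of each red-flag term."""
    text = policy_text.lower()
    return {
        term: len(re.findall(re.escape(term), text))
        for term in RED_FLAG_TERMS
    }

# A made-up policy excerpt for demonstration purposes.
sample = (
    "We collect and store your prompts, which we may use for training. "
    "We share data with third party vendors; retention periods vary."
)
print(find_red_flags(sample))
```

Paste a policy’s text into `sample` (or read it from a file) and scan the counts: a policy that mentions “third party” a dozen times but “retention” never is telling you something.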
Sources
- OpenAI Privacy Policy (current version). Describes data use for training unless users opt out via settings.
- Google Support pages on Voice & Audio Activity and data deletion options.
- Midjourney Documentation on privacy and image visibility settings.
- Heather Parry, “AI’s erosion of privacy,” Substack, April 2026.
- Meta’s Llama 2 documentation outlining offline and on-device capabilities.
None of this means you should panic or abandon AI. But it does mean you should treat these tools with the same caution you use for any other service that asks for your data. A few minutes changing settings or choosing a local alternative can make a real difference to your digital privacy.