How AI Is Quietly Eroding Your Privacy — and What You Can Do About It

Artificial intelligence now powers everything from your phone’s voice assistant to the movie recommendations on your streaming service. The convenience is undeniable. But each interaction leaves a digital trail that companies collect, analyze, and often share in ways most users don’t fully understand. Recent reporting by Heather Parry on Substack examines how this quiet erosion of privacy has become a structural feature of modern AI systems.

This article breaks down the main ways AI tools harvest your data and offers concrete steps you can take to limit exposure.

What Happened

Over the past decade, the data collection apparatus behind AI has expanded rapidly. Voice assistants like Amazon’s Alexa and Google Assistant record snippets of conversation, even when they are not deliberately activated. Smart home cameras, such as those from Ring or Nest, stream video to cloud servers where machine learning models analyze movement and behavior. Chatbots like ChatGPT store your prompts and responses, using them to improve their models — and, in some cases, retaining them for years.

Consider a few well-documented cases:

  • Clearview AI scraped billions of facial images from public websites without people’s consent and sold the database to law enforcement agencies. The company has faced multiple lawsuits and regulatory penalties in Europe and Australia.
  • OpenAI’s ChatGPT has been the subject of privacy complaints in Germany and other European countries over how it handles user data, including difficulties in requesting deletion of personal information.
  • Smart speakers from Amazon and Google have been found to send audio recordings to third-party contractors for manual review, even when the wake word was not detected.

Heather Parry’s Substack article “AI’s erosion of privacy” highlights how these trends are not isolated incidents but part of a systemic shift: the business model of many AI services depends on harvesting data at a scale that undermines the anonymity users once expected.

Why It Matters

The erosion of privacy is not abstract. When AI systems track your voice, face, search history, and even the way you type, they build a profile that can be used for targeted advertising, credit scoring, insurance risk assessment, or surveillance. Because data is often shared across platforms, a single service can link your online activity with your offline behavior.

The problem is compounded by a lack of transparency. Few companies disclose exactly what data their AI systems collect or how long they retain it. Privacy policies are often vague, and users rarely read them. Even when they do, opting out is frequently a multi-step process buried in settings menus.

Once your data enters an AI model, you lose control. It can be recombined with other datasets to infer sensitive details — your health status, political leanings, or relationships — without your knowledge.
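This kind of recombination is often called a linkage attack, and a toy example makes the risk concrete. The sketch below joins two small hypothetical datasets on quasi-identifiers (ZIP code, birth date, and gender); every name and record in it is invented for illustration.

```python
# Toy illustration of a linkage attack: two datasets that look harmless
# on their own can be joined on quasi-identifiers (ZIP code, birth date,
# gender) to re-identify individuals. All records here are fabricated.

# "Anonymized" health dataset: no names, but quasi-identifiers remain.
health_records = [
    {"zip": "60629", "dob": "1985-03-14", "gender": "F", "diagnosis": "asthma"},
    {"zip": "60614", "dob": "1990-07-02", "gender": "M", "diagnosis": "diabetes"},
]

# Public dataset (e.g. a voter roll) with names and the same quasi-identifiers.
voter_roll = [
    {"name": "Jane Doe", "zip": "60629", "dob": "1985-03-14", "gender": "F"},
    {"name": "John Roe", "zip": "60614", "dob": "1990-07-02", "gender": "M"},
]

def link(records, roll):
    """Match 'anonymous' records to names by joining on (zip, dob, gender)."""
    index = {(p["zip"], p["dob"], p["gender"]): p["name"] for p in roll}
    matches = {}
    for r in records:
        key = (r["zip"], r["dob"], r["gender"])
        if key in index:
            matches[index[key]] = r["diagnosis"]
    return matches

print(link(health_records, voter_roll))
# {'Jane Doe': 'asthma', 'John Roe': 'diabetes'}
```

The same join works at scale whenever two datasets share enough overlapping fields, which is why removing names alone does not anonymize data.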

What Readers Can Do

You do not have to abandon technology to protect your privacy. The following steps are practical and can be implemented today:

  1. Audit your smart devices. Check which apps have permission to access your microphone, camera, or location. On both iOS and Android, you can review and revoke these permissions in the settings menu. Disable microphone access for apps that don’t strictly need it.

  2. Turn off voice recordings. Most smart assistants let you disable the storage of audio recordings. For example, on Amazon Echo, go to Settings > Alexa Privacy > Manage Your Data. You can also set recordings to be deleted automatically after a certain period.

  3. Use privacy-focused alternatives. For messaging, consider Signal or Element, both of which use end-to-end encryption. For web search, DuckDuckGo and Brave Search do not track your queries. If you use a chatbot, consider local AI models that run entirely on your device, such as open-weight models run through offline tools like Ollama, so your prompts never leave your computer.

  4. Opt out of AI training. Many services now allow you to prevent your data from being used to improve their models. OpenAI, for instance, offers a data control panel where you can opt out of training. Google’s “My Activity” page lets you pause data collection for various services.

  5. Encrypt your communications. Use a VPN when on public Wi-Fi, and enable encrypted DNS. For emails, use services like Proton Mail that offer end-to-end encryption.

  6. Delete old accounts and data. Go through your accounts and remove data you no longer need. Services like Google and Facebook have tools to download or delete your information.
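For step 6, services without a self-serve deletion tool usually accept a written request, and under laws like the GDPR or CCPA they may be obliged to act on it. The sketch below fills a simple request template; the service name, email address, and wording are illustrative placeholders to adapt, not a standard form.

```python
# Minimal sketch: generate a data-deletion request message from a template.
# The service name, email address, and wording below are placeholders for
# illustration; adapt them to your own accounts and jurisdiction.

TEMPLATE = """\
To: the privacy team at {service}

I am requesting the deletion of all personal data you hold about the
account registered to {email}, including any copies used to train or
evaluate machine learning models.

Please confirm in writing once the deletion is complete.
"""

def deletion_request(service: str, email: str) -> str:
    """Fill in the deletion-request template for one service."""
    return TEMPLATE.format(service=service, email=email)

print(deletion_request("ExampleCloud", "me@example.com"))
```

Keeping a copy of each request you send creates a paper trail if a service fails to respond.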

None of these steps guarantees complete privacy, but together they significantly reduce the amount of personal data that AI systems can collect and leverage. The goal is not paranoia but informed choice.

Sources

  • Clearview AI litigation and regulatory actions (reported by The New York Times and The Guardian, 2020–2024).
  • OpenAI privacy complaints in the European Union (German data protection authority, 2023).
  • Amazon Echo audio review practices (Bloomberg, 2019).
  • “AI’s erosion of privacy” by Heather Parry, Substack, 2026.