Privacy in the Age of AI: What Tech Experts Are Buying to Protect Their Data
As artificial intelligence becomes embedded in more products—smartphones, search engines, photo editors, even toasters—the line between convenience and surveillance keeps blurring. A recent article in VICE titled “AI Is Getting Creepy—Here’s What Tech Experts Are Buying to Stay Private” taps into a growing unease: the tools we rely on are also the ones collecting massive amounts of personal data. Whether it’s your email provider scanning messages to train its chatbots or your photo app using facial recognition on uploaded images, the average user has less control than they might think.
The good news is that privacy-focused professionals have been dealing with these threats for years, and many have settled on a short list of purchases and habits that meaningfully reduce exposure. Below is a rundown of what they’re actually buying—and doing—to keep their data out of AI’s reach.
What Happened
The VICE piece, published in late April 2026, highlights a shift in how security researchers, journalists, and privacy advocates approach everyday technology. AI-powered data collection has become so pervasive that even people who build the systems are turning to alternative products. The article reportedly includes interviews with several experts who walk through their personal tech stacks and the reasoning behind each choice.
While the original piece is behind a paywall, the core message is clear: the era of trusting defaults is over. Default browsers, default email apps, default cloud storage—all of them now feed into AI training pipelines or behavioral advertising networks. Experts are instead opting for tools that minimize data transmission and keep processing local when possible.
Why It Matters
The creepiness of AI isn’t just a vague feeling. Several real incidents have crystallized the risk:
- Photo apps like Google Photos and Apple Photos have faced legal challenges over applying facial recognition to users' photos without an explicit opt-in.
- Voice assistants (Siri, Alexa, Google Assistant) store recordings in the cloud, and contractors have listened to private conversations.
- Free email services scan message content to train spam filters and AI models, which means your private correspondence may end up as training data.
The underlying problem is that most consumer AI features rely on sending data to remote servers. Once it leaves your device, you lose control over how it’s stored, shared, or reused. The experts’ approach is to stop that flow at the source.
What Readers Can Do
You don’t need to become a privacy hermit. A few targeted purchases and habit changes can go a long way.
Hardware and Software Experts Are Buying
- Privacy-focused browsers: Firefox with Enhanced Tracking Protection set to Strict, Brave, or the open-source Mullvad Browser. These block third-party trackers and the fingerprinting scripts that AI-driven ad networks rely on.
- VPNs that don’t log: Mullvad VPN and Proton VPN are common recommendations because they accept anonymous payments and have been independently audited. A VPN hides your IP address from websites, reducing the profile of data that AI marketers can build.
- End-to-end encrypted cloud storage: Proton Drive, Sync.com, or Tresorit allow you to store files without the provider being able to read them. This prevents your documents from being scraped for AI training (a short encryption sketch after this list illustrates the idea).
- Local AI assistants: Instead of using cloud-based chatbots like ChatGPT or Gemini, some experts run local models (e.g., through Ollama or LM Studio) on their own machines, so their questions never leave the laptop (a local-model sketch also follows this list).
- Hardware with physical kill switches: Laptops and phones that have a physical camera shutter or a hardware switch for the microphone are growing in popularity. They guarantee that no software can turn them on remotely.
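For readers wondering how "the provider can't read it" works in practice, the principle is that files are encrypted on your own device before anything is uploaded, so the service only ever stores ciphertext. Here is a minimal sketch of that idea in Python using the `cryptography` package; the filenames and the commented-out upload call are placeholders, not any provider's actual client code.

```python
# Minimal sketch of client-side ("end-to-end") encryption before upload.
# Requires: pip install cryptography
# Filenames and the upload step are placeholders, not a provider's real API.
from cryptography.fernet import Fernet

# Generate a key and keep it locally; whoever holds the key can read the files.
key = Fernet.generate_key()
with open("storage.key", "wb") as key_file:
    key_file.write(key)

cipher = Fernet(key)

# Encrypt a document on your own machine before it goes anywhere.
with open("tax_return.pdf", "rb") as plain_file:
    ciphertext = cipher.encrypt(plain_file.read())

with open("tax_return.pdf.enc", "wb") as enc_file:
    enc_file.write(ciphertext)

# Only the ciphertext would ever leave the device, so the provider stores
# bytes it cannot read and the document cannot be scraped for AI training.
# upload("tax_return.pdf.enc")  # placeholder for a provider's upload call
```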
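For the local-assistant route, Ollama runs models on your own hardware and exposes a small HTTP API at `localhost:11434`. A rough sketch of querying it from Python, assuming Ollama is installed and running and a model such as `llama3` has already been pulled with `ollama pull llama3` (the model name and prompt are just examples):

```python
# Rough sketch: query a locally running Ollama model instead of a cloud chatbot.
# Assumes Ollama is serving on its default port (11434) and a model has been
# pulled, e.g. `ollama pull llama3`. Model name and prompt are illustrative.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Explain why local inference helps privacy, in two sentences.",
        "stream": False,  # ask for a single JSON reply instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()

# The prompt and the answer never leave this machine.
print(resp.json()["response"])
```

Because the request goes to localhost, nothing in the exchange crosses the network at all.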
Free Habits That Make a Difference
- Review app permissions: Go through your phone's settings and revoke camera, microphone, and location access for apps that don't need them; many games and utility apps request these permissions unnecessarily. (A sketch after this list shows how Android users can script the same audit with adb.)
- Use burner accounts: For any service that doesn't require your real identity (like newsletters or forum comments), create a separate email alias. Services like SimpleLogin or DuckDuckGo Email Protection let you do this easily.
- Turn off “improve the product” settings: Almost every tech company buries a toggle for sharing usage data and content to improve its AI, and it is frequently on by default. Dig into your settings and disable it. The experience won't get worse; you'll just lose a few “recommended” features.
- Prefer local over cloud: When possible, use offline versions of apps. For example, edit photos in a local tool like Darktable rather than uploading them to Adobe's cloud, where content-analysis settings can allow your images to be analyzed for machine learning unless you opt out.
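For Android users comfortable with a command line, the permission review mentioned above can also be scripted over adb instead of clicking through settings. A hedged sketch in Python, assuming adb is installed, USB debugging is enabled, and a device is connected; the package name is hypothetical:

```python
# Sketch: audit and revoke an Android app's runtime permissions via adb.
# Assumes adb is installed, USB debugging is enabled, and a device is attached.
# The package name is hypothetical; substitute a real app's package.
import subprocess

PACKAGE = "com.example.flashlight"

# Dump the package's state and keep only the permissions marked as granted.
dump = subprocess.run(
    ["adb", "shell", "dumpsys", "package", PACKAGE],
    capture_output=True, text=True, check=True,
).stdout
granted = [line.strip() for line in dump.splitlines() if "granted=true" in line]
print("\n".join(granted))

# Revoke a runtime permission the app does not need. `pm revoke` only works
# for permissions the app declares and that Android grants at runtime.
subprocess.run(
    ["adb", "shell", "pm", "revoke", PACKAGE, "android.permission.CAMERA"],
    check=True,
)
```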
What to Keep in Mind
No tool guarantees complete privacy. A VPN doesn’t protect you from the websites you log into. Encrypted storage won’t stop you from voluntarily sharing a file. And local AI models may be less capable than cloud ones. The goal is defense in depth—layering several small protections so that no single breach exposes everything.
The VICE article and the experts it cites stress that the most effective step is simply to become more aware of where your data goes. Treat each new app or service as a potential leak until proven otherwise.
Sources
- VICE. “AI Is Getting Creepy—Here’s What Tech Experts Are Buying to Stay Private.” April 29, 2026.
- Pew Research Center. “Experts Say the ‘New Normal’ in 2025 Will Be Far More Tech-Driven.” February 2021. (Context on long-term trends.)
- Mozilla Foundation. “Privacy Not Included” guides on browsers and AI assistants. (General reference for privacy practices.)