Never Share These 5 Personal Details With an AI Chatbot — Or Your Money Could Be at Risk

AI chatbots like ChatGPT, Google Gemini, and Claude have become everyday tools for quick answers, drafting emails, and even managing personal tasks. But as adoption grows, so do warnings from security experts about a simple mistake: treating these platforms like private, confidential assistants.

In April 2026, The Washington Post published a column titled “Don’t tell your AI chatbot these 5 things to keep your money safe,” drawing on recent research and incident reports. The core message is straightforward: chatbots store and sometimes share your conversations, and that data can be accessed by hackers, used in targeted scams, or even accidentally exposed by the service itself.

What happened

Several high-profile incidents have underscored the risk. In February 2026, the BBC reported that a security researcher was able to hack both ChatGPT and Google’s AI in under 20 minutes—demonstrating how quickly an attacker could extract sensitive information if it existed in a chat history. Meanwhile, the National Council on Aging highlighted in its March 2026 “Top 5 Financial Scams” report that AI-generated content is increasingly used in personalized phishing attacks. And NerdWallet’s analysis from February 2026 warned that while AI can assist with basic financial questions, it should never be treated as a secure vault for personal data.

Why it matters

The danger is not that the chatbot itself is malicious, but that users treat it with the same trust they would a human confidant or banker. Chat services log conversations for training and improvement, and those logs can be searched or leaked. Even temporary chats in "incognito" mode may still be retained for a period. If you type your bank account number, security answer, or password into a prompt, you've essentially broadcast that data to a system that could be compromised.

The Better Business Bureau (BBB) and law enforcement agencies specifically caution that scammers now use AI tools to mimic the tone and content of real conversations—so if you mention that you're about to make a large withdrawal, a threat actor could use that detail in a follow-up phishing call.

What readers can do

The Washington Post and other experts agree on five categories of information you should never disclose to an AI chatbot:

1. Bank account numbers and routing numbers. Even if you’re asking for help balancing a checkbook, do not paste an actual account number. Use fictional placeholders like “$500 in checking.”

2. Your Social Security Number or Tax ID. No chatbot needs this number for any legitimate financial task. If a prompt asks for it, that’s a red flag.

3. Login credentials or passwords. Never type a password, even in an example. Treat every chatbot session like a public conversation.

4. Answers to common security questions. It may seem harmless to say "my mother's maiden name is Smith" or "my first pet was Max," but many accounts still use those exact answers for password resets. If an attacker gets your chat logs, they've unlocked your recovery chain.

5. Real-time investment or trading decisions. It’s tempting to ask “Should I sell my Apple shares now?” while logged into your brokerage. But that information can be scraped or used in market manipulation schemes. Keep trading decisions private and offline.

How to use chatbots safely

  • Assume everything you type can be read by someone, someday. Use generic examples instead of real data.
  • Avoid logging into third-party services via a chatbot. If you want to check a balance, go directly to your bank’s website or app.
  • Use disposable email accounts or temporary sessions for chatbot interactions that require any personal info.
  • Regularly clear your chat history, especially if the platform saves it by default.
  • Consider separate chatbots for work vs. personal tasks to limit cross-contamination of sensitive data.
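For readers comfortable with a little scripting, the "use generic examples instead of real data" advice can be partly automated. Below is a minimal, illustrative sketch of a pre-paste scrubber that masks a few common sensitive patterns before text goes into a prompt. The patterns and placeholder names are assumptions for demonstration only — real account and ID formats vary widely, so this should be treated as a starting point, not a guarantee.

```python
import re

# Illustrative patterns only -- not an exhaustive or authoritative list.
PATTERNS = [
    # U.S. Social Security Number format: 123-45-6789
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    # Long unbroken digit runs, as in bank account or routing numbers
    (re.compile(r"\b\d{9,17}\b"), "[ACCOUNT-NUMBER]"),
    # "password: ..." or "passcode = ..." style credential disclosures
    (re.compile(r"(?i)(password|passcode)\s*[:=]\s*\S+"), r"\1: [REDACTED]"),
]

def scrub(text: str) -> str:
    """Replace sensitive-looking substrings with generic placeholders."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(scrub("My SSN is 123-45-6789 and my password: hunter2"))
# prints: My SSN is [SSN] and my password: [REDACTED]
```

A filter like this only catches predictable formats; it won't flag a security-question answer typed in plain English, so the habits in the list above still matter most.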

Simply put, treat a chatbot like a public terminal in a coffee shop: fine for general questions, no place for private or financial details.

Sources

  • The Washington Post, “Don’t tell your AI chatbot these 5 things to keep your money safe,” April 25, 2026.
  • National Council on Aging, “Top 5 Financial Scams Targeting Older Adults,” March 17, 2026.
  • NerdWallet, “Should You Use AI for Personal Finance?,” February 27, 2026.
  • BBC, “I hacked ChatGPT and Google’s AI – and it only took 20 minutes,” February 18, 2026.