Keep Your Money Safe: 5 Things You Should Never Tell an AI Chatbot

AI chatbots like ChatGPT, Claude, and Google’s Gemini have become everyday tools for drafting emails, summarizing documents, and even brainstorming financial decisions. The convenience is real. But the risks are often overlooked.

A recent column in The Washington Post (April 25, 2026) laid out five categories of information you should never share with an AI chatbot if you want to keep your money safe. The advice is worth taking seriously: in a February 2026 BBC investigation, a researcher tricked ChatGPT and Google’s AI into revealing personally identifiable information within 20 minutes, simply by asking the right questions in the right way.

What happened

The Post article, based on interviews with security researchers and consumer protection experts, identifies specific types of data that pose high financial risk when shared with chatbots. These aren’t hypothetical scenarios. NerdWallet, in a February 2026 guide on using AI for personal finance, warned that “AI tools are not designed to keep secrets the way a bank or a financial advisor does.” Data entered into a chatbot may be stored for training, reviewed by human moderators, or accessed through a data breach. The National Council on Aging also lists AI-related scams among the top threats targeting older adults.

Why it matters

The problem isn’t that the chatbot itself is malicious. It’s that the information you provide can be used against you in a variety of ways:

  • Scammers can trick the chatbot into revealing details you’ve shared previously.
  • Your conversation history could be compromised if your account is hacked.
  • Some chatbots retain logs that employees or contractors can review.
  • Criminals can use small pieces of personal data—like your mother’s maiden name or the bank you use—to build a profile for identity theft or social engineering attacks.

Once that information is out there, you can’t take it back. You can delete a conversation, but you have no control over whether copies still exist on the service’s servers or have already been fed into training data.

What readers can do

Here are the five categories of information the Washington Post article warns you to keep out of any AI chatbot conversation.

1. Full bank account or credit card numbers
Even if you’re using a chatbot to help you track spending or compare credit cards, never type your actual account numbers. A chatbot can work just fine with approximate figures or labels like “my main checking account.” The risk of a data leak or misuse far outweighs the tiny convenience gain, and a simple local check, sketched after item 2, can catch accidental pastes.

2. Social Security or government ID numbers
This should be obvious, but people sometimes paste tax forms or ID documents into chatbots to ask for help interpreting them. Do not do that. Even partial digits, such as the last four, can be combined with other data to pass identity checks on your accounts.
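
The first two rules can be enforced mechanically before anything leaves your machine. The short Python sketch below is not from any of the cited articles (the function and pattern names are invented for illustration): it masks likely card or account numbers, defined here as 13- to 19-digit runs that pass the Luhn checksum, along with SSN-shaped patterns. A real redaction tool would cover many more formats.

    import re

    def luhn_ok(digits: str) -> bool:
        """Return True if a digit string passes the Luhn checksum used by payment cards."""
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:  # double every second digit, counting from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    # Candidate card/account numbers: 13-19 digits, optionally separated by spaces or dashes.
    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")
    # US Social Security numbers: 3-2-4 digits with optional separators.
    SSN_RE = re.compile(r"\b\d{3}[- ]?\d{2}[- ]?\d{4}\b")

    def scrub(text: str) -> str:
        """Replace likely card numbers and SSNs with placeholders before pasting anywhere."""
        def mask_card(m):
            digits = re.sub(r"\D", "", m.group())
            return "[CARD]" if luhn_ok(digits) else m.group()
        text = CARD_RE.sub(mask_card, text)
        return SSN_RE.sub("[SSN]", text)

    print(scrub("Card 4111 1111 1111 1111, SSN 123-45-6789"))
    # -> Card [CARD], SSN [SSN]

Running text through a filter like this is no substitute for judgment, but it catches the most common slip: pasting a statement or form without noticing the numbers buried in it.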

3. Passwords, PINs, or security answers
Never share passwords or PINs. And be careful with security questions: if you tell a chatbot your mother’s maiden name or the street you grew up on, you’ve just handed over answers that many financial institutions use for identity verification. Use generic descriptions instead (“my first pet’s name” rather than “Fluffy”).

4. Personal financial documents like tax returns or bank statements
These documents contain a dense mix of sensitive data: your full name, address, employer, account numbers, and income details. NerdWallet’s 2026 guide warns that even anonymized versions can be re-identified by determined attackers. If you need help understanding a document, share only a redacted excerpt, or consult a professional directly.

5. Login credentials for financial accounts
No legitimate financial tool or AI assistant should ever ask for your username and password. If a chatbot seems to request them—even as part of a “simulation”—stop immediately. That is a red flag for a scam or a data harvesting attempt.

Best practices for safe AI use

  • Treat every chatbot conversation as if it could become public tomorrow.
  • Use generic placeholders for personal information (e.g., “my savings account with about $5,000”); a short sketch of this habit follows the list.
  • Review your chatbot’s privacy settings and data retention policies. Most allow you to disable conversation history saving.
  • Never paste full documents or screenshots containing sensitive details.
  • If you suspect you’ve already shared something sensitive, change related passwords and enable two-factor authentication on your financial accounts.
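
To make the placeholder habit concrete, here is a minimal sketch of the idea. Every name, number, and mapping in it is invented for illustration; the point is the habit of swapping in generic labels locally, not any particular tool.

    # Illustrative only: every name and value here is fictional.
    PLACEHOLDERS = {
        "Jane Q. Doe": "the account holder",
        "First Example Bank": "Bank A",
        "4111 1111 1111 1111": "my main credit card",
    }

    def to_generic(text: str) -> str:
        """Swap real details for generic labels before sharing text with a chatbot."""
        for real, label in PLACEHOLDERS.items():
            text = text.replace(real, label)
        return text

    question = "Should Jane Q. Doe move her First Example Bank savings into a CD?"
    print(to_generic(question))
    # -> Should the account holder move her Bank A savings into a CD?

Because the mapping never leaves your machine, the chatbot sees only generic labels, yet you can still read its answer in context; the real names, balances, and institutions stay private.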

Sources

  • Washington Post, “Don’t tell your AI chatbot these 5 things to keep your money safe,” April 25, 2026.
  • BBC, “I hacked ChatGPT and Google’s AI – and it only took 20 minutes,” February 18, 2026.
  • NerdWallet, “Should You Use AI for Personal Finance? What to Consider and What to Avoid,” February 27, 2026.
  • National Council on Aging, “Top 5 Financial Scams Targeting Older Adults and How to Avoid Them,” March 17, 2026.