Five Things You Should Never Tell an AI Chatbot to Keep Your Money Safe

AI chatbots like ChatGPT, Google Gemini, and Microsoft Copilot have become go-to tools for quick answers, help with writing, and even personal finance advice. They can simplify tasks, from drafting emails to summarizing investment options. But that convenience comes with a real privacy trade-off: the conversations you have with these bots are often stored, sometimes reviewed by human trainers, and could be exposed in a data breach. Knowing what not to share is the first step toward protecting your money and identity.

What happened

In April 2026, The Washington Post published a column warning users about five specific types of information they should never disclose to an AI chatbot. The column cited growing concerns over how chatbot providers store and use conversation data, as well as several high-profile incidents, including a BBC report from February 2026 in which a hacker managed to extract sensitive data from ChatGPT in just 20 minutes. The National Council on Aging has also highlighted how fraudsters use AI-generated content to target older adults, and NerdWallet's February 2026 guide cautioned that while AI can help with financial planning, it should never be trusted with raw personal data.

Why it matters

Chatbot conversations are not private in the way a one‑on‑one conversation with a human is. Providers can retain logs for months, use them to improve models, and may be compelled to hand them over to law enforcement. If a data leak occurs — as has happened with other cloud services — your full financial details could end up in the wrong hands. Beyond breaches, there’s also the risk that someone with access to your account could read past chats. Even if you trust the company today, you’re effectively sharing sensitive information with a system that can be exploited later.

What readers can do

Here are five categories of information you should never reveal to an AI chatbot, along with practical steps you can take to stay safer.

1. Passwords, PINs, and security codes
No chatbot should ever know your login credentials. If you need a strong password, don't ask the bot to generate one you'll actually use; create it locally with a password manager or a short script (see the sketch after this list). Never paste an existing password into a chat.

2. Full credit card, bank, or account numbers
Even if you’re seeking advice on budgeting or a specific transaction, use generic numbers (e.g., “a $500 purchase”) rather than your real card details. The same goes for routing and account numbers.

3. Social Security Numbers, Tax IDs, and government ID numbers
These are prime targets for identity theft. No legitimate financial planning tool needs your Social Security Number inside a chatbot conversation, so anonymize any scenario you describe (e.g., "a taxpayer filing jointly" rather than your actual identifiers).

4. Answers to common security questions
“Mother’s maiden name,” “first pet’s name,” “birthplace,” “street you grew up on” — these are classic verification questions. If you type them into a chat, anyone who gains access to that log can impersonate you.

5. Full address, birthdate, and driver's license number
These details can be combined to steal your identity. If you need location‑specific advice, give a region (e.g., “Pacific Northwest”) rather than your exact address.
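
To make rule 1 concrete: rather than asking a chatbot for a password, you can generate one entirely on your own machine, so it never appears in any chat log. Below is a minimal Python sketch using the standard-library secrets module; the 20-character length and the character set are illustrative choices, not requirements.

    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        # Build the password from letters, digits, and punctuation using a
        # cryptographically secure random source (the secrets module).
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # The result is printed locally and never sent to any chatbot or server.
    print(generate_password())

Because the password is created and shown only on your own computer, there is nothing for a chatbot provider to store, leak, or train on.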

Beyond these five rules, there are general habits to adopt:

  • Use dummy data when seeking hypothetical advice, and scrub anything sensitive before pasting real text (see the sketch after this list).
  • Clear your chat history periodically (most services allow this).
  • Turn off conversation history saving if you’re discussing anything sensitive — though even that may not guarantee deletion.
  • Never assume confidentiality; treat every conversation as if it could be read back.
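
One way to build the dummy-data habit from the list above: run a quick redaction pass over any real text before pasting it into a chat. The Python sketch below is a rough illustration under assumed patterns (US-style SSNs, 13-16 digit card numbers, 8-12 digit account numbers); a real PII scrubber would need far broader coverage, so treat this as a starting point, not a guarantee.

    import re

    # Illustrative patterns only; real redaction tools cover many more formats.
    PATTERNS = {
        "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # 13-16 digit card numbers
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US Social Security Numbers
        "account": re.compile(r"\b\d{8,12}\b"),         # account-length digit runs
    }

    def redact(text: str) -> str:
        # Replace anything that looks like a sensitive number with a placeholder.
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
        return text

    print(redact("My card 4111 1111 1111 1111 was charged; SSN is 123-45-6789."))
    # Prints: My card [CARD REDACTED] was charged; SSN is [SSN REDACTED].

Even with a pass like this, the safest habit is still the first one: describe scenarios with made-up numbers so there is nothing real to redact.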

Sources

  • The Washington Post, “Column: Don’t tell your AI chatbot these 5 things to keep your money safe,” April 2026.
  • BBC, “I hacked ChatGPT and Google’s AI – and it only took 20 minutes,” February 2026.
  • National Council on Aging, “Top 5 Financial Scams Targeting Older Adults and How to Avoid Them,” March 2026.
  • NerdWallet, “Should You Use AI for Personal Finance? What to Consider and What to Avoid,” February 2026.