The Five Details You Should Never Share With Your AI Chatbot

AI chatbots like ChatGPT, Gemini, and Copilot have become everyday tools for answering questions, summarizing documents, and even helping with personal finance. They are convenient, fast, and often free. But that convenience comes with a trade-off many users overlook: every prompt you type may be stored, reviewed by human moderators, or used to train future versions of the model. And if that prompt contains sensitive financial information, the consequences can be costly.

Recent reports have underscored the risks. A BBC investigation showed that researchers were able to extract personal data from ChatGPT and Google’s Gemini in roughly 20 minutes using simple prompt engineering. The Washington Post’s personal finance columnist recently laid out five categories of information you should never feed an AI chatbot if you want to keep your money safe. Here is a practical breakdown of those risks — and what you can do instead.

What Happened

The Washington Post column, published in April 2026, highlighted a growing concern among cybersecurity experts: users are treating chatbots like trusted financial advisors while forgetting that these services are not designed for confidential data handling. The column’s five prohibited categories were Social Security numbers, bank account numbers, passwords, full credit card numbers, and answers to common security questions.

This warning was reinforced by other recent research. NerdWallet published guidance on using AI for personal finance, emphasizing that chatbots lack end-to-end encryption and that private information absorbed into training data can resurface in a model’s output. Meanwhile, the BBC demonstrated that even basic hacking attempts could retrieve chat histories containing sensitive details. These are not theoretical risks: they have been demonstrated in controlled tests and, in some cases, in real-world data breaches.

Why It Matters

The core issue is that most mainstream chatbots store user conversations. These logs can be accessed by company employees for quality assurance, retained for model retraining, or exposed in a breach. Unlike a bank’s secure messaging system or a password manager, consumer chatbots are not built with financial privacy in mind.

Consider what happens if you ask a chatbot to help you “track my savings” and provide your bank account number. That number is now stored on company servers, potentially forever. If the service suffers a data leak — and many have — the number becomes part of a public dataset. Similarly, sharing your mother’s maiden name or your first pet’s name to “test” whether a chatbot can guess security questions is effectively handing over the keys to accounts that rely on those answers.

The stakes are especially high for older adults, who are frequent targets of financial scams. The National Council on Aging reported that scams tailored with AI and harvested personal information are on the rise. When you supply a chatbot with enough context about your finances, you are essentially feeding it the same data that scammers could use to impersonate you.

What Readers Can Do

You do not need to stop using AI chatbots. You just need to set clear boundaries on what you share. Here are five specific rules:

  1. Never share government-issued IDs or numbers. Do not type your Social Security number, driver’s license number, or passport number. Even if you tell the chatbot to “forget” it, the prompt remains in your history.

  2. Never share full account numbers. Avoid pasting bank account numbers, credit card numbers, or investment account numbers. If you need help categorizing a transaction, use only the last four digits, and even then consider whether it is necessary (see the sketch after this list).

  3. Never share passwords or PINs. No legitimate financial tool will ask you to enter a password into a chatbot. Treat any chatbot request for a password as a red flag.

  4. Never share security question answers. Do not provide your mother’s maiden name, the street you grew up on, or any other common security question response. Once shared, that information is out of your control.

  5. Never share your full financial picture in one prompt. Avoid assembling a complete view of your income, debts, accounts, and goals in a single conversation. If the chat history is exposed, the attacker gets your entire financial profile.
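For readers comfortable with a little code, here is a minimal sketch of the masking idea in rule 2, written in Python. It scrubs text before you paste it anywhere: US-formatted Social Security numbers are removed outright, and long digit runs that look like account or card numbers are reduced to their last four digits. The patterns and function names are illustrative assumptions, not a vetted safeguard, and simple regular expressions will miss many formats.

    import re

    # Illustrative patterns only; assumptions for this sketch, not an exhaustive filter.
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")      # US-formatted SSNs, e.g. 123-45-6789
    ACCOUNT_PATTERN = re.compile(r"\b(?:\d[ -]?){8,19}\b")  # long digit runs: most account/card numbers

    def mask_account(match: re.Match) -> str:
        """Reduce an account-like number to its last four digits (rule 2)."""
        digits = re.sub(r"\D", "", match.group())
        return "****" + digits[-4:]

    def scrub(text: str) -> str:
        """Redact SSNs outright and mask account-like numbers before sharing text."""
        text = SSN_PATTERN.sub("[SSN REDACTED]", text)
        return ACCOUNT_PATTERN.sub(mask_account, text)

    prompt = ("Help me budget. My SSN is 123-45-6789 and my checking "
              "account 9876 5432 1098 7654 is overdrawn.")
    print(scrub(prompt))
    # Help me budget. My SSN is [SSN REDACTED] and my checking account ****7654 is overdrawn.

A scrubber like this is a seatbelt, not a substitute for judgment; the safest prompt is still one that never contained the sensitive number in the first place.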

Instead, use dedicated financial tools for sensitive tasks. Bank apps, password managers, and encrypted note-taking apps are designed with security in mind. While a chatbot can explain a budgeting concept or summarize a public article, it should never handle your personal data.

It is also worth checking your chatbot’s privacy settings. Most platforms allow you to opt out of having your data used for training. Turning off chat history or enabling “temporary” mode (where available) reduces the permanence of your conversations.

Sources

  • “Don’t tell your AI chatbot these 5 things to keep your money safe,” The Washington Post, April 25, 2026.
  • “I hacked ChatGPT and Google’s AI — and it only took 20 minutes,” BBC, February 18, 2026.
  • “Should You Use AI for Personal Finance? What to Consider and What to Avoid,” NerdWallet, February 27, 2026.
  • “Top 5 Financial Scams Targeting Older Adults and How to Avoid Them,” National Council on Aging, March 17, 2026.