Stop telling your AI chatbot these 5 things – your bank account depends on it
AI chatbots like ChatGPT, Gemini, and Copilot have become everyday tools for drafting emails, summarizing documents, and even getting financial advice. Their convenience is real. But convenience comes with a risk many users overlook: everything you type can be stored, reviewed, and potentially exposed.
The Washington Post recently ran a column outlining five types of information you should never share with an AI chatbot if you want to keep your finances safe. The advice is worth repeating, because the threat isn’t theoretical. Chat history has been accessed by employees of AI companies, leaked in data breaches, and used to build personalized phishing attacks. Here is what you need to know.
What happened
As AI chatbot usage exploded over the past two years, security researchers and journalists began documenting cases where sensitive user data slipped into training logs, was accidentally exposed, or was siphoned by hackers. In one BBC report, a researcher was able to extract personal information from a chatbot’s memory in under 20 minutes. The core problem is that chatbots are designed to remember context – but that memory can be a liability.
Major providers allow you to delete chat history, but deleted data may remain on servers for some time. And unless you explicitly opt out (where possible), your conversations can be used to improve the model, meaning human reviewers or automated systems may read them.
Why it matters
For scammers, a single chatbot conversation can be a goldmine. If you’ve typed your full name, address, bank name, or answers to common security questions, that information can be used to reset passwords, impersonate you to customer service, or craft a convincing email that mentions details only you would know.
A 2025 NerdWallet article on using AI for personal finance warned that sharing detailed financial plans with a chatbot could lead to targeted scams. The more a fraudster knows about your savings goals, investment accounts, or upcoming large purchases, the easier it is to design a believable pitch.
The bottom line: once you submit text to a chatbot, you lose control over where it ends up.
What readers can do: 5 things never to share
Here are the five categories of information the Washington Post column highlighted – and some practical alternatives.
1. Social Security numbers and government ID numbers
Your Social Security number (or its equivalent in other countries) is the master key to your identity. Scammers can use it to file tax returns, open credit cards, or take out loans in your name. Do not type it into any chatbot, even if you think leaving out a digit anonymizes it; a partial number combined with other details in the conversation can often be reconstructed.
What to do instead: If you need help filling out a tax form or understanding a government document, ask the chatbot general questions about the process without entering your actual ID number.
2. Bank account numbers and credit card details
These numbers authorize payments and transfers. If a chatbot stores them and that data later leaks, anyone with access could run up charges or initiate fraudulent transfers.
What to do instead: For financial advice, use a chatbot to understand terms and concepts, but never paste in a bank statement or credit card number. Treat chatbots like public bulletin boards.
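If you regularly paste statement text or form excerpts into a chat window, a small redaction pass can act as a seatbelt for both this category and the previous one. Here is a minimal Python sketch (the patterns and placeholder labels are illustrative, not something the Post column prescribes) that masks SSN-shaped and card-number-shaped strings before text leaves your machine:

```python
import re

# Illustrative patterns only: a regex pass is a backstop, not a guarantee,
# and will not catch every format a real number can take.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")       # e.g. 123-45-6789
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")   # 13-19 digits, optional separators

def redact(text: str) -> str:
    """Replace SSN- and card-number-like substrings with placeholders."""
    text = SSN_PATTERN.sub("[SSN REDACTED]", text)
    return CARD_PATTERN.sub("[CARD REDACTED]", text)

if __name__ == "__main__":
    sample = "My SSN is 123-45-6789 and my card is 4111 1111 1111 1111."
    print(redact(sample))
    # -> My SSN is [SSN REDACTED] and my card is [CARD REDACTED].
```

Run anything you are about to paste through a filter like this first; if a placeholder shows up in the output, that text had no business going to a chatbot.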
3. Answers to common security questions
“What is your mother’s maiden name?” “What was the name of your first pet?” “What street did you grow up on?” – these are the exact questions banks and email providers use to verify your identity. If you provide these answers to a chatbot, you are handing over the keys to your account recovery process.
What to do instead: Make up fake answers for security questions and store them in a password manager. Never tell those fake answers to anyone, including a chatbot.
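One easy way to produce answers with no connection to your real history (a generic illustration using Python's standard library, not a method from the column) is to generate them randomly:

```python
import secrets
import string

def fake_answer(length: int = 16) -> str:
    """Generate a random 'security question' answer.

    Because the answer has no link to your real history, it cannot be
    guessed, scraped from social media, or recovered from a chat log.
    """
    alphabet = string.ascii_lowercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Store the output in your password manager next to the account it belongs to.
print(fake_answer())  # e.g. 'k3x9q0mzp7w2a1rt'
```

A password manager's built-in generator works just as well; the point is that the answer should be random, recorded once, and never typed into a chat.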
4. Personal documents (IDs, tax returns, pay stubs)
Uploading a photo of your driver’s license or a scanned tax form to a chatbot for help with a question might save time, but it also copies every detail – your photo, address, issue date, signature – into a system you don’t control.
What to do instead: Use a secure channel, such as a password-protected PDF or an official government portal, when you need to share official documents. Chatbots are not document-safe zones.
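If you do need to send an official document somewhere, encrypting it first is straightforward. Below is a minimal sketch using the third-party pypdf library (pip install pypdf; AES-256 also requires its optional cryptography dependency). The file names and password are placeholders:

```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("tax_return.pdf")  # placeholder input file
writer = PdfWriter()
for page in reader.pages:
    writer.add_page(page)

# Require a password to open the file, using AES-256 encryption.
writer.encrypt(user_password="choose-a-strong-passphrase", algorithm="AES-256")

with open("tax_return_protected.pdf", "wb") as f:
    writer.write(f)
```

Send the password through a separate channel (a phone call, not the same email thread), so the document stays readable only to its intended recipient.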
5. Detailed financial plans and investment strategies
Describing your entire portfolio, your net worth, or your plans for an inheritance can give scammers exactly the information they need to craft a targeted email offering a “better investment” or warning of a “security breach” at your brokerage. Phishing attacks that include personal details are far more convincing.
What to do instead: Keep high-level financial discussions generic. If you need personalized financial advice, use a human advisor or a reputable service that encrypts your data and has a clear privacy policy.
A final note on safety
No list is complete without acknowledging that chatbots are still useful for many everyday tasks – recipe ideas, travel planning, research summaries. The trick is knowing where the line is. If getting an answer would require typing something you'd be uncomfortable posting on a public forum, don't type it into a chatbot.
And if you have already shared sensitive information? Go into your chatbot’s settings and delete your conversation history. Then change your security questions and passwords as a precaution. It’s not a perfect fix, but it reduces the window of exposure.
Sources
- Washington Post column: “Don’t tell your AI chatbot these 5 things to keep your money safe” (April 2026)
- BBC: “I hacked ChatGPT and Google’s AI – and it only took 20 minutes” (February 2026)
- NerdWallet: “Should You Use AI for Personal Finance? What to Consider and What to Avoid” (February 2026)