Five Things You Should Never Tell an AI Chatbot to Protect Your Finances
AI chatbots like ChatGPT, Google Bard, and Microsoft Copilot are becoming everyday tools. People use them to draft emails, summarize articles, and even brainstorm financial decisions. But as their popularity grows, so do the risks. A recent column in The Washington Post warned users about sharing sensitive financial details with chatbots, and other outlets such as NerdWallet and the National Council on Aging have echoed similar cautions. If you’re not careful, a casual conversation with an AI assistant could put your bank account or identity in danger.
What happened
The concern isn’t hypothetical. In early 2026, a BBC journalist demonstrated how easily AI chatbots can be tricked. By using a technique called prompt injection, they extracted information that the model was not supposed to reveal. Security researchers have shown that conversation logs—which many chatbots store by default—can be accessed in a data breach or by unauthorized parties. The companies behind these tools often retain your inputs to improve their models, meaning anything you type may be saved and reviewed.
NerdWallet’s cautionary piece “Should You Use AI for Personal Finance?” noted that AI models are not designed to keep secrets. They cannot promise confidentiality the way a bank or a doctor can. And the National Council on Aging listed “phishing and spoofing” as top scams targeting older adults—exactly the kind of attack that can be launched with personal details gleaned from chatbot logs.
Why it matters
Your financial safety depends on keeping certain information private. Chatbots do not have built-in encryption or privacy protections that rival what you would expect from a financial institution. If your account recovery questions, bank account numbers, or Social Security number end up in a chatbot’s training data, there is no way to fully delete them. Scammers can also use location details or daily routines you mention to craft convincing social engineering attacks.
The core issue is that many people treat AI chatbots like a trusted assistant, forgetting that the assistant is essentially a public-facing database that learns from everything you share.
What readers can do
Here are five categories of information you should never share with an AI chatbot—plus a few general practices to stay safe.
1. Full account numbers and passwords
Never type your bank account number, credit card number, or any password into a chatbot. If that data is exposed in a breach, criminals can drain your accounts. Even if the model itself is secure, your inputs may be visible to company employees or used to train future versions. A rule of thumb: if you would not put it on social media, do not tell it to a chatbot.
2. Security question answers
Many financial and email accounts rely on questions like “What is your mother’s maiden name?” or “What was your first pet’s name?” for password resets. If you answer those questions in a chatbot, you are handing over the keys to your account. Scammers can use the same details to impersonate you to customer service.
3. Sensitive documents
Uploading a tax return, bank statement, or ID photo to a chatbot is risky. Even if the service promises secure handling, there have been cases of AI tools inadvertently storing files without adequate protection. The BBC’s hacking demonstration showed that a malicious actor could potentially extract data from a model’s memory. Keep financial documents off chat platforms.
4. Detailed location and routine
Telling a chatbot your home address, work schedule, or travel plans may seem harmless, but that information can be used for targeted scams or physical theft. If a scammer knows you are on vacation, they can pose as a utility company or bank and ask for “urgent” payments with more credibility. Keep location details out of your conversations.
5. Financial decisions and transactions
Avoid asking a chatbot to transfer money, trade stocks, or provide specific investment advice. AI models can hallucinate—generate plausible-sounding but incorrect information—and they have no legal responsibility for the outcomes. NerdWallet’s article points out that while chatbots can help with general budgeting tips, they should never be relied upon for specific financial actions. If you need to execute a transaction, use your bank’s official app or website.
General best practices
- Delete chat history regularly, or use the incognito mode offered by some services.
- Check your chatbot’s privacy settings and turn off data sharing for model training if possible.
- Use dummy data for any non-essential information. For example, if a chatbot asks for your location for a weather query, give a generic city rather than your exact address.
- Never connect a chatbot to your email or bank account via plugins or third-party extensions unless you fully trust the integration and have verified its security.
Sources
This article draws on reporting from The Washington Post (April 2026), NerdWallet’s consumer finance guidance, the National Council on Aging’s scam prevention resources, and the BBC’s demonstration of AI vulnerabilities. All cited information is publicly available as of the time of writing.
Staying safe with AI is not about avoiding the technology—it’s about using it with clear boundaries. Treat chatbots like a helpful stranger, not a confidant, and your finances will be better off.