5 Things You Should Never Tell an AI Chatbot to Protect Your Finances

AI chatbots like ChatGPT, Google Gemini, and Microsoft Copilot have become part of daily life for many people. They help draft emails, summarize documents, and answer questions. But sharing too much with a chatbot can put your money at risk. Here’s what you need to know and what to keep private.

What Happened

A recent Washington Post column highlighted five types of sensitive information you should never share with an AI chatbot, advice grounded in documented, real-world risks. A BBC investigation last February reinforced the point: a hacker with limited skills compromised ChatGPT and Google’s AI in about 20 minutes, showing how quickly such vulnerabilities can be exploited.

Why It Matters

AI chatbot conversations are typically stored on company servers and may be used for training or reviewed by human moderators. If those systems are breached—or if a scammer gains access to your account—anything you typed becomes visible. Financial scams targeting older adults are rising, according to the National Council on Aging, and criminals constantly look for new ways to steal personal data. Even a seemingly harmless conversation can give them clues for social engineering attacks.

A NerdWallet article on using AI for personal finance makes a similar point: chatbots can help with budgeting or planning, but they should not be treated as a secure vault for sensitive numbers or identity details.

What Readers Can Do

The safest approach: treat every chatbot conversation as if it could be read by a stranger. Here are five things never to type into a chatbot prompt.

1. Full bank account or credit card numbers. Even if you trust the service, anything you enter may persist in logs. If the service or your account is compromised, those numbers are exposed. Use the chatbot for general guidance, not actual transactions.

2. Social Security number or other government IDs. This includes driver’s license numbers, passport numbers, and tax IDs. Once out of your control, they can be used for identity theft or to open accounts in your name.

3. Passwords, PINs, or security answers. Chatbots are not password managers. Sharing your “mother’s maiden name” or “first pet’s name” gives fraudsters the exact answers they need to bypass account recovery questions.

4. Specific details about your income or assets. While it may be tempting to ask “Can I afford a $400,000 mortgage on my $80,000 salary?”, the figures you provide could be pieced together with other data about you. Scammers use aggregated financial details to tailor phishing messages that sound convincing. The math behind that kind of question is simple enough to run yourself; see the sketch after this list.

5. Personal information that could be used for social engineering. This includes your address, date of birth, place of employment, travel plans, or family relationships. A common tactic: a scammer learns you’re about to travel and sends a fake “bank alert” asking you to confirm your card number for overseas use.
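
If you are weighing a question like the one in item 4, you can run the arithmetic locally instead of handing real figures to a chatbot. Below is a minimal sketch in Python using the standard amortization formula and the common 28%-of-gross-income rule of thumb; the price, rate, term, and salary are placeholders, not a recommendation.

    # Affordability check you can run locally; every figure is a placeholder.
    def monthly_mortgage_payment(principal, annual_rate, years):
        """Standard amortization: M = P*r*(1+r)^n / ((1+r)^n - 1)."""
        r = annual_rate / 12   # monthly interest rate
        n = years * 12         # total number of monthly payments
        return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    payment = monthly_mortgage_payment(400_000, 0.07, 30)  # about $2,661/month
    budget = 80_000 / 12 * 0.28                            # 28% rule: about $1,867/month
    print(f"Estimated payment: ${payment:,.0f}/month")
    print(f"Housing budget (28% rule): ${budget:,.0f}/month")

On these placeholder numbers the estimated payment (about $2,661 a month) exceeds the rule-of-thumb budget (about $1,867 a month), which is exactly the kind of general answer you can get without ever revealing your real salary.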

What to Do If You’ve Already Shared Sensitive Info

If you realize you’ve typed something you shouldn’t have, take immediate steps:

  • Delete the conversation from the chatbot’s history if the platform allows it. Check the settings for a delete or clear history option.
  • Monitor your financial accounts for unusual activity over the following weeks. Set up transaction alerts.
  • Change passwords on important accounts, especially if you shared security answers. Enable two-factor authentication wherever possible.
  • Place a fraud alert on your credit file if you shared a Social Security number or other high-risk ID. Contact one of the three major credit bureaus (Equifax, Experian, or TransUnion); the bureau you contact is required to notify the other two.

Best Practices for Using AI Chatbots Safely

  • Use chatbots only for general, non-identifying questions. For example, “How do I request a chargeback?” instead of “Tell me how to dispute the $47.50 charge on my Capital One card ending in 1234.”
  • Never paste screenshots of account statements, emails, or tax documents into a chatbot. Even if you obscure some numbers, the file can carry embedded metadata and the remaining visible text is still readable; see the short sketch after this list.
  • Disable chat history saving if the platform offers that option. Some services let you use a “temporary” or “incognito” mode.
  • Treat chatbot advice as a starting point, not a final answer. For financial decisions, verify with official sources or a human advisor.
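
To make the metadata point concrete: image files can carry embedded EXIF data, such as device details and timestamps, that survives casual redaction. Below is a minimal sketch using the Pillow imaging library (pip install Pillow) to inspect an image and save a pixels-only copy; the filenames are placeholders.

    # Inspect and strip embedded metadata from an image using Pillow.
    from PIL import Image

    img = Image.open("statement_screenshot.png")  # placeholder filename
    print(dict(img.getexif()))                    # show any embedded EXIF tags

    # Copy only the pixel data into a fresh image, dropping metadata.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save("clean_copy.png")

Stripping metadata reduces one risk, but the safer habit is still not to paste sensitive documents into a chatbot at all.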

Sources

  • Washington Post column “Don’t tell your AI chatbot these 5 things to keep your money safe” (April 2026)
  • BBC article “I hacked ChatGPT and Google’s AI – and it only took 20 minutes” (February 2026)
  • National Council on Aging “Top 5 Financial Scams Targeting Older Adults and How to Avoid Them” (March 2026)
  • NerdWallet “Should You Use AI for Personal Finance? What to Consider and What to Avoid” (February 2026)