Things You Should NEVER Tell ChatGPT

AI chatbots like ChatGPT, Gemini, and Claude have become indispensable tools for millions. They can draft emails, brainstorm ideas, write code, and even help you plan dinner.
But in our eagerness to leverage their power, it’s easy to forget that these aren’t just sophisticated search engines; they’re conversational AI with implications for your privacy and security.
Think of it this way: you wouldn’t shout your social security number across a crowded room. Similarly, what you input into an AI chatbot isn’t always as private as you assume.
While developers like OpenAI, Google, and Anthropic have robust security measures, the very nature of how these models learn means your data could be used in ways you might not expect.

So, before you hit “send” on that next prompt, take a pause. Here are 8 categories of information you should NEVER tell ChatGPT or any other AI chatbot.
1. Your Sensitive Personal Identifiable Information (PII)
This is the golden rule. Never, ever input your Social Security Number (SSN), credit card details, bank account numbers, passport information, driver’s license number, or any other government ID.
While it might seem convenient to ask an AI to help you draft a document that includes these, it’s an unnecessary risk. This data is the primary target for identity theft.
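If you do route text through an AI programmatically, a cheap safety net is to scrub obvious PII patterns before a prompt ever leaves your machine. Here’s a minimal sketch in Python; the regex patterns and the redact_pii helper are illustrative assumptions, not a complete PII detector.

```python
import re

# Illustrative patterns only -- real PII detection is much harder
# than a few regexes. Treat this as a seatbelt, not a guarantee.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[-. ]?){13,16}\b"),
    "PHONE": re.compile(r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "My SSN is 123-45-6789 and my card is 4111 1111 1111 1111."
print(redact_pii(prompt))
# My SSN is [REDACTED SSN] and my card is [REDACTED CREDIT_CARD].
```

A filter like this catches only the most obvious formats. The safest move is still to keep documents containing this data out of prompts entirely.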
2. Confidential Company or Client Data
In today’s competitive landscape, intellectual property and client confidentiality are paramount. Do not paste proprietary code, unreleased product designs, internal strategy documents, sales figures, or any sensitive client information into a public chatbot. Many consumer-tier terms of service allow submitted prompts to be used to train future models.
Imagine your company’s secret sauce ending up in a public AI’s knowledge base! If you must use AI for work-related tasks, ensure your company has approved a secure, enterprise-grade AI solution.
3. Medical or Health Information
While AI can be a fantastic research tool for understanding symptoms or medical conditions, it is absolutely not a substitute for professional medical advice. More importantly, never share your personal health history, diagnoses, medication lists, or any other protected health information (PHI).
This data is highly sensitive, and in clinical settings it’s protected by laws like HIPAA. A general-purpose chatbot sits outside those protections, so anything you paste in loses them, and exposure could have serious repercussions.
4. Legal Case Details or Attorney-Client Privileged Information

Just as with medical data, legal matters demand absolute confidentiality. Do not upload case briefs, client communications, contract drafts containing sensitive terms, or any information that falls under attorney-client privilege.
The risk of waiving that privilege or exposing sensitive legal strategy is far too high. AI is a tool, not a confidant for legal battles.
5. Passwords or Login Credentials
This might seem obvious, but in moments of desperation (e.g., “Help me generate a strong password and remember it!”), you might be tempted. Don’t. AI chatbots are not secure password managers.
Storing or generating passwords with them, even temporarily, is a massive security vulnerability. Use a dedicated, encrypted password manager instead.
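If all you actually need is a strong password, you don’t need an AI at all. Here’s a minimal sketch using Python’s built-in secrets module, which is designed for cryptographic randomness; the password is generated locally and never leaves your machine.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password locally using cryptographically secure randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'vR#8q}Zm...' -- different on every run
```

Generate it locally like this, then store it in a dedicated password manager rather than a chat history.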
6. Personal Financial Information (Beyond PII)
While PII covers account numbers, this extends to detailed investment portfolios, specific transaction histories, tax documents, or any comprehensive financial planning details.
Asking an AI to analyze your tax return (even if you redact PII) still exposes a vast amount of sensitive financial data that could be misused if accessed by malicious actors or inadvertently used for model training.
7. Any Information You Wouldn’t Want Publicly Attributed to You
This is a broad but crucial category. If you wouldn’t feel comfortable seeing a piece of information tied to your name on the front page of a newspaper, do not put it into an AI chatbot.
This includes deeply personal opinions, potentially damaging criticisms of individuals, or details of private conversations.
Remember, what goes into the AI might, in some form, come out later or be used to shape the AI’s future responses.
8. Illicit or Harmful Content
This should go without saying, but never use an AI chatbot to generate, discuss, or plan anything illegal, unethical, or harmful. This includes prompts related to illegal activities, hate speech, self-harm, or child exploitation.
These models have guardrails in place to prevent such outputs, but attempting to circumvent them can have legal consequences and contributes to a less safe digital environment.
The Bottom Line: Assume Limited Privacy
The general rule of thumb when interacting with public AI chatbots is to assume anything you type might not be entirely private. Developers are constantly working to enhance security and privacy, but no system is foolproof.
For truly sensitive tasks, either skip AI entirely or use a solution with explicit, robust data-privacy agreements and enterprise-level security protocols that guarantee your data is not used for model training or exposed to third parties.
AI is an incredible technological leap, but like any powerful tool, it demands responsible use. Be smart, be safe, and protect your digital footprint.