
How to Use AI Tools Safely: A Practical Guide to Protecting Your Data

A step-by-step guide to using ChatGPT, Claude, Copilot and other AI tools without compromising your sensitive data. Privacy settings, best practices, and alternatives.

AI tools like ChatGPT, Claude, and GitHub Copilot have become essential for productivity. But every time you interact with these tools, you're potentially sharing data with their servers — data that could be stored, used for training, or even exposed in a breach.

This guide provides practical, actionable steps to use AI tools while protecting your sensitive information.

The Golden Rule: Assume Everything Is Public

Before we dive into specific settings and tools, internalize this principle:

Treat every prompt you send to an AI tool as if it could become public.

This means:

  • It could be read by employees of the AI company
  • It could appear in a data breach
  • It could influence responses to other users
  • It could be subpoenaed in legal proceedings

If you wouldn't post it publicly, don't paste it into a chatbot.

Step 1: Configure Privacy Settings

ChatGPT (OpenAI)

Disable training on your data:

  1. Go to Settings → Data Controls
  2. Turn OFF "Improve the model for everyone"
  3. Your conversations will no longer be used to train future models

Use Temporary Chats:

  • Click the toggle for "Temporary Chat" before sensitive conversations
  • These chats aren't saved to your history and aren't used for training

API vs. Consumer:

  • OpenAI doesn't train on API usage by default
  • Consider using the API directly for sensitive applications

Claude (Anthropic)

Paid plans:

  • Claude Pro/Team/Enterprise conversations are NOT used for training by default
  • Review Anthropic's data usage policy for your specific plan

Free tier:

  • Conversations may be used for training
  • Use Claude's temporary features when available

Google Gemini

Turn off activity saving:

  1. Go to Gemini Apps Activity
  2. Turn OFF activity saving
  3. Set auto-delete to the shortest period

Microsoft Copilot

Enterprise users:

  • Copilot for Microsoft 365 has stronger data protections
  • Consumer Copilot has more permissive data policies
  • Use your organization's Copilot instance for work data

Step 2: Understand What NOT to Share

Never paste into AI chatbots:

| Category | Examples | Risk |
| --- | --- | --- |
| Credentials | Passwords, API keys, tokens | Direct account compromise |
| Personal info | SSN, credit cards, medical records | Identity theft, HIPAA violations |
| Company secrets | Source code, customer data, financials | Trade secret loss, compliance breach |
| Private communications | Emails, DMs you're helping to write | Privacy violations |

Red flags in your prompts:

  • Any string that starts with sk-, AKIA, ghp_, etc. (likely API keys)
  • Anything with @company.com email domains
  • Database connection strings
  • Real names with personal details
  • Unredacted screenshots
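A quick pre-send scan can catch these red flags automatically. A minimal Python sketch (the patterns mirror the prefixes listed above but are illustrative, not exhaustive):

```python
import re

# Illustrative patterns for the red flags above -- not exhaustive.
RED_FLAGS = {
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9_-]{16,}"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "connection_string": re.compile(r"\b\w+://\w+:[^@\s]+@"),  # user:pass@host
}

def find_red_flags(prompt: str) -> list[str]:
    """Return the names of credential-like patterns found in a prompt."""
    return sorted(name for name, pat in RED_FLAGS.items() if pat.search(prompt))
```

Run it on a prompt before pasting. An empty result doesn't prove the text is safe; it only means none of these specific patterns matched.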

Step 3: Redact Before You Share

When you need AI help with code or documents containing sensitive info:

Replace real values with placeholders:

# Instead of:
api_key = "sk-proj-abc123xyz789"
db_password = "MyR3alP@ssword!"

# Use:
api_key = "YOUR_OPENAI_API_KEY"
db_password = "YOUR_DATABASE_PASSWORD"

Anonymize personal information:

# Instead of:
"John Smith from Acme Corp (john.smith@acmecorp.com) requested..."

# Use:
"User A from Company X (userA@example.com) requested..."

Generalize before asking:

Instead of: "Why is my AWS key AKIAIOSFODNN7EXAMPLE not working?"

Ask: "What are common reasons an AWS access key might stop working?"
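The substitutions above can also be scripted. A rough sketch that automates the placeholder swaps (patterns and placeholders are illustrative; extend the list for the secrets your team actually handles):

```python
import re

# Each pair: (credential-like pattern, safe placeholder).
REDACTIONS = [
    (re.compile(r"\bsk-[A-Za-z0-9_-]{16,}"), "YOUR_OPENAI_API_KEY"),
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "YOUR_AWS_ACCESS_KEY"),
    (re.compile(r"\bghp_[A-Za-z0-9]{36}\b"), "YOUR_GITHUB_TOKEN"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "user@example.com"),
]

def redact(text: str) -> str:
    """Replace credential-like strings with safe placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```

Always eyeball the output before sending; regexes miss things that a human reviewer would catch.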

Step 4: Choose the Right Tool for Sensitivity Level

Low sensitivity (public info, generic questions):

  • Consumer AI tools are fine
  • ChatGPT, Claude, Gemini free tiers
  • No special precautions needed

Medium sensitivity (internal processes, non-secret code):

  • Enable privacy settings
  • Use temporary/incognito modes
  • Redact specific details

High sensitivity (credentials, PII, trade secrets):

  • Use Enterprise tiers with DPAs
  • Consider local/self-hosted models
  • Or don't use AI at all for this data

Enterprise AI Options:

| Service | Key Protection | Compliance |
| --- | --- | --- |
| ChatGPT Enterprise | No training on data, SOC 2 | GDPR, HIPAA-ready |
| Claude Enterprise | DPA available, no training | SOC 2, GDPR |
| Azure OpenAI | Data stays in your Azure tenant | Full enterprise compliance |
| AWS Bedrock | Your VPC, your data | Full enterprise compliance |

Step 5: Handle Credentials Properly

Never share credentials through AI tools or regular messaging.

When you need to share sensitive credentials with colleagues:

Bad practices:

  • Pasting in Slack/iMessage/WhatsApp (messages are stored)
  • Putting in shared documents (persistent access)
  • Emailing (often unencrypted, searchable)
  • Pasting in AI chatbots (potentially used for training)

Better practices:

  • Use a password manager's sharing feature
  • Use your organization's secret management tool
  • Share through encrypted, self-destructing channels

Using Secure, Expiring Links

Services like LOCK.PUB allow you to:

  1. Create a password-protected note
  2. Set an expiration time (1 hour, 24 hours, etc.)
  3. Make it self-destruct after being viewed once
  4. Share the link through one channel and the password through another

Example workflow:

  • You need to share a database password with a new team member
  • Create a secure note with the password
  • Set it to expire in 1 hour and delete after viewing
  • Send the link via email, the password via a different channel
  • The credential can't be retrieved again after they view it

This is far safer than putting the credential in an email, chat message, or AI prompt.
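The split-channel principle (ciphertext through one channel, key through another) can be illustrated with a toy one-time pad: neither half alone reveals anything. This is a sketch of the idea only; real services like LOCK.PUB use proper authenticated encryption:

```python
import secrets

def split_secret(plaintext: str) -> tuple[bytes, bytes]:
    """Split a secret into two shares: send the ciphertext through one
    channel and the key through another. Either half alone is useless."""
    data = plaintext.encode()
    key = secrets.token_bytes(len(data))           # random, used exactly once
    ciphertext = bytes(a ^ b for a, b in zip(data, key))
    return ciphertext, key

def recombine(ciphertext: bytes, key: bytes) -> str:
    """The recipient combines both shares to recover the secret."""
    return bytes(a ^ b for a, b in zip(ciphertext, key)).decode()
```

An attacker who intercepts only one channel learns nothing; intercepting both within the expiry window is a much harder attack.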

Step 6: Consider Local AI Models

For truly sensitive work, consider running AI models locally:

Options for local AI:

Ollama + Open Source Models:

  • Run Llama, Mistral, or other models locally
  • Zero data leaves your machine
  • Good for code review, writing assistance

GPT4All:

  • Desktop app for running local models
  • No internet required
  • Suitable for offline environments

LocalAI:

  • OpenAI API-compatible local server
  • Can drop into existing workflows
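Because these servers speak the OpenAI-compatible API, pointing existing code at localhost is often all it takes. A hedged sketch using only the standard library (the port and model name are assumptions; adjust to your local setup):

```python
import json
import urllib.request

def build_local_chat_request(prompt: str,
                             base_url: str = "http://localhost:8080",  # assumed local port
                             model: str = "llama3"):                   # illustrative model name
    """Build an OpenAI-compatible chat request aimed at a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# To actually send (requires a running local server):
# with urllib.request.urlopen(build_local_chat_request("Review this code")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Nothing in the request leaves your machine, which is the whole point.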

Trade-offs of local models:

  • Less capable than GPT-4 or Claude
  • Requires decent hardware (GPU recommended)
  • No internet access = no external knowledge
  • But: complete privacy

Step 7: Implement Team Policies

If you manage a team, establish clear guidelines:

Approved tools and tiers:

  • Which AI services can be used
  • Which tier (free vs. enterprise) for which data
  • How to get exceptions approved

Data classification:

  • What types of data can be discussed with AI
  • What requires redaction
  • What is completely prohibited

Training requirements:

  • Annual AI security awareness training
  • Updates when new risks emerge
  • Incident response procedures

Audit and monitoring:

  • Log AI tool usage where possible
  • Review for policy compliance
  • Learn from incidents
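Guidelines like these are easier to enforce when encoded so tooling can check them. A hypothetical sketch (the classification names and tool IDs are invented for illustration):

```python
# Hypothetical mapping of data classification -> approved AI tools.
POLICY = {
    "public":   {"tools": {"consumer_chatgpt", "claude_free", "gemini"}, "redact": False},
    "internal": {"tools": {"chatgpt_enterprise", "claude_enterprise"},   "redact": True},
    "secret":   {"tools": {"local_model"},                               "redact": True},
}

def is_allowed(classification: str, tool: str) -> bool:
    """Check whether a tool is approved for data of a given classification."""
    rule = POLICY.get(classification)
    return rule is not None and tool in rule["tools"]
```

A check like this can run in a pre-commit hook or a proxy gate before prompts leave the laptop; unknown classifications deny by default.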

Quick Reference: AI Safety Checklist

Before sending any prompt to an AI tool, ask yourself:

  • Would I be comfortable if this prompt became public?
  • Have I removed all passwords, API keys, and tokens?
  • Have I anonymized personal information (names, emails, IDs)?
  • Have I redacted company-specific details that aren't necessary?
  • Am I using the appropriate tier for this data's sensitivity?
  • Have I enabled privacy settings (training opt-out, temporary chat)?

If you can't check all boxes, stop and redact before proceeding.

The Bottom Line

AI tools are incredibly powerful, but they require thoughtful use. The convenience of instantly getting help from ChatGPT isn't worth a data breach, a compliance violation, or leaked credentials.

Your action items:

  1. Configure privacy settings on all AI tools you use today
  2. Establish a personal rule: no credentials, no PII, no secrets in AI prompts
  3. Use expiring, encrypted channels for sharing sensitive data
  4. Train your team on AI safety best practices
  5. Consider local models for the most sensitive work

The extra 30 seconds of redaction can save you from months of incident response.
