AI Coding Assistants Are Writing Insecure Code: What Developers Need to Know

GitHub Copilot and Cursor AI can introduce security vulnerabilities. Learn about 74 CVEs from AI-generated code in 2026 and how to protect your codebase.

As of March 2026, researchers have identified 74 CVEs (Common Vulnerabilities and Exposures) directly linked to AI-generated code, 35 of them discovered in March alone. Among the individually attributed tools: Claude Code accounts for 27 CVEs, GitHub Copilot for 4, and Devin for 2.

This isn't a hypothetical risk. These are real vulnerabilities in production systems, created by AI coding assistants that developers trusted to write secure code.

The Register's March 2026 headline said it bluntly: "Coding with AI doesn't mean your code is more secure." A Stanford study confirmed that developers using AI assistants actually introduce more security vulnerabilities than those coding without AI help.

The Rise of "Vibe Coding"

There's a new term in software development: "vibe coding." It describes developers who accept AI-generated code with minimal review — clicking "accept" based on whether the code "feels right" rather than carefully analyzing it.

The problem? Security vulnerabilities don't always "feel" wrong. A SQL injection vulnerability looks like normal database code. An insecure deserialization looks like standard object handling. Cross-site scripting can hide in seemingly innocent string manipulation.

When developers accept hundreds of AI suggestions per day, thorough review becomes impossible. The code ships, the vulnerabilities ship with it.

Real Security Risks from AI Code Assistants

1. Vulnerable Code Patterns

AI code assistants are trained on public repositories — including repos full of insecure code. They learn common patterns, not necessarily secure patterns.

Common vulnerabilities AI introduces:

  • SQL Injection: suggests string concatenation instead of parameterized queries
  • XSS: generates code that doesn't sanitize user input
  • Path Traversal: creates file operations without proper validation
  • Insecure Deserialization: suggests deserializing untrusted data
  • Hardcoded Secrets: sometimes includes placeholder credentials that look real
  • Weak Cryptography: uses deprecated algorithms (MD5, SHA-1)
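
To make the SQL injection pattern concrete, here's a minimal sketch using Python's built-in sqlite3 module and a throwaway in-memory database, contrasting the concatenation pattern assistants often suggest with a parameterized query:

```python
import sqlite3

# Throwaway in-memory database for demonstration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable pattern: string concatenation builds the query from raw input,
# so the payload rewrites the WHERE clause to match every row
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(unsafe)  # [('admin',)] -- the attacker sees every role

# Safe pattern: a parameterized query treats the input as a literal value
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # [] -- the payload matches no real user
```

Both queries look almost identical at review time, which is exactly why "vibe coding" lets the first one through.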

2. Your Code Becomes Training Data

On free tiers of most AI coding assistants, your code may be used to train future models:

  • GitHub Copilot Free/Individual: Code snippets used for model improvement (unless you opt out)
  • Cursor AI Free: Similar data collection policies
  • Claude Free Tier: Conversations may be used for training

This means:

  • Your proprietary algorithms could influence code suggestions for competitors
  • Sensitive business logic might appear in other developers' suggestions
  • Trade secrets embedded in code could theoretically be extractable

Enterprise tiers typically offer data protection agreements, but many developers use free tiers without understanding the implications.

3. Credential Exposure

When you use an AI code assistant, you often share context including:

  • Environment variables (sometimes containing API keys)
  • Configuration files
  • Database connection strings
  • Internal API endpoints

Even if you don't paste credentials directly, AI assistants can infer them from context or suggest code patterns that expose them.

Example vulnerability:

# AI might suggest this pattern:
import os
api_key = os.getenv("API_KEY")
print(f"Using key: {api_key}")  # Logs the secret!
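
A safer variant logs only a masked fingerprint of the key, never the raw value. This is a sketch; the masking logic is illustrative rather than taken from any particular library:

```python
import os

# Demo value so the snippet is self-contained; real keys come from the environment
os.environ.setdefault("API_KEY", "sk-demo-1234567890abcdef")
api_key = os.getenv("API_KEY")

# Log a masked fingerprint -- enough to tell which key is in use,
# without ever writing the secret itself to the log
masked = f"{api_key[:4]}...{api_key[-4:]}" if api_key else "<unset>"
print(f"Using key: {masked}")  # e.g. Using key: sk-d...cdef
```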

4. Supply Chain Risks

AI code assistants can suggest:

  • Outdated packages with known vulnerabilities
  • Typosquatted package names (malicious packages with similar names)
  • Dependencies you didn't intend to add
  • Packages that pull in insecure transitive dependencies

A developer asking "how do I parse JSON in Python?" might get a suggestion to install a random package instead of using the built-in json module.
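
The JSON case above needs no third-party dependency at all; the standard library already covers it:

```python
import json

raw = '{"user": "alice", "active": true}'

# Parse and re-serialize with the built-in json module --
# no pip install, no supply chain exposure
data = json.loads(raw)
print(data["user"])                      # alice
print(json.dumps(data, sort_keys=True))  # {"active": true, "user": "alice"}
```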

5. The Friday Afternoon Problem

Gartner made waves in 2026 by suggesting companies should "ban Copilot on Friday afternoons." The reasoning: tired developers at the end of the week are more likely to accept AI suggestions without proper review.

This highlights a broader issue: AI assistants are most dangerous when developers are:

  • Fatigued
  • Under deadline pressure
  • Working on unfamiliar codebases
  • Multitasking

Exactly the times when developers reach for AI help most often.

Recent Research and Findings

Georgia Tech Study (March 2026)

The most comprehensive study to date tracked CVEs specifically linked to AI-generated code:

  • 74 total CVEs traced to AI code assistants
  • Claude Code: 27 CVEs (highest due to its file system access and code execution capabilities)
  • GitHub Copilot: 4 CVEs
  • Devin: 2 CVEs
  • 35 CVEs discovered in March 2026 alone — the rate is accelerating

Stanford Research (2025)

A controlled study found developers using AI assistants were:

  • More likely to write insecure code
  • More confident their code was secure (despite it being less secure)
  • Less likely to consult security documentation

Pillar Security Report (2026)

Security researchers discovered new attack vectors in GitHub Copilot and Cursor AI:

  • Prompt injection through repository files
  • Exfiltration of code context to external servers
  • Manipulation of suggestions through strategically crafted code comments

How to Use AI Code Assistants More Safely

1. Treat AI Suggestions as Untrusted Input

Every suggestion should be:

  • Reviewed line by line
  • Tested for security implications
  • Validated against security best practices

Don't assume AI-generated code is secure because it works.

2. Use Enterprise Tiers for Sensitive Code

If you're working on proprietary code:

  • GitHub Copilot Enterprise: code not used for training; SOC 2 compliant
  • Cursor AI Business: enhanced data protection
  • Claude Enterprise: Data Processing Agreement available

The cost difference is minimal compared to the risk of code leakage.

3. Never Share Credentials with AI

Don't:

  • Paste API keys into prompts
  • Include .env files in context
  • Ask AI to "debug this connection string" with real credentials

Do:

  • Use placeholder values: YOUR_API_KEY_HERE
  • Redact sensitive values before sharing code
  • Keep credentials in separate, AI-excluded files
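
Redaction can even be automated before anything reaches a prompt. Here's a minimal sketch; the redact helper and its key-name list are illustrative, not a complete secret detector:

```python
import re

# Illustrative helper: blank out values of secret-looking keys in
# KEY=value config text before pasting it into an AI prompt.
# The key-name list is deliberately small; real scanners cover far more.
SECRET_KEYS = re.compile(
    r"(?im)^(\s*(?:api_key|secret|password|token)\s*=\s*).+$"
)

def redact(text: str) -> str:
    return SECRET_KEYS.sub(r"\1YOUR_VALUE_HERE", text)

env = "DB_HOST=localhost\nAPI_KEY=sk-live-abc123\nPASSWORD=hunter2\n"
print(redact(env))
# DB_HOST=localhost
# API_KEY=YOUR_VALUE_HERE
# PASSWORD=YOUR_VALUE_HERE
```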

4. Run Security Scanning

Integrate automated security tools that catch what AI misses:

  • SAST tools (Semgrep, SonarQube) for code analysis
  • Dependency scanners (Snyk, Dependabot) for vulnerable packages
  • Secret scanners (GitGuardian, TruffleHog) for leaked credentials

Run these on every commit, especially commits with AI-generated code.
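
One way to enforce that is a pre-commit hook. The sketch below uses local hooks so the exact commands are visible; the semgrep and trufflehog invocations follow their documented pre-commit usage, but verify the flags against each tool's current docs before relying on them:

```yaml
# .pre-commit-config.yaml -- a sketch, not a drop-in config
repos:
  - repo: local
    hooks:
      - id: semgrep
        name: semgrep (SAST on changed code)
        entry: semgrep --config auto --error
        language: system
        pass_filenames: false
      - id: trufflehog
        name: trufflehog (scan staged changes for secrets)
        entry: trufflehog git file://. --since-commit HEAD --fail
        language: system
        pass_filenames: false
```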

5. Create Team Guidelines

Establish clear policies for AI code assistant usage:

  • Which tiers are approved for use
  • What types of code cannot use AI assistance
  • Required review processes for AI-generated code
  • Security training requirements

6. Secure Credential Sharing for Development

When collaborating on projects that involve sensitive credentials:

Don't:

  • Share credentials via Slack, iMessage, or email
  • Commit credentials to repositories (even private ones)
  • Paste credentials into AI chat interfaces

Do:

  • Use password managers for team credential sharing
  • Use secret management tools (HashiCorp Vault, AWS Secrets Manager)
  • Share one-time credentials through encrypted, expiring links

Services like LOCK.PUB let you create password-protected notes that self-destruct after viewing — ideal for sharing database passwords, API keys, or other sensitive credentials with teammates without leaving a permanent trail.

The Path Forward

AI code assistants aren't going away. They're too useful. But the current approach — trusting AI to write secure code — is demonstrably failing.

The solution isn't to abandon AI coding tools. It's to:

  1. Treat AI code like junior developer code — it needs review
  2. Maintain security tools — automated scanning catches AI mistakes
  3. Protect your data — use enterprise tiers, don't share secrets
  4. Stay informed — security risks evolve as AI capabilities expand

The 74 CVEs discovered in early 2026 are just the beginning. As AI code assistants become more powerful and more widely adopted, the attack surface grows. Prepare accordingly.

Learn more: How to Use AI Tools Safely →

Create a secure note for sharing credentials →
