
AI-generated code now accounts for nearly half of all new code pushed to GitHub. The number is projected to reach 60 percent by the end of the year. Vibe coding has moved from experiment to mainstream faster than anyone predicted. But alongside the speed and accessibility, a less visible trend is accelerating: security vulnerabilities in AI-generated code are growing at an alarming rate.

In January 2026, six CVEs were directly attributed to AI-generated code. In February, fifteen. In March, thirty-five. That is not a gradual increase. It is an exponential curve, and it shows no sign of flattening.

This is not a reason to stop using AI coding tools. It is a reason to understand what they produce and where the gaps are. Because the vulnerabilities are not random — they follow patterns, and those patterns are predictable and fixable.

Why AI-Generated Code Has Security Blind Spots

AI coding tools are trained on vast quantities of open-source code. That training data includes tutorials, demos, prototypes, and example projects — code that was written to illustrate concepts, not to run in production. When an AI tool generates a login system, it draws on thousands of authentication implementations. Some of those implementations are production-grade. Many are not.

The result is code that works correctly from a functional perspective but carries security assumptions that belong in a tutorial. Placeholder secrets left in configuration files. Permissive CORS policies that allow any origin. Database queries that concatenate user input instead of using parameterised statements. Logging that captures sensitive data. Error messages that expose internal system details.

None of these are bugs in the traditional sense. The code runs. The features work. The security issues are invisible until someone looks for them — or until an attacker finds them first.

The Five Risks That Keep Appearing

After reviewing hundreds of vibe-coded applications, we see the same categories of vulnerability appearing repeatedly. Understanding these patterns is the first step toward building securely with AI tools.

Risk 1: Exposed Secrets and Credentials

AI tools frequently generate code with hardcoded API keys, database connection strings, and authentication tokens. In training data, these are placeholders — strings like sk-test-1234 or password123. In practice, developers replace these with real credentials and forget to move them to environment variables.

In one recent study of thousands of vibe-coded apps, researchers found more than 400 exposed secrets. Some were test keys. Many were production credentials with full access to payment processors, databases, and cloud infrastructure.

Fix: Use environment variables for all secrets from day one. Add .env to your .gitignore before your first commit. Run a secrets scanner such as gitleaks or trufflehog in your CI pipeline.
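As a minimal sketch, a fail-fast helper turns a missing secret into an immediate, obvious error instead of a silent fallback. The variable name STRIPE_API_KEY below is purely illustrative:

```python
import os

def require_env(name: str) -> str:
    """Read a required secret from the environment, failing fast if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Set here only for demonstration -- in real code this comes from the
# environment (or a .env file that is listed in .gitignore), never from source.
os.environ.setdefault("STRIPE_API_KEY", "sk-test-1234")
stripe_key = require_env("STRIPE_API_KEY")
```

Failing at startup is deliberate: a missing credential should stop the deploy, not surface later as a confusing runtime error.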

Risk 2: Injection Vulnerabilities

SQL injection is one of the oldest and most well-understood vulnerabilities in software. It is also one of the most common in AI-generated code. When an AI tool generates a database query, it often concatenates user input directly into the query string rather than using parameterised queries or an ORM.

The same pattern appears in command injection, where user input is passed to shell commands, and in cross-site scripting (XSS), where user-supplied content is rendered in HTML without sanitisation.

Fix: Always use parameterised queries or an ORM for database access. Never pass user input to shell commands. Sanitise all user-generated content before rendering it in HTML. These are non-negotiable production requirements.
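The difference between the two patterns is a few characters in the code and an entire class of attack in consequence. A sketch using Python's built-in sqlite3 driver, with an illustrative table and payload:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

user_input = "alice@example.com' OR '1'='1"  # a classic injection payload

# The vulnerable pattern AI tools often emit -- string concatenation:
#   query = "SELECT * FROM users WHERE email = '" + user_input + "'"
# With the payload above, that query matches every row in the table.

# The safe pattern: the driver treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT id, email FROM users WHERE email = ?", (user_input,)
).fetchall()
# The payload matches nothing instead of dumping the table.
```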

Risk 3: Broken Authentication and Authorisation

AI-generated authentication code often implements the happy path correctly — users can sign up, log in, and access their data. What it frequently misses are the edge cases that attackers exploit: session tokens that never expire, password reset flows that do not verify identity, API endpoints that check whether a user is authenticated but not whether they are authorised to access that specific resource.

The distinction between authentication (who are you?) and authorisation (what are you allowed to do?) is subtle, and AI tools often conflate the two.

Fix: Use an established authentication library or service rather than generating auth code from scratch. Implement role-based access controls. Test every endpoint with different user roles to verify authorisation is enforced correctly.
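Object-level authorisation can be as simple as an ownership check that runs on every request. A minimal sketch — the User and Document shapes and the role names are illustrative, not a prescribed data model:

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    role: str  # e.g. "user" or "admin"

@dataclass
class Document:
    id: int
    owner_id: int

def can_access(user: User, doc: Document) -> bool:
    """Being authenticated is not enough: the user must own the
    resource, or hold a role that explicitly grants access to it."""
    return user.role == "admin" or doc.owner_id == user.id

alice = User(id=1, role="user")
bob = User(id=2, role="user")
admin = User(id=3, role="admin")
doc = Document(id=10, owner_id=1)
```

The point is the shape of the check: it takes both the user and the specific resource, which is exactly what AI-generated endpoints tend to omit when they stop at "is this user logged in?".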

Risk 4: Misconfigured Infrastructure

When AI tools generate deployment configurations, they optimise for getting the application running. Security headers, CORS restrictions, rate limiting, and TLS configuration are treated as optional extras. The result is applications that are publicly accessible with overly permissive configurations — open CORS policies, missing Content-Security-Policy headers, and APIs with no rate limiting.

Fix: Add security headers to every response. Restrict CORS to your actual domains. Implement rate limiting on all public endpoints. Use TLS everywhere. These are one-time configurations that protect against entire classes of attack.
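In most frameworks this is a few lines of middleware. Stripped of any framework, the logic looks roughly like this — the header values and the allow-listed domain are examples to adapt, not universal recommendations:

```python
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
}

# CORS allow-list: your real domains only, never "*". Hypothetical domain.
ALLOWED_ORIGINS = {"https://app.example.com"}

def apply_security_headers(response_headers: dict, origin=None) -> dict:
    """Attach baseline security headers and a restrictive CORS policy."""
    headers = {**response_headers, **SECURITY_HEADERS}
    # Only echo the origin back when it is explicitly allow-listed;
    # any other origin gets no Access-Control-Allow-Origin header at all.
    if origin in ALLOWED_ORIGINS:
        headers["Access-Control-Allow-Origin"] = origin
    return headers
```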

Risk 5: Excessive Data Exposure

AI-generated APIs frequently return more data than the client needs. A user profile endpoint might return the full database record — including hashed passwords, internal IDs, email verification tokens, and metadata that should never leave the server. This happens because the generated code queries the database and returns the result directly, without selecting specific fields or filtering sensitive data.

One high-profile breach earlier this year exposed 1.5 million API tokens and 35,000 email addresses from a vibe-coded application that returned full database records through its API.

Fix: Define explicit response schemas for every API endpoint. Never return raw database records. Use a serialisation layer that whitelists which fields are included in each response.
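A whitelist serialiser inverts the default: every field is private unless it is explicitly listed, so a new database column stays hidden until someone decides otherwise. A sketch — the field names and sample record are illustrative:

```python
# The only fields this endpoint is allowed to expose.
PUBLIC_USER_FIELDS = ("id", "name", "email")

def serialise_user(record: dict) -> dict:
    """Whitelist serialisation: include only the fields named above.
    Anything not listed -- including future columns -- never leaves the server."""
    return {field: record[field] for field in PUBLIC_USER_FIELDS if field in record}

raw_record = {
    "id": 42,
    "name": "Alice",
    "email": "alice@example.com",
    "password_hash": "$2b$12$notarealhash",   # must never leave the server
    "email_verification_token": "tok_12345",  # must never leave the server
}
```

Schema libraries give you the same guarantee with less boilerplate, but the principle is identical: the response shape is defined by the whitelist, not by whatever the database happens to return.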

A Security Checklist for Vibe-Coded Apps

You do not need a security team to address these risks. Most of the fixes are straightforward and can be implemented in a day. Here is a practical checklist.

  1. Audit your secrets. Search your codebase for hardcoded API keys, passwords, and connection strings. Move them to environment variables. Add a secrets scanner to your CI pipeline to prevent future leaks.
  2. Review your database queries. Check every database interaction for string concatenation with user input. Replace with parameterised queries or an ORM.
  3. Test your authentication. Try accessing resources as different users. Verify that users can only see and modify their own data. Check that sessions expire, password resets require verification, and tokens are rotated.
  4. Add security headers. Implement Content-Security-Policy, X-Frame-Options, X-Content-Type-Options, and Strict-Transport-Security on every response.
  5. Lock down your API responses. Audit every endpoint to ensure it returns only the data the client needs. Remove internal fields, hashed passwords, and system metadata from all responses.
  6. Restrict CORS and rate limit. Set CORS to allow only your actual domains. Add rate limiting to prevent abuse. Both are simple middleware configurations in most frameworks.
  7. Enable dependency scanning. AI tools pull in packages without auditing them. Run npm audit, pip audit, or equivalent for your language. Automate this in CI.
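To make item 1 concrete, the core of a secrets scan is pattern matching over source text. A purpose-built scanner such as gitleaks ships hundreds of tuned rules; this is only a sketch with two naive, illustrative patterns:

```python
import re

# Naive patterns for two common secret shapes -- real scanners use many more.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9-]{16,}"),                  # API-key-style tokens
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),  # inline password assignments
]

def scan_for_secrets(source: str) -> list:
    """Return every substring of the source that matches a known secret pattern."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(source))
    return hits

# A fragment of the kind of code this post warns about:
snippet = 'db_password = "hunter2"\napi_key = "sk-test-12345678901234567890"\n'
```

Run something like this (or, better, a real scanner) in CI so a hardcoded credential fails the build before it ever reaches your repository's history.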

Security Is Not a Phase — It Is a Practice

The biggest misconception about security in vibe-coded applications is that it can be addressed once and forgotten. Security is not a checkbox. It is a continuous practice that needs to be embedded into your development workflow.

Every time you add a feature, you potentially introduce a new attack surface. Every time you update a dependency, you potentially introduce a known vulnerability. Every time you change an API endpoint, you potentially expose data that was previously protected.

This is not unique to AI-generated code — it is true of all software. The difference is that AI-generated code can accumulate these issues faster because the development velocity is higher. When you can build features in hours instead of weeks, the security review cadence needs to match.

The speed of vibe coding is an advantage, but only if your security practices can keep pace. Fast development without security review is just fast exposure.

When to Bring in Help

The checklist above covers the fundamentals. If you are handling sensitive data — user credentials, financial information, health records — or operating in a regulated industry, the stakes are higher. A professional security review can identify vulnerabilities that automated tools miss, test business logic flaws that are unique to your application, and provide a security posture that satisfies compliance requirements.

The cost of a security review is a fraction of the cost of a breach. For early-stage products, a focused review of authentication, authorisation, and data handling is usually sufficient. As you scale, expand to include infrastructure security, penetration testing, and ongoing vulnerability management.

Vibe coding has made building software accessible to more people than ever before. The security knowledge that traditionally came with years of engineering experience now needs to be made equally accessible — through better tooling, better defaults, and expert support when the stakes require it.

Mark Jones
Founder, Diffian

Mark has spent a decade helping product teams ship software safely — from early-stage startups to enterprise engineering organisations. Diffian exists to bring that same rigour to the generation of products being built with AI coding tools.