[Hero image: a large iceberg floating in deep blue water at night, symbolizing hidden technical debt and security risks]

AI coding tools are genuinely impressive. In a matter of hours you can go from a blank screen to a working web application with a database, user authentication, and a polished interface. The speed is intoxicating, and the results are often good enough to demo, pilot, and even charge money for.

But "good enough to demo" and "safe to run in production" are very different things. The gap between them, which we think of as the missing 20%, is where the real cost of skipping professional engineering support tends to live. And that cost is rarely obvious until something goes wrong.

The 80/20 Problem in Practice

Vibe coding tools are optimised for speed and functional correctness. They produce code that works: in the happy path, on the developer's machine, with expected inputs and no adversarial users. They're not optimised for what happens when a user tries to access someone else's data, or when traffic spikes, or when a dependency has a known vulnerability, or when a regulator asks to see your data processing records.

[Fig 1: The Iceberg of Production Readiness. Above the waterline, the visibly complete vibe code (80%): UI, features, logic. Below it, the production-ready 20%: security hardening, CI/CD pipelines, monitoring and alerts, compliance and backups.]

The 80%, the working prototype, is real and valuable. The 20% is not a minor cleanup job. It's the part that determines whether your application is safe to put in front of real users, whether it will still work under load, whether it will stay up when things go wrong, and whether you'll hear about downtime from your monitoring system or from angry users on social media.

"We shipped on a Friday. By Monday morning we had 3,000 users and no idea the app had been leaking session tokens since launch."

Here's a breakdown of what typically gets missed.

Security Gaps

[Fig 2: Common Security Threat Vectors in AI-Built Apps: IDOR attacks, SQL injection, leaked keys, and XSS vulnerabilities, typically met with reactive vulnerability mitigation.]

Exposed API keys and secrets

AI-generated code frequently includes API keys, database connection strings, and other secrets hardcoded directly in the frontend or committed to version control. These are trivially discoverable by automated scanners, competitors, or anyone who opens browser developer tools. Once exposed, a leaked key can mean unauthorised access to your data, unexpected charges, or complete compromise of a third-party service.
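The standard fix is to read secrets from the environment (or a secrets manager) at startup and fail fast if they're missing. A minimal Python sketch; the variable name and value below are illustrative stand-ins:

```python
import os

def require_secret(name):
    """Read a secret from the environment and fail fast if it is
    missing, instead of falling back to a hardcoded default."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"required secret {name!r} is not set")
    return value

# DATABASE_URL is a hypothetical name; the value is a placeholder,
# set here only so the example runs.
os.environ["DATABASE_URL"] = "postgres://app:example@localhost/app"
print(require_secret("DATABASE_URL"))
```

Failing at startup is the point: a missing secret becomes a loud deployment error rather than a silently hardcoded credential sitting in your repository.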

Missing or broken authentication

Vibe-coded authentication is often structurally correct but missing the edge cases that matter: no rate limiting on login endpoints (trivially brute-forced), no account lockout policies, password reset flows that can be exploited to take over arbitrary accounts, or JWT tokens that never expire. Each of these is a predictable attack vector.
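Rate limiting alone closes the brute-force hole, and a sliding-window limiter is only a few lines. This Python sketch keeps state in process memory for illustration; a real deployment would typically use Redis or an API gateway instead:

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Allow at most `limit` login attempts per `window` seconds,
    per account; reject everything past the limit."""
    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(deque)  # username -> attempt timestamps

    def allow(self, username, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts[username]
        while q and now - q[0] > self.window:  # drop attempts outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject this attempt
        q.append(now)
        return True
```

An attacker scripting a password list hits the limit within seconds; a legitimate user who mistypes a password twice never notices it exists.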

Insufficient authorisation checks

This is perhaps the most common and dangerous category. An API endpoint might correctly require a user to be logged in, but fail to verify that the logged-in user is authorised to access the specific resource they're requesting. The result: any authenticated user can read or modify any other user's data by simply changing an ID in the URL. This is called an Insecure Direct Object Reference (IDOR) — and it appears constantly in AI-generated backends.
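The fix is a per-resource ownership check on top of the login check. A minimal sketch against a toy in-memory datastore (the names and record shape are hypothetical):

```python
class Forbidden(Exception):
    """Raised when an authenticated user requests a resource they don't own."""

# Toy datastore: document id -> record.
DOCUMENTS = {
    "doc-1": {"owner": "alice", "body": "alice's notes"},
    "doc-2": {"owner": "bob", "body": "bob's notes"},
}

def get_document(current_user, doc_id):
    """Authentication established who `current_user` is; this adds
    the authorisation check that IDOR-vulnerable endpoints skip."""
    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        raise KeyError(doc_id)
    if doc["owner"] != current_user:  # the per-resource check that gets omitted
        raise Forbidden(f"{current_user} may not access {doc_id}")
    return doc
```

Without that single `owner` comparison, any logged-in user who changes `doc-1` to `doc-2` in the URL reads someone else's data.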

SQL injection and input validation

When AI writes database queries using string concatenation rather than parameterised queries, the application becomes vulnerable to SQL injection, one of the oldest and most well-understood attack classes in existence. Similarly, insufficient input validation can enable cross-site scripting (XSS), allowing attackers to inject malicious code into your application's output.
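The difference is easy to demonstrate with Python's built-in sqlite3 driver: the same malicious input that dumps every row through a concatenated query is treated as inert data by a parameterised one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name):
    # String concatenation: input like "' OR '1'='1" rewrites the query itself.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterised query: the driver treats the input as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row in the table
print(find_user_safe(payload))    # returns nothing
```

Every mainstream database driver supports placeholders like the `?` above; there is no performance or ergonomic reason to concatenate.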

No CI/CD Means No Safety Net

Most vibe-coded applications are deployed manually: copy the files to a server, restart the process, hope nothing breaks. This works fine for a prototype. For a production system, it's a significant liability.

Without a continuous integration and deployment pipeline, you have no automated testing before code reaches production, no ability to roll back instantly if a deployment breaks something, no audit trail of what was deployed and when, and no confidence that the code running in production matches what's in version control.

The practical consequence: bugs that could have been caught automatically reach users. Fixing them requires another manual deployment. If the fix introduces a new bug, you do it again. Each cycle is manual, error-prone, and stressful. Teams in this situation often end up freezing changes because they're afraid of breaking something, meaning the product stops improving.
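The core of a pipeline is simple to state: run every automated check, deploy only if all pass, and keep the previous release one step away. A toy Python sketch of that deploy gate, for intuition only (real pipelines live in CI configuration, not application code):

```python
RELEASES = ["v1.0"]  # deployment history, newest last; v1.0 is live

def deploy(version, checks):
    """Run every (name, check) pair; deploy only if all pass."""
    failures = [name for name, check in checks if not check()]
    if failures:
        # The deploy is blocked; production keeps the last good release.
        raise RuntimeError(f"deploy of {version} blocked by: {failures}")
    RELEASES.append(version)
    return version

def rollback():
    """Instant rollback: re-point production at the previous release."""
    if len(RELEASES) > 1:
        RELEASES.pop()
    return RELEASES[-1]

deploy("v1.1", [("tests pass", lambda: True), ("build succeeds", lambda: True)])
```

The two properties worth noticing are that a failed check stops the bad version from ever reaching users, and that rolling back is a single recorded step rather than a frantic manual redeploy.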

No Monitoring Means Discovering Problems the Wrong Way

Without monitoring, you find out about downtime from users. You find out about performance degradation when customers complain. You find out about errors when a frustrated support email arrives. By the time any of this reaches you, the damage to user experience, trust, and revenue has already been done.

Proper production monitoring includes uptime checks that alert within minutes of a service going down, error rate tracking that surfaces exceptions before they become widespread, performance monitoring that catches latency regressions, log aggregation that lets you diagnose issues after the fact, and alerting that pages the right person immediately. None of this appears automatically in a vibe-coded application.
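Error-rate tracking, to take one of those pieces, isn't hard to reason about. Here's a sketch of a rolling-window monitor that alerts when errors cross a threshold; in practice you'd use a hosted service such as Sentry or Prometheus-based alerting rather than hand-rolled code:

```python
from collections import deque

class ErrorRateMonitor:
    """Track the outcome of the last `window` requests and alert when
    the error rate crosses `threshold`, before users start complaining."""
    def __init__(self, window=100, threshold=0.05):
        self.results = deque(maxlen=window)  # True = success, False = error
        self.threshold = threshold

    def record(self, ok):
        self.results.append(ok)

    @property
    def error_rate(self):
        if not self.results:
            return 0.0
        return self.results.count(False) / len(self.results)

    def should_alert(self):
        return self.error_rate >= self.threshold
```

Wired to a pager, a monitor like this turns "angry users on social media" into "an alert three minutes after the error rate moved".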

Compliance Is Not Optional

For many applications, the compliance dimension is the most immediately business-critical. If your application handles personal data of EU residents, GDPR applies. If you're selling to enterprise customers, many will require SOC 2 or ISO 27001 certification before they'll sign a contract. Healthcare applications face additional regulatory requirements depending on jurisdiction.

These frameworks require documented security controls, data processing records, incident response procedures, access control policies, and regular security reviews. A vibe-coded application has none of these by default. Retrofitting them after the fact is significantly more expensive than building them in from the start.

The cost of non-compliance isn't theoretical. GDPR fines can reach 4% of global annual turnover. More commonly, the cost is a lost enterprise deal, the one that would have changed the trajectory of the business, because you couldn't pass the security questionnaire.

Scaling Challenges

Vibe-coded applications are typically built against a single database, running on a single server, with no consideration for what happens when usage grows. This works fine at small scale and starts to cause problems as traffic increases: database queries that run fine with 100 rows start timing out with 100,000; a single server becomes a single point of failure; no caching means every page load hits the database directly.
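Caching is a good example of a pattern that's cheap to design in early and painful to bolt on later. A minimal read-through cache with a time-to-live, sketched in Python (real deployments would typically put Redis or memcached in this role):

```python
import time

class QueryCache:
    """Read-through cache: serve repeated reads from memory for `ttl`
    seconds so every page load doesn't hit the database directly."""
    def __init__(self, fetch, ttl=30.0):
        self.fetch = fetch   # the real (expensive) query to fall back on
        self.ttl = ttl
        self.store = {}      # key -> (value, expiry time)
        self.hits = 0
        self.misses = 0

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self.store.get(key)
        if entry and entry[1] > now:
            self.hits += 1            # fresh entry: skip the database
            return entry[0]
        self.misses += 1
        value = self.fetch(key)       # miss or expired: query and refill
        self.store[key] = (value, now + self.ttl)
        return value
```

Even a short TTL on hot reads can absorb the bulk of traffic spikes, because the database only sees one query per key per window instead of one per page load.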

These aren't insurmountable problems, but they're much easier to design around from the beginning than to retrofit once you're dealing with live traffic and frustrated users.

The Real Cost Calculation

It's tempting to calculate the cost savings of shipping without engineering support and conclude you've saved money. The real calculation includes: the engineering time to fix a security incident after it happens (typically 10-100x the cost of preventing it), the revenue impact of downtime, the lost enterprise deals that require compliance certification, the technical debt accumulated by a system that was never designed for scale, and the cost of rebuilding when the shortcuts catch up with you.

The engineering investment to take a vibe-coded application from prototype to production-grade is a fraction of these downstream costs. It's also predictable and finite, unlike the cost of a breach, a compliance failure, or a critical system that needs to be rebuilt under pressure.

The right question isn't whether you can afford the engineering. It's whether you can afford not to have it. We've written a step-by-step guide to what production hardening actually involves; it's less mysterious than it sounds.