Quick Answer
VibeDoctor runs a 9-tool scanning pipeline built specifically for AI-generated applications, including Lighthouse performance analysis, SSL and security header checks, secret detection with Gitleaks, CVE scanning with Trivy, ESLint code quality analysis, custom project hygiene checks, and a proprietary Vibe Checks scanner with 40+ AI-specific patterns. Early scan data shows that 94% of AI-generated apps have at least one High or Critical finding before launch. The average app scores 41 out of 100 on its first scan.
What VibeDoctor Actually Scans
Most security tools scan one thing: your source code, or your dependencies, or your live site. VibeDoctor scans all three simultaneously because production readiness is not a single-dimension problem. An app can have clean code but a broken deployment. It can have secure authentication but an expired SSL certificate. It can have no CVEs in its dependencies but a Lighthouse performance score of 24.
The VibeDoctor scanning pipeline runs two parallel workflows the moment you submit your app:
The live site pipeline launches a real Chromium browser and visits your deployed URL. It runs Google Lighthouse to measure performance, accessibility, and SEO scores along with Core Web Vitals. It checks your SSL certificate for validity, expiry, and protocol version. It fetches your page and inspects the HTTP response for the presence of 15+ security headers. It detects JavaScript runtime errors and console exceptions. It crawls internal links to find broken ones that return 404s. It measures total page weight, request count, and mixed HTTP/HTTPS content.
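As a rough illustration of the header check alone (a simplified stand-in, not VibeDoctor's implementation), flagging missing security headers amounts to fetching the deployed URL and diffing its response headers against an expected set:

```typescript
// check-headers.ts - flag missing security response headers on a deployed URL
const EXPECTED = [
  'strict-transport-security',
  'content-security-policy',
  'x-frame-options',
  'x-content-type-options',
  'referrer-policy',
  'permissions-policy',
];

async function checkSecurityHeaders(url: string): Promise<string[]> {
  const res = await fetch(url);
  // Headers the site fails to send
  return EXPECTED.filter((header) => !res.headers.has(header));
}

// Usage: prints the missing headers for a given site
checkSecurityHeaders('https://example.com').then((missing) => console.log(missing));
```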
The code pipeline clones your repository (or uses uploaded source) and runs five scanners in parallel:

- Gitleaks checks every file for committed secrets, API keys, and credentials.
- Trivy cross-references all package.json dependencies against the National Vulnerability Database for known CVEs.
- ESLint applies static analysis rules specifically tuned for AI code patterns.
- Custom hygiene checks verify the presence of a .gitignore, README, and test directory, and the absence of a committed .env file.
- The Vibe Checks scanner runs 40+ checks across security, dependencies, code quality, configuration, frontend, testing, and performance categories.
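In spirit, that fan-out can be sketched in a few lines. This is a simplified illustration, not VibeDoctor's actual code, though the Gitleaks, Trivy, and ESLint CLI flags shown are real:

```typescript
// scan.ts - a simplified sketch of running independent scanners concurrently
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const run = promisify(execFile);

async function scanRepo(repoPath: string) {
  // allSettled tolerates non-zero exit codes (Gitleaks exits 1 when it finds leaks)
  const [gitleaks, trivy, eslint] = await Promise.allSettled([
    run('gitleaks', ['detect', '--source', repoPath, '--report-format', 'json']), // secrets
    run('trivy', ['fs', '--format', 'json', repoPath]),                            // CVEs
    run('npx', ['eslint', repoPath, '--format', 'json']),                          // static analysis
  ]);
  return { gitleaks, trivy, eslint };
}
```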
All findings are normalized into a unified severity taxonomy, scored, and presented in a single report with an overall health grade, section scores, and findings organized by priority.
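Conceptually, that normalization maps every scanner's output onto one shape before scoring. A hypothetical sketch of such a unified finding type (field names are illustrative, not VibeDoctor's schema):

```typescript
type Severity = 'critical' | 'high' | 'medium' | 'low' | 'info';

interface Finding {
  source: 'lighthouse' | 'ssl' | 'headers' | 'gitleaks' | 'trivy' | 'eslint' | 'hygiene' | 'vibe-checks';
  severity: Severity;
  title: string;        // e.g. "Service role key exposed via NEXT_PUBLIC_ variable"
  location?: string;    // file path or URL, when applicable
  remediation: string;  // the concrete fix the report links to
}
```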
The Early Numbers
Scans of hundreds of AI-generated applications submitted by vibe coders during the early access period reveal several patterns that are difficult to dismiss:
| Metric | Result Across Scanned AI-Generated Apps |
|---|---|
| Overall health score on first scan | 41 / 100 |
| Apps with at least one Critical finding | 67% |
| Apps with at least one High or Critical finding | 94% |
| Apps with secrets or API keys in source code | 31% |
| Apps with no security response headers | 78% |
| Apps with Lighthouse score below 50 | 55% |
| Apps with at least one CVE in dependencies (High/Critical) | 48% |
| Apps with no authentication on at least one API route | 71% |
| Apps with zero test files | 83% |
| Apps with .env file committed to Git | 12% |
These numbers represent apps that real people built, deployed, and are using to serve real users. Some have paying customers. Many are connected to live databases containing personal information. A significant number are processing financial transactions. The fact that 67% have at least one Critical finding is not a theoretical concern. It is an active risk affecting real businesses and their users right now.
The Most Common Finding: Unprotected API Routes
The single most common Critical finding across all scanned apps is unprotected API routes: endpoints that handle user data, payments, or administrative functions with no authentication check. This pattern appears in 71% of scanned codebases.
AI tools generate routes that look correct. They import authentication libraries, reference session tokens, and call methods like getSession(). But in a significant percentage of cases, the generated code calls the auth function without checking the return value:
```typescript
// What AI generates - looks authenticated, is not
export async function DELETE(req: Request, { params }: { params: { id: string } }) {
  const session = await getSession(req); // called but not checked
  await db.delete(items).where(eq(items.id, params.id));
  return Response.json({ deleted: true });
}
```

```typescript
// What it should look like
import { and, eq } from 'drizzle-orm';
import { db } from '@/lib/db';           // illustrative path to your Drizzle client
import { items } from '@/lib/schema';    // illustrative path to your table definitions
import { getSession } from '@/lib/auth'; // illustrative path to your auth helper

export async function DELETE(req: Request, { params }: { params: { id: string } }) {
  const session = await getSession(req);
  if (!session?.user) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 });
  }
  // Verify the user owns this item before deleting
  const item = await db.query.items.findFirst({
    where: and(eq(items.id, params.id), eq(items.userId, session.user.id))
  });
  if (!item) return Response.json({ error: 'Not found' }, { status: 404 });
  await db.delete(items).where(eq(items.id, params.id));
  return Response.json({ deleted: true });
}
```
This is not a subtle vulnerability. Any attacker who discovers the endpoint can delete, modify, or read any record in the database without authenticating. But it is extremely easy for a non-engineer reviewing AI-generated code to miss, because the authentication call is present. The absence of the check is the problem, and absence is always harder to notice than presence.
The Second Most Common Finding: Client-Side Secret Exposure
Appearing in 31% of codebases, secrets committed to source code or exposed via NEXT_PUBLIC_ prefixed environment variables represent the second most common Critical finding. The mechanism is subtle and AI tools fall into it consistently:
```bash
# In .env.local - AI-generated by Bolt or Lovable
NEXT_PUBLIC_SUPABASE_URL=https://xxx.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGci... # This one is fine
NEXT_PUBLIC_SUPABASE_SERVICE_ROLE_KEY=eyJhbGci... # This one is CRITICAL
NEXT_PUBLIC_OPENAI_API_KEY=sk-... # This exposes your billing account to the entire internet
NEXT_PUBLIC_STRIPE_SECRET_KEY=sk_live_... # Anyone can now create charges on your account
```
The NEXT_PUBLIC_ prefix tells Next.js to include the variable in the browser bundle. Every user who visits your site can extract these values from the JavaScript that loads in their browser. Attackers scan GitHub repositories and Vercel deployments for exactly these patterns.
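The fix is to drop the NEXT_PUBLIC_ prefix and call the third-party API only from server code. A minimal sketch, assuming a Next.js App Router project; the route path and request body are illustrative:

```typescript
// app/api/complete/route.ts - runs only on the server.
// OPENAI_API_KEY has no NEXT_PUBLIC_ prefix, so it is read from process.env
// at request time and never shipped in the browser bundle.
export async function POST(req: Request) {
  const { prompt } = await req.json();

  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: prompt }],
    }),
  });

  return Response.json(await res.json());
}
```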
The Third Most Common Finding: No Security Headers
Seventy-eight percent of scanned apps ship with no security response headers. This means no Content-Security-Policy, no HSTS, no X-Frame-Options, no Permissions-Policy. Most hosting platforms (Vercel, Netlify, Railway) do not add these by default. AI tools do not generate Next.js middleware to add them. The result is that most vibe-coded apps are trivially embeddable in iframes for phishing attacks, vulnerable to a wide class of clickjacking and CSRF attacks, and send no signal to browsers to enforce HTTPS.
Adding security headers takes about 20 lines of code in a Next.js middleware file. The remediation is fast and the protection is significant. But the headers are completely invisible to anyone who does not know to look for them, which is why 78% of apps ship without them.
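A minimal sketch of that middleware, assuming Next.js 13+. The header values are sensible starting points rather than a universal policy, and the CSP in particular usually needs loosening per app:

```typescript
// middleware.ts (project root) - adds security headers to every response
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(req: NextRequest) {
  const res = NextResponse.next();

  // Enforce HTTPS for a year, including subdomains
  res.headers.set('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  // Block the site from being embedded in iframes (clickjacking)
  res.headers.set('X-Frame-Options', 'DENY');
  // Stop browsers from MIME-sniffing responses
  res.headers.set('X-Content-Type-Options', 'nosniff');
  // Limit referrer information sent to other origins
  res.headers.set('Referrer-Policy', 'strict-origin-when-cross-origin');
  // Opt out of powerful browser features the app does not use
  res.headers.set('Permissions-Policy', 'camera=(), microphone=(), geolocation=()');
  // A restrictive starting CSP - loosen per directive as the app requires
  res.headers.set(
    'Content-Security-Policy',
    "default-src 'self'; frame-ancestors 'none'; upgrade-insecure-requests"
  );

  return res;
}

// Apply to all routes except static assets
export const config = { matcher: '/((?!_next/static|_next/image|favicon.ico).*)' };
```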
Performance: The Other Half of the Problem
Security findings get the most attention, but performance is the other dimension where AI-generated apps consistently underperform. Fifty-five percent of scanned apps score below 50 on Lighthouse, and the average Lighthouse performance score across all scanned apps is 52.
The core reasons are structural to how AI tools generate apps (fixes for each are sketched after the list):
- Bundle size is not optimized. AI tools import full libraries when only specific functions are needed. A typical AI-generated app imports Lodash, Moment.js, or similar large utilities without tree-shaking, adding hundreds of kilobytes to the JavaScript bundle.
- Images are not optimized. AI tools generate plain `<img>` tags rather than Next.js `<Image>` components, so large uncompressed images load without lazy loading or responsive sizing.
- No code splitting. The entire application loads on the initial page request rather than splitting code by route and loading it on demand.
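Hedged sketches of the three fixes in a Next.js codebase; the component and module names are illustrative:

```tsx
// Bundle size: import the one function you need, not the whole library
import debounce from 'lodash/debounce'; // instead of: import _ from 'lodash'

// Code splitting: next/dynamic loads heavy components on demand,
// keeping them out of the initial bundle
import dynamic from 'next/dynamic';

// Images: next/image compresses, lazy-loads, and serves responsive sizes
import Image from 'next/image';

const Chart = dynamic(() => import('./Chart'), { ssr: false }); // './Chart' is illustrative
const onResize = debounce(() => console.log('resized'), 200);

export function Hero() {
  return <Image src="/hero.jpg" alt="Product hero" width={1200} height={600} priority />;
}
```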
A Lighthouse score below 50 typically means users on mobile connections are waiting 8-15 seconds for the page to become interactive. For a commercial app trying to convert visitors, this is a significant revenue impact, not just a technical metric.
What Happens After a Scan
VibeDoctor does not just identify problems. Each finding in the report links to a detailed explanation of why the issue matters, a code example of the vulnerable pattern, and a concrete fix with corrected code. The findings are organized by severity so users know exactly where to start.
The average VibeDoctor user improves their score by 23 points on the second scan, typically done within a few days of the first. The Critical findings (secrets, unprotected routes, exposed environment variables) are almost always fixed immediately because the remediation instructions are specific and the fixes are fast.
How to Run Your Scan
Visit vibedoctor.io and enter your deployed URL or connect your GitHub repository. No credit card, no configuration files, no CI pipeline setup. The scan runs in minutes and delivers a scored report covering all the categories above. The free tier includes a full scan with no limitations on findings or sections. Sign up, scan your app, fix what it finds.
FAQ
What counts as a Critical finding in VibeDoctor?
Critical findings are issues that, if exploited, could result in unauthorized access to user data, financial accounts, or system infrastructure. Examples include hardcoded API keys, service role keys in NEXT_PUBLIC_ variables, .env files committed to Git, unprotected database-writing API routes, and SQL injection vulnerabilities. These are issues that require immediate remediation before deployment.
How long does a scan take?
A typical scan takes 3-8 minutes depending on the size of the codebase and the response time of the deployed URL. The live site and code pipelines run in parallel to minimize total scan time. Large repositories (1,000+ files) may take slightly longer.
Does VibeDoctor scan apps built with Supabase?
Yes. VibeDoctor specifically checks for Supabase-related patterns including service role key exposure (a common critical issue in AI-generated apps), missing Row Level Security indicators, and Edge Function security gaps. Supabase is the most common backend in AI-generated apps and has specific security patterns worth checking.
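A minimal sketch of the safe split, assuming @supabase/supabase-js; the file paths are illustrative:

```typescript
// lib/supabase-admin.ts - server-only; never import this from client components
import { createClient } from '@supabase/supabase-js';

// SUPABASE_SERVICE_ROLE_KEY has no NEXT_PUBLIC_ prefix, so Next.js keeps it
// out of the browser bundle. It bypasses Row Level Security, which is
// exactly why it must stay on the server.
export const supabaseAdmin = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// lib/supabase-client.ts - safe for the browser.
// The anon key is designed to be public; RLS policies enforce access.
export const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
```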
Can I re-scan after fixing issues?
Yes. Re-scanning after fixing findings is encouraged and shows your score improvement over time. Paid plans include continuous monitoring that automatically re-scans your app on a schedule and after every push to your main branch, so you always know your current security posture.
Are the numbers in this article based on VibeDoctor's own scan data?
The numbers reflect patterns observed across AI-generated apps scanned through VibeDoctor's platform during the early access period. They are consistent with independently published research from Apiiro, Snyk, and Veracode on AI-generated code quality trends.