
Why Your AI-Built App Works in Preview But Fails in Production

By Jay Matharu
A developer's laptop showing a working preview on one side and a failed deployment error screen on the other, representing the gap between AI tool previews and real production environments.

The problem is structural, not accidental

If your application works correctly in Bolt.new, Lovable, or Replit's built-in preview, and then fails when you deploy it to Vercel, Netlify, or your own domain, there is a structural reason for this. It is not bad luck, and it is not a random error in your specific build. It is a predictable consequence of how AI coding tools work.

AI coding tools are optimised to produce impressive, working prototypes in a controlled preview environment. That environment is configured by the tool to make your application succeed. It has the right environment variables, the right domain settings, the right CORS configuration, and the right package resolutions — set up automatically, invisibly, in a way that the tool controls. When you step outside that environment and into a real hosting platform, the invisible scaffolding does not come with you. What remains is your application, without the support structure that was making it work.

This is not a criticism of the tools. It reflects their design priority: helping you build something functional quickly. Production deployment is a separate concern, with separate requirements, that the tools are not optimised to handle.

The five failure modes

1. Environment variables are missing or wrong in the deployed environment

Environment variables are the configuration values your application needs to function: API keys, database connection strings, feature flags, third-party service credentials. In your AI coding tool's preview environment, these are set automatically or configured through the tool's interface. When you deploy the application elsewhere, they do not come with it.

On Vercel, you must manually add each environment variable in the project settings under Settings > Environment Variables. On Netlify, the equivalent is Site configuration > Environment variables. If you do not add them, your deployed application cannot connect to Supabase, cannot reach your API keys, and cannot authenticate users — and the error messages it produces often do not clearly indicate that the cause is a missing environment variable.
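One way to make this failure mode visible is to validate the required variables at startup and fail loudly, rather than letting a missing key surface later as an opaque runtime error. A minimal sketch, assuming illustrative variable names (substitute the ones your application actually uses):

```typescript
// Returns the names of required environment variables that are absent or empty.
// Call this once at startup, passing process.env, and fail fast with a clear message.
function missingEnvVars(
  required: string[],
  env: Record<string, string | undefined>
): string[] {
  return required.filter((name) => !env[name] || env[name]!.trim() === "");
}

// In a real app you would pass process.env; a sample object shown here.
const exampleEnv = {
  SUPABASE_URL: "https://xyz.supabase.co",
  SUPABASE_ANON_KEY: "", // present in settings but left empty -- still a failure
};
const missing = missingEnvVars(["SUPABASE_URL", "SUPABASE_ANON_KEY"], exampleEnv);
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(", ")}`);
}
```

A clear startup error naming the variable is far easier to act on than the downstream "failed to fetch" it would otherwise become.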

There is a further complication specific to Next.js applications: the distinction between build-time and runtime environment variables. Variables prefixed with NEXT_PUBLIC_ are embedded into the client-side JavaScript bundle at build time — they are available in the browser but cannot be changed without a rebuild. Variables without the prefix are only available in server-side code at runtime. AI tools frequently conflate the two, resulting in applications where the client-side code cannot reach variables it expects, or where server-side variables are accidentally exposed to the browser.
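The rule itself is mechanical: Next.js inlines into the client bundle only variables whose names start with NEXT_PUBLIC_. A sketch of that split, useful for auditing which of your variables will actually be reachable in the browser (the variable names below are illustrative):

```typescript
// Next.js exposes to the browser only variables prefixed with NEXT_PUBLIC_,
// inlined at build time; everything else is available solely in server-side code.
function splitByExposure(names: string[]): {
  clientVisible: string[];
  serverOnly: string[];
} {
  return {
    clientVisible: names.filter((n) => n.startsWith("NEXT_PUBLIC_")),
    serverOnly: names.filter((n) => !n.startsWith("NEXT_PUBLIC_")),
  };
}

// A service role key must never carry the prefix, or it ships to every visitor.
const { clientVisible, serverOnly } = splitByExposure([
  "NEXT_PUBLIC_SUPABASE_URL",  // safe to expose: a public endpoint
  "SUPABASE_SERVICE_ROLE_KEY", // must stay server-only
]);
```

Renaming a variable to add or remove the prefix only takes effect after a rebuild, which is itself a common source of "I changed the variable but nothing happened" confusion.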

2. Your database or backend is not configured for the production domain

If your application uses Supabase, Firebase, or a similar backend-as-a-service, the backend has its own domain and CORS configuration. In the preview environment, requests come from the AI tool's own domain — lovable.app, bolt.new, replit.dev. The backend is configured to accept requests from that domain. When your application moves to a custom domain, the backend does not automatically accept requests from it.

For Supabase, this means adding your production domain to the allowed redirect URLs in the Authentication settings. For Firebase, it means adding your domain to the Authorised Domains list in the Firebase Console. For any backend with CORS configuration, it means adding your production origin to the allowed origins list. None of these steps are performed automatically when you deploy.
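What the backend's CORS layer is doing is, in essence, an exact comparison of the request's Origin header against an allow-list. A minimal sketch of that check (the domains are placeholders, not from any real configuration):

```typescript
// A CORS allow-list compares the request's Origin header against configured
// origins exactly -- scheme, host, and port must all match.
function isAllowedOrigin(origin: string, allowed: string[]): boolean {
  return allowed.includes(origin);
}

// Preview worked because the tool's domain was on the list; production fails
// until your own domain is added alongside it.
const allowedOrigins = [
  "https://myapp.lovable.app", // the preview origin the tool configured
  "https://www.example.com",   // your production domain -- added by you
];
```

Note that the match is exact: an http:// origin, a missing www, or a non-standard port each count as a different origin and will be rejected.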

Row Level Security (RLS) in Supabase is a related but more serious issue. RLS policies control which database rows a given user can read or write. In a development environment, RLS is sometimes disabled or configured permissively to allow rapid iteration. A production application with permissive RLS either exposes all data to any authenticated user or locks legitimate users out of their own data. CVE-2025-48757 documented this failure at scale across more than 170 Lovable-built applications.

3. Authentication is configured for the preview domain, not your production domain

Authentication flows — particularly OAuth (sign in with Google, GitHub, or similar) and magic link email authentication — require the backend to know where to redirect the user after a successful authentication event. In your preview environment, this redirect URL points to the tool's domain. When you deploy to a custom domain, the redirect still points to the old URL — the user is authenticated and then redirected somewhere that no longer works, or is not redirected at all.

For OAuth, this requires updating the authorised redirect URIs in the OAuth provider's settings (Google Cloud Console, GitHub OAuth Apps, and so on) to include your production domain. For Supabase magic links, this requires updating the Site URL in the Supabase project's Authentication settings. For JWT-based authentication, it requires ensuring the JWT secrets and refresh token settings are consistent between environments.

These are configuration changes, not code changes — but they are not made automatically, and the error messages they produce (typically a generic "redirect_uri_mismatch" or an empty redirect) do not clearly explain where the configuration needs to change.
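The redirect_uri_mismatch error comes from the same kind of check: OAuth providers compare the redirect_uri your application sends against the registered list character for character, with no wildcard or partial matching. A sketch (the URLs are placeholders):

```typescript
// OAuth providers reject a callback unless the redirect_uri in the request
// exactly matches a registered URI -- including path and trailing slash.
function redirectUriAccepted(requested: string, registered: string[]): boolean {
  return registered.includes(requested);
}

// Preview-era configuration: only the tool's domain was ever registered.
const registered = ["https://myapp.lovable.app/auth/callback"];

// After moving to a custom domain, the request now fails:
const accepted = redirectUriAccepted(
  "https://www.example.com/auth/callback",
  registered
); // false -- the provider returns redirect_uri_mismatch
```

The fix lives entirely in the provider's dashboard: register the production callback URL, and keep the preview one only if you still need it.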

4. The build configuration is wrong for the target hosting platform

Hosting platforms differ in how they build and serve each framework. A Next.js application deployed to Vercel requires specific configuration: the framework preset must be set to Next.js, the output directory must match the build configuration, and server-side functions must stay within the file size limits Vercel enforces. A Vite application on Netlify requires different settings. A Node.js Express application on Railway requires a different set again.

AI coding tools configure your application for their own preview environment. When you export the project and deploy it to a different platform, the build configuration often needs to be updated for that platform. Common symptoms: a successful build that serves an empty page (output directory misconfigured), a 404 on every page except the root (routing not configured for single-page applications), or a build that fails with an uninformative error (framework preset mismatch).
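For the "404 on every page except the root" symptom specifically, the usual fix on Vercel is a rewrite that serves index.html for every route, so client-side routing can take over. A sketch of a vercel.json for that case, assuming a Vite-style single-page application (Netlify's equivalent is a _redirects file containing `/* /index.html 200`):

```json
{
  "rewrites": [{ "source": "/(.*)", "destination": "/index.html" }]
}
```

This applies to single-page applications only; a Next.js project on Vercel handles routing through the framework preset and should not need it.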

5. Secrets are exposed in client-accessible code

The fifth failure mode is not a deployment failure — it is a security failure that becomes visible once the application is deployed and publicly accessible. AI tools, particularly when generating full-stack applications quickly, sometimes place API keys, database credentials, or other secrets in locations accessible to the browser. In a preview environment, this is invisible. In a deployed application, anyone can access these credentials by opening the browser developer tools and inspecting the application's network requests or JavaScript bundles.

Escape.tech's scan of 5,600 vibe-coded applications found more than 400 exposed secrets. The Moltbook incident (January 2026) exposed 1.5 million API tokens through this mechanism. The credentials at risk range from low-value (a public Stripe key that can be rate-limited but not used for withdrawals) to high-value (a Supabase service role key that bypasses all RLS policies and provides full database access).
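You can approximate scans like these yourself: search your built client bundle for strings shaped like known secret formats. A rough sketch covering a few common prefixes (it will not catch everything, and the patterns here are illustrative, not a complete taxonomy):

```typescript
// Scans text (e.g. a built JS bundle) for strings shaped like known secrets.
// Prefix conventions: sk_live_ (Stripe secret key), AIza (Google API key),
// eyJ (a JWT -- a Supabase service role key is a JWT containing "service_role").
const SECRET_PATTERNS: Record<string, RegExp> = {
  stripeSecretKey: /sk_live_[A-Za-z0-9]{10,}/,
  googleApiKey: /AIza[0-9A-Za-z_-]{35}/,
  jwtToken: /eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+/,
};

function findSecretLikeStrings(bundleText: string): string[] {
  return Object.entries(SECRET_PATTERNS)
    .filter(([, pattern]) => pattern.test(bundleText))
    .map(([name]) => name);
}
```

A hit is not always a problem (a public anon key is a JWT too), but every hit deserves a deliberate decision about whether that value belongs in the browser.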

A self-diagnosis checklist

Before engaging an engineer, work through this checklist. Each item takes under five minutes to check.

  1. Environment variables: Log in to your hosting platform (Vercel, Netlify, Railway). Navigate to the environment variable settings. Verify that every variable your application needs is present, correctly named, and set for the Production environment specifically (not just Preview or Development).
  2. Supabase redirect URLs: Log in to Supabase. Go to Authentication > URL Configuration. Verify that your production domain appears in the Redirect URLs list. Verify that the Site URL is set to your production domain, not a preview URL.
  3. Firebase authorised domains: In the Firebase Console, go to Authentication > Settings > Authorised Domains. Verify your production domain is listed.
  4. OAuth redirect URIs: For each OAuth provider you use (Google, GitHub, etc.), check the OAuth application settings in that provider's dashboard. Verify that your production domain's callback URL is listed as an authorised redirect URI.
  5. Build logs: In your hosting platform, find the most recent deployment and open the build logs. Read the last 50 lines. If the build succeeded but the application fails at runtime, the logs will not show the cause — but if the build itself failed, the cause is usually in the last 20 lines of the log.
  6. Browser console: Open your deployed application in a browser. Press F12 to open the developer tools. Check the Console tab for red error messages and the Network tab for failed requests (shown in red). The specific URL of a failed request is usually more informative than the error message in the application UI.
  7. Client-accessible secrets: In the browser developer tools, go to the Application tab, then Storage. Check Local Storage and Session Storage for anything that looks like an API key. Also check the Network tab: filter by fetch/XHR requests and look at the request headers — if any contain API keys as header values, those keys are accessible to the browser.

When self-diagnosis is not enough

The checklist above resolves the most common deployment failures. It does not resolve everything.

If you have worked through the checklist and the application still fails, the issue is likely in one of two places: a more complex configuration interaction (multiple issues compounding each other), or a code-level problem that requires a direct inspection of the codebase rather than the running application.

If you have been prompting your AI tool to fix the same category of error for more than two days, the application may have accumulated circular-fix technical debt — a pattern where each fix introduces a regression elsewhere because the AI does not have full context of the codebase. In this state, further prompting typically makes the situation worse rather than better.

A diagnostic audit from a senior engineer is the correct next step. It gives you an independent assessment of the actual problem, a written fix recommendation, and a scoped quote for remediation — before you commit further time or money to a direction that may not resolve the underlying issue.

The AI App Diagnostic Audit covers all five failure modes described above, takes 3 to 5 working days, and requires only read-only access to your repository. The written report tells you exactly what is wrong and what the correct path forward is.

Frequently asked questions

Why does my app work on my laptop but not when deployed?
Your laptop development environment has the same environment variables, domain settings, and package resolutions your AI tool configured. The deployed environment does not. The five most common causes are missing environment variables, database not configured for the production domain, auth configured for the preview domain, build configuration wrong for the target platform, and secrets in client-accessible code.
My Vercel deployment shows a successful build but a blank page. What is wrong?
A successful build with a blank page is almost always a build output directory misconfiguration, a framework preset mismatch, or a client-side routing issue where the server is not configured to serve index.html for all routes. Check the Vercel project settings — Framework Preset and Output Directory — against your project's actual build output.
Can I fix these issues myself without an engineer?
The checklist in this article resolves the most common deployment failures without requiring engineering knowledge. If the checklist does not resolve the issue, or if the application has been through multiple rounds of AI-prompted fixes, a professional diagnostic is the appropriate next step.
How do I know if my API keys are exposed in the browser?
Open your deployed application in Chrome, press F12, go to the Network tab, and filter by Fetch/XHR. Look at the request headers and payloads of API calls. If you can see API keys in those requests, they are accessible to anyone who inspects the page. Also check your repository for any .env file that was accidentally committed.

Related Articles


Fix, Refactor or Rebuild? A Decision Matrix for AI-Built Apps


Security Vulnerabilities in AI-Generated Apps — A UK Guide


Bolt.new vs Lovable vs Replit — What Happens After the Prototype

Ready to explore AI for your business?

Book a free 20-minute consultation. No obligation, no jargon.