Quick Judgment
When an app works locally but not online, the problem is not automatically "Vercel is broken" or "AI wrote bad code." Local success only proves the app can run on your machine under local assumptions. Production has different environment variables, build settings, runtimes, callback URLs, database access, file storage, and security boundaries.
If your AI-built app works on your computer but fails on Vercel, Netlify, Render, Replit, or production, diagnose the deployment layer before asking AI to rewrite the app. If the failure touches auth, database, or production data, request a deployment risk review.
Why local success does not mean production readiness
Local development is forgiving. It may use local secrets, local files, relaxed CORS, test databases, dev-only packages, and manual steps that never exist in production. A developer can restart the server, edit variables, or run commands by hand. Real users cannot.
Production is stricter. Build commands must work from a clean environment. Server code must run in the selected runtime. Environment variables must exist in the hosting provider. Auth providers must trust the production URL. Database connections must work from the deployed region. Storage paths must be public or private for the correct reasons.
AI tools often blur these boundaries. They may generate code that works in a preview but assumes local files, browser access to server secrets, or a database state that only exists on your machine.
Common broken layers
Environment variables are the first suspect. Missing API keys, wrong Supabase URLs, mixed dev/prod values, or secrets not added to Vercel can break the app online.
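A fail-fast check at startup turns this failure into an obvious boot-log error instead of a broken page. A minimal sketch in TypeScript, where the variable names are illustrative placeholders, not something any framework requires:

```typescript
// Return the names of required variables that are missing or empty.
// The specific variable names used below are examples only.
function missingEnvVars(
  required: string[],
  env: Record<string, string | undefined>,
): string[] {
  return required.filter((name) => {
    const value = env[name];
    return value === undefined || value.trim() === "";
  });
}

// Example: a production environment where one secret was never added to
// the hosting dashboard and another was pasted in empty.
const prodEnv = {
  DATABASE_URL: "postgres://user:pass@db.example.com:5432/app",
  SUPABASE_URL: "",
};
const missing = missingEnvVars(
  ["DATABASE_URL", "SUPABASE_URL", "SUPABASE_ANON_KEY"],
  prodEnv,
);
// `missing` is now the exact list to add in the provider's environment
// settings before redeploying.
```

Throwing when `missing` is non-empty makes the first deploy fail loudly with the variable names in the log, which is far cheaper to diagnose than a runtime 500.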
Build settings fail when package versions, Node versions, install commands, TypeScript settings, or framework output modes differ from your local setup.
API routes fail when code assumes local runtime behavior, relies on packages unavailable in the deployed runtime, expects long-running processes or file-system writes, or reads server-only variables in the browser.
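One cheap defense against the last failure is a guard that fails loudly when server-only code ends up in a browser bundle. A sketch under the assumption that the server runtime has no global `window` (true for Node); the function and variable names are hypothetical:

```typescript
// Throw if this module is evaluated in a browser, where `window` exists.
// On a Node server there is no `window`, so the guard passes silently.
function assertServerOnly(moduleName: string): void {
  const g = globalThis as Record<string, unknown>;
  if (typeof g.window !== "undefined") {
    throw new Error(`${moduleName} leaked into the client bundle`);
  }
}

// At the top of an API route module:
assertServerOnly("payments route");
// Below this line it is safe to read server-only secrets; they must never
// be referenced from component code that ships to the browser.
```

The guard does not prevent the leak, but it converts a silent `undefined` secret in production into an immediate, named error.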
Database access fails when production points to the wrong database, migrations were not applied, RLS differs, or connection limits behave differently online.
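The variant that bites hardest is a silent fallback to a local database. A sketch (variable names are examples) that keeps the fallback for development but refuses it in production:

```typescript
// Resolve the connection string, failing loudly if production is about
// to fall back to a developer's local database.
function databaseUrl(env: Record<string, string | undefined>): string {
  if (env.DATABASE_URL) return env.DATABASE_URL;
  if (env.NODE_ENV === "production") {
    throw new Error("DATABASE_URL is not set in the hosting provider");
  }
  return "postgres://localhost:5432/app_dev"; // local development only
}
```

Migrations deserve the same treatment: run them as an explicit deploy step rather than a command you remember to type by hand.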
Auth fails when callback URLs, cookie settings, domains, or session handling do not match production.
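A common concrete case is an OAuth callback URL hardcoded to localhost. A sketch that derives it from the environment instead; `APP_BASE_URL` is a hypothetical variable of your own, while `VERCEL_URL` is the hostname Vercel injects into deployments:

```typescript
// Build the auth callback URL from the deployment environment so the
// same code works locally, in previews, and in production.
function callbackUrl(env: Record<string, string | undefined>): string {
  const base =
    env.APP_BASE_URL ?? // explicit per-environment override
    (env.VERCEL_URL ? `https://${env.VERCEL_URL}` : "http://localhost:3000");
  return `${base}/api/auth/callback`;
}
```

Deriving the URL correctly is only half the fix: the production URL still has to be registered in the auth provider's allow-list, or the redirect fails the same way.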
CORS and storage fail when uploaded files, public URLs, signed URLs, or cross-origin requests were only tested locally.
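For the CORS half, the usual local-only assumption is that every request comes from localhost. A sketch of an explicit origin allow-list (the domains are examples):

```typescript
// Echo the request origin back only if it is on the allow-list.
// Returning null means no Access-Control-Allow-Origin header is sent,
// so the browser blocks the cross-origin call.
const allowedOrigins = [
  "http://localhost:3000",      // local development
  "https://my-app.example.com", // production (example domain)
];

function corsOrigin(requestOrigin: string | undefined): string | null {
  return requestOrigin !== undefined && allowedOrigins.includes(requestOrigin)
    ? requestOrigin
    : null;
}
```

An allow-list like this also documents which preview and production domains are expected to call the API, which helps when classifying a CORS failure later.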
For a related diagnosis, see AI app deployment failed.
What not to change blindly
Do not ask AI to rewrite the whole app because deployment failed. Do not remove TypeScript, disable linting, or weaken security checks just to make the build pass. Do not copy local secrets into client-side code. Do not point preview and production at the same database unless you understand the risk.
Do not change auth callbacks, RLS policies, and deployment config in one prompt. If the app starts working after a broad change, you may not know which change mattered or what risk was introduced.
When to fix deployment vs rebuild assumptions
Fix deployment when the app structure is sound and the failure is narrow: missing variable, wrong build command, unsupported runtime, unconfigured callback URL, or missing migration.
Refactor when the app works locally only because local assumptions hide a weak boundary. For example, frontend code reads a secret, server code writes to local disk, or auth depends on a development URL.
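The first example above usually resolves into one refactor: the browser stops calling the third-party API directly and instead calls a server route you own, which is the only place the secret lives. A minimal sketch, with hypothetical endpoint and variable names:

```typescript
// BEFORE (works locally, leaks in production): the browser builds the
// authorized request itself, so the secret ships in the client bundle.
//
//   fetch("https://api.example.com/report", {
//     headers: { Authorization: `Bearer ${SECRET_KEY}` },
//   });

// AFTER: only the server route attaches the secret. The browser calls a
// relative endpoint it owns, e.g. fetch("/api/report"), and the route
// builds the upstream request server-side:
function buildUpstreamRequest(secret: string): {
  url: string;
  headers: Record<string, string>;
} {
  return {
    url: "https://api.example.com/report",
    headers: { Authorization: `Bearer ${secret}` },
  };
}
```

This is a boundary fix, not a rewrite: the UI and data flow stay the same, and only the ownership of the secret moves.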
Rebuild assumptions when the architecture cannot support online operation. This happens when the app has no clear backend ownership, no environment separation, no deployable data model, or no safe production path.
The decision should be based on evidence, not frustration. A build log pointing to one missing environment variable is different from an app that assumes a local database, local storage, and manual startup commands. A production callback mismatch is different from an auth model that only works when the same person owns every record.
AI often reacts to deployment errors by changing application code. Sometimes that is correct, but often the app code is not the first layer to touch. If the deployment environment is misconfigured, a rewrite creates unnecessary risk. If the architecture depends on local-only behavior, a small config fix will not make it production-ready. The review should separate those cases before the next change.
When to request a review
Request a review when the online failure blocks launch, touches auth or database access, or keeps returning after multiple AI fixes. Also request one when the app is live with test users or real users and you cannot tell whether preview, staging, and production are separated safely.
Bring the local command that works, the production build log, runtime error, hosting provider, environment variable names, framework, database provider, and the last AI-generated changes. That evidence is usually enough to classify the failure layer without guessing.
Safe next step
Collect the build log, runtime error, hosting platform, framework, environment variable list, database provider, and whether the app has production users or data. Then decide whether the failed layer is deployment, environment, backend, auth, database, or architecture.
If the app is close to launch and the online failure is not obvious, use Get Review before asking AI to rewrite working local code.