Quick Judgment
Supabase RLS not working after AI-generated code changes is not a normal bug. If AI changed policies, queries, ownership columns, RPC functions, auth assumptions, or storage rules, the problem may be data exposure or data lockout. Do not launch blindly.
RLS decides which rows users can access. If it fails closed, users cannot use the app. If it fails open, users may see, edit, or delete data that does not belong to them. Both cases require review before production. If you are not sure which one you have, request a Supabase RLS review.
Why RLS is not a normal bug
A normal UI bug usually affects what the user sees. An RLS bug affects what the database permits. That means the UI may look correct while the underlying data access is wrong. A hidden button does not protect a row if the policy allows the request. An empty screen does not prove security if another query path exposes data.
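To make the distinction concrete, here is a minimal sketch of an ownership-based policy. The `todos` table and `user_id` column are illustrative assumptions, not taken from any specific app; the point is that the policy, not the UI, decides which rows a request can touch.

```sql
-- Illustrative table; names are assumptions for this sketch.
-- Enabling RLS means no rows are accessible until a policy allows them.
alter table public.todos enable row level security;

-- Reads: a user may only select rows they own.
create policy "select own todos"
  on public.todos for select
  using (auth.uid() = user_id);

-- Writes: a user may only insert rows stamped with their own id.
create policy "insert own todos"
  on public.todos for insert
  with check (auth.uid() = user_id);
```

With policies like these in place, hiding or showing a button in the frontend changes nothing about what the database permits.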
AI-generated code often treats RLS as a friction point. When it sees permission errors, it may suggest broad policies, client-side filtering, service keys, or RPC functions that bypass the intended boundary. These changes can make the app appear fixed while weakening the layer that protects user data.
Deny-all failure vs over-permissive failure
A deny-all failure happens when legitimate users cannot read or write their own rows. Symptoms include empty dashboards, failed inserts, failed updates, and vague permission errors. This is frustrating, but it is usually safer than the opposite failure.
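One common cause of a deny-all failure is enabling RLS without covering every command the app uses. A hypothetical sketch:

```sql
-- RLS is on, but only SELECT has a policy.
alter table public.invoices enable row level security;

create policy "select own invoices"
  on public.invoices for select
  using (auth.uid() = user_id);

-- There is no INSERT or UPDATE policy, so every insert and update
-- from a normal user is rejected, even on rows the user owns.
-- The app sees vague permission errors; data is locked, not exposed.
```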
An over-permissive failure happens when users can access rows, files, or functions they should not access. Symptoms are often subtle. You may only notice when one account sees another account's records, when admin data appears in a normal user flow, or when a query works without the expected ownership condition.
AI can accidentally create either failure. It may add a policy that is too narrow, or it may "fix" errors with a `using (true)` policy, by trusting client-provided IDs, or by bypassing RLS through a function.
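The `using (true)` pattern is worth seeing explicitly, because it looks like a fix and behaves like a breach. A hypothetical contrast, assuming an `orders` table with a `user_id` ownership column:

```sql
-- Over-permissive "fix": the permission error goes away because
-- every row is now readable by every authenticated user.
create policy "allow all reads"
  on public.orders for select
  using (true);

-- Intended boundary: the error goes away only for rows the caller owns.
create policy "read own orders"
  on public.orders for select
  using (auth.uid() = user_id);
```

Both policies make the app screen load. Only the second one preserves the data boundary.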
Symptoms that suggest data exposure risk
Watch for these signs:
- A user can see records created by another user.
- Changing a URL, ID, or filter exposes someone else's data.
- Admin-only data appears in a normal account.
- Storage files are accessible through public links or shared buckets.
- RPC functions accept `tenant_id`, `user_id`, or `owner_id` from the client.
- AI added broad SELECT, INSERT, UPDATE, or DELETE policies.
- Frontend checks decide permissions while database policies are vague.
- Service-role keys appear anywhere near browser code.
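The RPC symptom is especially easy to miss. A `security definer` function runs with its owner's privileges and bypasses RLS, so any ID it accepts from the client becomes the effective security check. A hypothetical example of the unsafe and safer shapes:

```sql
-- Unsafe: the caller supplies p_user_id, and the function runs as its
-- owner, bypassing RLS. Any user can pass any other user's id.
create function get_orders_unsafe(p_user_id uuid)
returns setof public.orders
language sql security definer as $$
  select * from public.orders where user_id = p_user_id;
$$;

-- Safer: derive identity from the JWT via auth.uid(),
-- never from a client-supplied argument.
create function get_my_orders()
returns setof public.orders
language sql security definer as $$
  select * from public.orders where user_id = auth.uid();
$$;
```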
If these symptoms exist, see the related guides on a Supabase RLS audit before launch and on an AI app database or permission problem.
Also watch for false confidence. A normal user account may only see its own rows in your manual test, but that does not prove the policy is safe. The test may be using filtered frontend code, a narrow sample of data, or a path that does not cover admin records, team records, storage files, or RPC functions. RLS should be reviewed from the database boundary, not only from what one screen appears to show.
Negative tests matter. A safer review asks what user A must not read, what user A must not update, whether unauthenticated access is denied, and whether admin-only operations are separated from normal user operations. If those questions have not been answered, launch risk remains.
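Negative tests can be run from the database boundary by impersonating a user at the session level, for example in the Supabase SQL editor or a local database. A sketch, assuming placeholder UUIDs for two users:

```sql
begin;
-- Impersonate the role and JWT claims PostgREST would set for user A.
set local role authenticated;
set local request.jwt.claims to
  '{"sub": "00000000-0000-0000-0000-00000000000a", "role": "authenticated"}';

-- Negative test: as user A, this must return zero of user B's rows.
select count(*) from public.orders
where user_id = '00000000-0000-0000-0000-00000000000b';

rollback;
```

If that count is anything other than zero, the policy is over-permissive regardless of what any screen shows.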
What not to ask AI to do blindly
Do not ask AI to "make Supabase allow the query" without specifying the security model. Do not ask it to disable RLS. Do not accept policies you cannot explain. Do not use a service-role key in client code. Do not trust a policy just because the app screen now loads.
Also avoid running unreviewed migrations against production data. RLS changes can affect every user immediately. If real users or paying customers are involved, treat policy changes as production-risk changes, not cosmetic fixes.
Safe next step
The right review should answer:
- Which tables, buckets, and functions are exposed?
- Which user owns which rows?
- Do policies use trusted identity such as `auth.uid()` correctly?
- Are admin, team, and tenant roles modeled safely?
- Is the current failure deny-all, over-permissive, or both?
- What should AI not touch next?
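A starting point for the first questions is to inventory what the database actually enforces, rather than what the AI claims it wrote. Postgres exposes this directly:

```sql
-- List every policy in the public schema: which table, which command,
-- which roles, and the USING / WITH CHECK expressions they enforce.
select tablename, policyname, cmd, roles, qual, with_check
from pg_policies
where schemaname = 'public'
order by tablename, policyname;

-- Confirm RLS is actually enabled on each table.
select relname, relrowsecurity
from pg_class
join pg_namespace on pg_namespace.oid = relnamespace
where nspname = 'public' and relkind = 'r';
```

A table with `relrowsecurity = false`, or a policy whose `qual` is just `true`, is a finding, not a detail.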
This is not a certified security audit and not a legal compliance review. It is a production risk review that helps you decide whether to fix policies, refactor the data model, rebuild the auth boundary, or stop before launch.
If AI changed Supabase RLS and you cannot prove users only access what they should, submit the app for review before launch.
Bring the table list, policy list, role model, example user flows, and any AI prompts that changed Supabase behavior. The review should turn those details into a launch decision: safe to patch, needs policy correction, needs data-model refactor, or should not launch until ownership is redesigned.