Kingbird Solutions


Vibe coding security

Moltbook leaked 1.5 million API auth tokens three days after launch. The founder had never heard of row-level security.

A breakdown of how an AI-built product can ship a working authentication system and still expose every user. The pattern shows up in almost every vibe-coded app we audit.

By Chris King · May 13, 2026 · 5 min read

Three days after Moltbook launched, one and a half million API authentication tokens were sitting on the public internet. Thirty-five thousand email addresses leaked alongside them. The founder of Moltbook had built the entire product with AI tools and had not written a single line of code in the process. The app worked. Users signed up. Features loaded. The breach existed the moment the first user touched the database.

The exposure was not the result of a sophisticated attack. It was the result of a checkbox that nobody knew to check. The AI that built the application generated a working authentication system. Users could sign up, log in, and access their data. What the AI did not enable was row-level security on the underlying database. Without row-level security, every row in every user table was readable by anyone holding a valid auth token, not just the rows belonging to that user. The auth tokens, in turn, were exposed in the API responses because the AI included them in a debug field that was never removed.

The Moltbook founder, by their own account, had never encountered the term "row-level security." Why would they? They were not a developer. They built the product to solve a real problem, used an AI tool that promised a working application, and shipped what came out. The promise was kept. The application worked. The promise did not include "and your users will be safe."

What row-level security actually is

Row-level security is a database setting that controls who can read which rows in a table. Without it, your tables are effectively shared: every authenticated user, and sometimes anonymous visitors, can read every row in every table the database exposes.

A grocery analogy: imagine a deli with a list of customer orders behind the counter. Row-level security is the rule that says "each customer can only see their own order." Without that rule, every customer who walks up can read every other customer's order. The deli is still operating. The orders are still being filled. But the privacy is not there, and nobody at the deli noticed because the customers do not usually look behind the counter.

In Supabase, row-level security is a toggle on each table, plus a policy that defines who can read which rows. Both pieces have to exist. In Firebase, the equivalent is security rules. In a custom Postgres setup, it is database policies. The shape varies. The risk is the same: if the rule is not there, the data is open.

Why the AI gets auth right but skips RLS

This is the part founders have a hard time accepting. The AI built a working sign-up and login flow. How could it have skipped something as fundamental as row-level security?

The answer is that authentication and authorization are two different things, and the AI is reliable at the first one and unreliable at the second. Authentication answers "who are you." It is well documented, heavily represented in training data, and the AI builds it correctly nearly every time. Authorization answers "what are you allowed to see." It depends on the specific shape of your data, your product's privacy model, and your business rules. The AI does not know what your privacy model is unless you describe it in detail, and most founders do not know to describe it. The result is an app where the user can log in, but once logged in, can see everything.
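The gap is easy to see in code. A minimal sketch, in Python, of a handler that authenticates but never authorizes (the function, data, and names here are hypothetical, not from Moltbook's actual codebase):

```python
# Hypothetical in-memory table standing in for a database.
ORDERS = {
    101: {"user_id": "alice", "item": "pastrami on rye"},
    102: {"user_id": "bob", "item": "turkey club"},
}

def get_order(current_user, order_id):
    # Authentication: "who are you." This check is present and correct.
    if current_user is None:
        raise PermissionError("not logged in")
    # Authorization: "are you allowed to see this row." This check is
    # missing, so any logged-in user can fetch any order.
    return ORDERS[order_id]
```

Here `get_order("alice", 102)` happily returns Bob's order. The login flow works perfectly, which is exactly why the hole is invisible in a demo.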

Compounding this, AI build tools optimize for "the demo works." A demo with row-level security enabled often fails on the first try, because the default policy is "deny everything." So the tool, or the user, disables row-level security to get past the error and never turns it back on. The application ships in a state that was only ever meant to be a temporary workaround.

A 90-second self-check

If you built your app with Supabase, open the Authentication tab in the dashboard. For every table that contains user data, check whether row-level security is enabled. If the toggle is off, the table is open. Turning it on is the first step. The second step is writing a policy that says "users can read rows where the user_id matches their own." Without the policy, turning RLS on locks everyone out, including the legitimate users.
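As a sketch, assuming a table named profiles with a user_id column (adjust both to your schema), the two steps look like this in Supabase's SQL editor. auth.uid() is Supabase's built-in helper that returns the current user's id:

```sql
-- Step 1: turn row-level security on. By itself this denies all access.
alter table profiles enable row level security;

-- Step 2: add a policy scoping reads to the caller's own rows.
-- Assumes user_id stores the owner's auth id.
create policy "users read own rows"
  on profiles for select
  using (auth.uid() = user_id);
```

Run the same pair for every table that holds user data; a table with the toggle on but no policy locks out legitimate users too.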

If you built on Firebase, open the Firestore rules editor. Look for rules that say "allow read: if true" or "allow read, write: if true". Those are open. The corrected version uses the auth context to scope reads to the user's own data.
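A sketch of the corrected shape, assuming an orders collection whose documents store the owner's id in a userId field (both names are placeholders for your own schema):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Open (the pattern to look for and remove):
    //   match /orders/{orderId} { allow read: if true; }

    // Scoped: only signed-in users, and only their own documents.
    match /orders/{orderId} {
      allow read: if request.auth != null
                  && request.auth.uid == resource.data.userId;
    }
  }
}
```

request.auth is the authenticated caller; resource.data is the document being read. The rule ties the two together, which is what "if true" never did.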

If you built on a custom backend, the check is harder. The relevant question is: in each API handler, does the code verify the user is authorized to see this specific record before returning it? If the code only checks "is the user authenticated" and not "is the user allowed to see this row," every authenticated user can read every row.
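A sketch of what that per-record check looks like, assuming a handler that receives the authenticated user's id and a record id (all names here are illustrative):

```python
# Hypothetical in-memory table standing in for a database.
RECORDS = {
    1: {"user_id": "alice", "email": "alice@example.com"},
    2: {"user_id": "bob", "email": "bob@example.com"},
}

def get_record(current_user, record_id):
    # Authentication: is anyone logged in at all?
    if current_user is None:
        raise PermissionError("not logged in")
    record = RECORDS[record_id]
    # Authorization: does this specific row belong to the caller?
    if record["user_id"] != current_user:
        raise PermissionError("not your record")
    return record
```

The fix is one comparison before the return. If your handlers have the first check but not the second, every authenticated user can read every row.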

What to do if you find it open

The fastest move is also the cheapest: lock down the most sensitive table first. Auth tokens, email addresses, payment information, anything regulated. Even if you cannot fix the entire app today, closing the most exposed table reduces the immediate risk.

The next move is auditing the rest of the schema. Most vibe-coded apps have ten to thirty tables, and the founder cannot reliably remember which ones contain user data. List them. For each one, ask whether row-level security is on and whether a policy exists. If either piece is missing, that table is part of the exposure.

After that, look at the API responses. The Moltbook breach was not only the database. The auth tokens were leaking in a debug field that the AI generated and the founder never noticed. Open your network inspector, log in as a test user, and look at the JSON your endpoints return. Anything you would not want a stranger to see is a problem.
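If you want to automate the skim, here is a small helper you could point at a saved response body. The list of suspicious key fragments is a starting point, not a standard; extend it for your own schema:

```python
import json

# Key-name fragments worth flagging in API responses.
SENSITIVE = ("token", "secret", "password", "api_key", "debug")

def flag_sensitive_keys(payload, path=""):
    """Recursively collect JSON paths whose key names look sensitive."""
    hits = []
    if isinstance(payload, dict):
        for key, value in payload.items():
            here = f"{path}.{key}" if path else key
            if any(frag in key.lower() for frag in SENSITIVE):
                hits.append(here)
            hits.extend(flag_sensitive_keys(value, here))
    elif isinstance(payload, list):
        for i, item in enumerate(payload):
            hits.extend(flag_sensitive_keys(item, f"{path}[{i}]"))
    return hits

response = json.loads(
    '{"user": {"email": "a@b.c", "debug": {"auth_token": "tok_123"}}}'
)
print(flag_sensitive_keys(response))
# Prints: ['user.debug', 'user.debug.auth_token']
```

A flagged path is not automatically a breach, but every hit deserves the question: would I hand this field to a stranger?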

The pattern repeats

The Moltbook breach is unusual only in scale. The pattern is identical in nearly every vibe-coded app a security review touches. The auth flow works. The user can sign in. The data is open. The founder does not know.

If your app fits that description and you have not had a security review, the gap is almost certainly there. The free 5-point diagnostic at Kingbird Solutions walks you through the most common exposure categories, including row-level security, and tells you which ones apply to your specific stack. It takes about ten minutes. The earlier you find the gap, the cheaper it is to close.

If this helped

You can put this thinking to work directly. Run the diagnostic on a stuck product, or book a 30-minute call to talk through your situation.