Kingbird Solutions


Vibe coding security

The four things we look for when we audit a vibe-coded app.


By Chris King · May 28, 2026 · 6 min read

Every vibe-coded app we audit has at least one of four security gaps. Most have three. The reason the gaps repeat is not that founders are careless. It is that the AI tools that built the apps optimize for "the demo works," not "the user is safe," and the founder cannot tell the difference from the outside.

What follows is the list, in the order we check them. None of these checks require writing code. All of them require knowing where to look. If you have shipped an AI-built app and you have not yet run this checklist, the gap is almost certainly there.

1. Exposed auth flows

The first thing we look at is the authentication path: signup, login, password reset, session refresh. The AI tools build this path competently most of the time. What they miss is the protective layer around it.

The exposures we see most often:

  • Auth tokens leaking in API responses. The AI includes the access token in a debug field or in the user object returned to the client. Now any logged request, any browser network tab, any third-party analytics script can read it. Moltbook leaked 1.5 million tokens this way.
  • No rate limit on the password reset endpoint. A bot can hammer it to enumerate emails: addresses that trigger a reset flow behave differently from ones that return "no such account." This is account enumeration, and it typically precedes credential stuffing attacks.
  • Session tokens with no expiration. The AI generates a token, the user logs in once, and the token works forever. If a laptop is lost or a token leaks, the attacker has indefinite access.
  • Missing CSRF protection on state-changing requests. AI-built apps often skip the CSRF token entirely. A malicious page the user visits can issue requests as them.
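The token-leak exposure at the top of that list usually has a one-function fix: strip secrets server-side before the user object crosses the wire. A minimal sketch in Python, with illustrative field names (your schema will differ):

```python
# Fields that must never appear in an API response. The names here are
# hypothetical; audit your own user model for the real list.
SENSITIVE_FIELDS = {"access_token", "refresh_token", "password_hash", "debug"}

def sanitize_user(user: dict) -> dict:
    """Return a copy of the user record that is safe to send to the client."""
    return {k: v for k, v in user.items() if k not in SENSITIVE_FIELDS}

user = {"id": 42, "email": "a@example.com", "access_token": "eyJhbGci..."}
safe = sanitize_user(user)
# safe keeps id and email; the token never leaves the server
```

The important property is that the allowlist or denylist lives in one place, so a new debug field added later is caught by the same filter.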

Each of these is straightforward to fix. None of them are visible to the user, which is why founders rarely notice.

2. Misconfigured storage permissions

Most vibe-coded apps store user content somewhere: profile pictures, uploads, generated documents, exports. The storage layer almost always ships with a default-public configuration that the AI never tightened.

What we check:

  • Are your S3, Cloudflare R2, or Supabase storage buckets configured for public read? A surprising number are, because public read is the easiest way to make image uploads work in the demo, and the AI never switched to signed URLs afterward.
  • Can the bucket be listed? Even when public read is limited to individual files, a bucket that allows listing lets an attacker enumerate every file. Listing should be off.
  • Are the access keys committed to the repo or to the client bundle? AI tools occasionally inline credentials directly into the client code, especially when copying examples from documentation that uses test keys.
  • Are uploaded files served from a domain that does not enforce Content-Type headers? This lets an attacker upload an HTML file that runs JavaScript in the context of your domain. It is more obscure, but it is the path behind most XSS-via-upload attacks.
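The standard replacement for public-read buckets is signed URLs: links that expire and cannot be forged without a server-side secret. Cloud providers generate these for you; the mechanism underneath is roughly the following stdlib-only Python sketch, with a hypothetical secret and path:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical; never shipped to the client

def sign_url(path: str, ttl_seconds: int = 300) -> str:
    """Return a time-limited signed URL for a private object."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{path}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify_url(path: str, expires: int, sig: str) -> bool:
    """Reject expired or tampered links."""
    if time.time() > expires:
        return False
    payload = f"{path}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the signature via timing differences
    return hmac.compare_digest(expected, sig)
```

In practice you would use your provider's presigned-URL API rather than rolling this yourself; the sketch just shows why a leaked link goes stale instead of staying public forever.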

RedAccess's recent scan found over five thousand vibe-coded apps in this category alone. The medical records, hotel reservations, and Fortune 500 documents in that scan were sitting in misconfigured buckets.

3. Missing rate limits

Rate limits are the boring layer that nobody asks the AI to build, so the AI does not build them. We check rate limits in three places:

  • Signup and login. Without limits, a bot can spray credentials or create thousands of accounts to grief the platform or to use as launching pads for spam.
  • Password reset. Same enumeration risk as in section 1, separately checked here because the rate limit is sometimes on signup but not on reset.
  • Expensive endpoints. Anything that touches the AI itself (LLM calls, image generation, document processing) is unbounded by default. A single attacker can spend your monthly OpenAI budget in an hour. We have seen this happen to real founders, more than once.

The fix is configuration, not code. Most platforms have rate-limit primitives. Supabase, Vercel, Cloudflare, and Auth0 all expose them. They are off by default.
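When a platform primitive does not cover a particular endpoint, even a crude application-level limiter closes the gap. A minimal fixed-window sketch in Python (in-memory, so it only protects a single instance; production setups usually back the counter with Redis or the platform's own store):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per key."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.counts: dict = defaultdict(int)
        self.window_start: dict = {}

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        start = self.window_start.get(key)
        if start is None or now - start >= self.window:
            # New window for this key: reset the counter
            self.window_start[key] = now
            self.counts[key] = 0
        self.counts[key] += 1
        return self.counts[key] <= self.limit

# Hypothetical policy: 5 reset attempts per minute per client IP
limiter = FixedWindowLimiter(limit=5, window=60)
```

Keyed by IP this stops naive bots; keyed by account it stops targeted spraying. Doing both is common.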

4. Broken row-level security

This is the largest category and the most likely to expose user data at scale. Row-level security is the rule that says "user A can only read user A's rows."

What we look at:

  • Is row-level security enabled on every table that contains user data? For Supabase, this is a toggle. For Firebase, it is security rules. For custom backends, it is a question of whether the API handler verifies ownership before returning data.
  • Are the policies actually written, or is RLS just turned on with no policy? Turning RLS on with no policy locks everyone out, which is sometimes correct and sometimes a sign that the founder enabled it during a bug fix and never wrote the access policy. Either way, we flag it.
  • Do the policies cover all access patterns? A common mistake is a policy that scopes reads to the user but leaves writes unscoped, or vice versa.
  • Do related tables share a consistent ownership model? If your posts table is scoped by user_id and your post_attachments table is not, the attachments are still public.
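For custom backends, the equivalent of an RLS policy is an ownership check in every handler that touches user data, with the owner id taken from the authenticated session and never from the request body. A hypothetical Python sketch, with an in-memory table standing in for the database:

```python
# Illustrative data; in a real app this is a database table.
POSTS = [
    {"id": 1, "user_id": "alice", "body": "hello"},
    {"id": 2, "user_id": "bob", "body": "secret"},
]

def get_post(post_id: int, session_user_id: str):
    """Return the post only if the authenticated caller owns it."""
    for post in POSTS:
        if post["id"] == post_id:
            if post["user_id"] != session_user_id:
                # Treat foreign rows as missing (404, not 403), so the
                # response does not confirm that the row exists.
                return None
            return post
    return None
```

The failure mode named above, a scoped posts table with unscoped attachments, is exactly what happens when this check lives in one handler instead of all of them.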

The Moltbook breach and the EdTech exposure we covered earlier this month were both row-level security failures.

How we run an audit

When we audit an app, the work follows the order of the four sections above. We start with the auth flow because exposed tokens are the highest-leverage attack. We move to storage because that is where bulk data lives. We check rate limits because they are cheap and they prevent budget incidents. We finish with row-level security because it is the largest category and requires the most careful walkthrough.

We do this work in three tiers:

Free 5-point checklist. Ten minutes. You answer five questions about your stack and we tell you which of the four categories above are likely exposures on your app. Good as a first filter. If nothing shows up, you sleep better. If something shows up, you have a clear next step.

Paid audit. Fixed fee. We do the full walkthrough of all four categories against your specific codebase. You get a written report listing each finding, where it is, what the exposure is, and how to fix it. Suitable for founders who want to remediate themselves or hand the report to a developer.

Secure and deploy. Fixed fee. We do the audit and then do the remediation. You hand us access, we close every gap we find, and we hand you back a clean codebase ready for production. Suitable for founders who do not want to be in the security details themselves.

Most apps need the free checklist first. Some need the audit. A smaller number need the full deploy. All three are at kingbirdsolutions.com/diagnostic. The earlier you run the first one, the cheaper everything downstream gets.

The pattern across every vibe-coded app we have audited is that the founder cared about safety but did not know where to look. None of these four gaps are hard to find if you know they exist. The hard part is knowing they exist.

If this helped

You can put this thinking to work directly. Run the diagnostic on a stuck product, or book a 30-minute call to talk through your situation.