Vibe coding security
Five thousand vibe-coded apps just leaked their users' data. There was no breach. There was no hacker.
RedAccess scanned the open web this month and found public S3 buckets, unprotected Supabase tables, and open API endpoints exposing medical records, Fortune 500 documents, home addresses, and hotel reservations. The AI that built the apps did not configure storage permissions, and the founders did not know to check.
Those applications are on the open internet right now with real user data sitting in plain view. Not behind a login. Not behind a paywall. Not protected by anything. Anyone with a browser can read it. They were built by founders using AI vibe coding tools: the AI generated the front end, the back end, and the storage layer, and nobody configured the permissions, because nobody knew the permissions were a thing.
This is the part of vibe coding that almost nobody talks about, because it does not look like failure. The app works. Users sign up. Features load. The founder ships, posts about the launch, and turns to the next feature. The fact that the database is wide open is invisible from the outside. It is also invisible from the inside, because the AI that built the app does not produce a security report.
What the scan actually found
The exposed data falls into four categories, and the same four repeat across the five thousand applications in the dataset.
Public S3 buckets. Static files, uploaded user content, generated documents. Configured for "public read" because the AI wrote the upload code that way, and the founder accepted the default. Anyone with the bucket URL can list every file inside.
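You can test the listing yourself from any machine, with no credentials at all. A minimal sketch in TypeScript (Node 18 or later); the bucket name is a placeholder for whichever bucket your upload code writes to:

```ts
// Probe an S3 bucket for anonymous listing. A 200 response containing
// <ListBucketResult> XML means anyone on the internet can enumerate every file.
const bucket = "my-app-uploads"; // placeholder: your bucket's name here

const res = await fetch(`https://${bucket}.s3.amazonaws.com/?list-type=2`);
const body = await res.text();

if (res.ok && body.includes("<ListBucketResult")) {
  console.log("Bucket is publicly listable:", body.slice(0, 300));
} else {
  console.log(`Listing blocked (HTTP ${res.status})`); // 403 is what you want
}
```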
Unprotected Supabase tables. Tables without row-level security enabled. The most common pattern. The app reads the table through an authenticated client, so it looks like each user can only see their own rows. But the table itself has no rule that enforces that. An attacker with the project URL and the public anon key, both of which ship in the front-end bundle, can read every row from any account.
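Here is roughly what that read looks like, sketched with the supabase-js client. The URL, key, and table name below are placeholders, but the real values are not secrets: every deployed front end ships them to the browser.

```ts
import { createClient } from "@supabase/supabase-js";

// Placeholder project URL and anon key. An attacker lifts the real ones
// straight out of your app's JavaScript bundle.
const supabase = createClient(
  "https://abcdefgh.supabase.co",
  "public-anon-key",
);

// No session, no filter. With RLS off, this returns every row in the
// table; with RLS on and no permissive policy, it returns nothing.
const { data, error } = await supabase.from("profiles").select("*");
console.log(error ?? `anon client read ${data?.length ?? 0} rows`);
```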
Open API endpoints. Endpoints that accept any request and return data without checking who is asking. Often these are admin endpoints or debug endpoints that the AI generated during development and never restricted. A 30-second curl request returns the user list.
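The curl test translates to a few lines you can run against your own API: replay a request your app makes, but strip every credential. The endpoint path below is a placeholder; take real ones from your browser's network tab.

```ts
// No cookies, no Authorization header. The only question is whether the
// server notices.
const res = await fetch("https://api.example.com/admin/users");

console.log(`HTTP ${res.status}`); // 401 or 403 is the answer you want
if (res.ok) {
  // Data returned to an anonymous caller is the leak itself.
  console.log(await res.text());
}
```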
Misconfigured storage permissions. A category that overlaps with the first two but covers Firebase, Cloudflare R2, MongoDB, and the other services AI tools pull from. They default to permissive settings and are never explicitly secured.
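Firebase's Realtime Database makes the pattern concrete: when the rules are left open or in test mode, the entire tree is readable as JSON at a single URL. A sketch with a placeholder project id:

```ts
// RTDB's REST API serves any path as JSON by appending .json.
// shallow=true returns only the top-level keys, enough to confirm exposure.
const res = await fetch(
  "https://my-app-default-rtdb.firebaseio.com/.json?shallow=true",
);

if (res.ok) {
  console.log("Database is world-readable:", await res.json());
} else {
  console.log(`Read denied (HTTP ${res.status})`);
}
```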
What was inside this data: medical records, Fortune 500 internal documents, home addresses, hotel reservations, financial documents, identity records. Not from theoretical edge cases. From real applications that real users signed up for, in the last six months.
Why the AI does not catch this
The AI that built the application is a code generator. It is excellent at producing functional code that handles a feature: "let users sign up," "let users upload a profile picture," "let users see their order history." It produces working code on the first try most of the time.
What it does not produce is a security boundary. Security in modern apps lives in a few places: row-level security policies in your database, IAM rules on your storage buckets, rate limits on your endpoints, authorization checks inside each handler. These are configuration choices, not code, and the AI builds the code without making the configuration choices.
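A hypothetical example of that gap, in the Supabase style these tools favor. The first function is the feature code the AI writes, and it works. The second is what the missing configuration allows:

```ts
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://abcdefgh.supabase.co", "public-anon-key");

// AI-generated feature code (hypothetical): "let users see their order
// history." Looks scoped to the current user, and behaves that way in the app.
async function getOrderHistory(userId: string) {
  return supabase.from("orders").select("*").eq("user_id", userId);
}

// But .eq() is a filter the client asks for, not a rule the database
// enforces. Without an RLS policy on "orders", nothing stops anyone from
// sending the same query with the filter removed.
async function whatAnyoneCanRun() {
  return supabase.from("orders").select("*"); // every user's orders
}
```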
The result is software that looks fine from the user-facing side, runs the feature the founder requested, and quietly leaves the storage layer open behind it. The founder never sees this. The AI never flags it. The first person to discover the gap is usually a researcher running a scan, a journalist writing a story, or an attacker monetizing the data.
What most founders still do not know
Most of the five thousand founders whose apps are in the RedAccess dataset have not been notified. The scan is public, but the matching of apps to founders is not. Some will find out from a journalist. Some will find out from a customer. Some will only find out when the data shows up on a forum.
If you built an app with a vibe coding tool in the last twelve months, the relevant question is not whether you are doing security right. It is whether you have looked at all. The default configuration of every major no-code and AI-build platform is permissive, because permissive is the configuration that works for a demo. Going from "works for a demo" to "safe for real users" is a deliberate, manual step, and almost no AI build flow includes it.
A self-check you can do this afternoon
There are five questions that catch the majority of the exposure patterns above. None of them require a security background to answer.
- Are your storage buckets set to public or private? Check the console for whichever service stores your user uploads. If you see "public read" anywhere in the bucket policy, that is the first leak.
- Does your database have row-level security enabled? If you use Supabase, check each table in the dashboard's Table Editor, which flags tables with RLS disabled. If it is off on a table that contains user data, every row is readable by every authenticated user, and often by anonymous users too.
- What endpoints exist on your API? List them. For each one, ask: does this check the user's identity before returning data? If the answer is "I do not know" or "I think the framework does it," that is the second leak.
- Are there rate limits on your sign-in and signup endpoints? Without these, a bot can spray credentials or enumerate accounts in minutes. AI tools rarely add rate limits unless explicitly asked; the sketch after this list shows one way to add them.
- Are admin or debug endpoints still live in production? AI build tools often add a "list all users" or "reset password" endpoint during development. These get forgotten.
If any of those five answers gives you pause, you are in the category the scan covers.
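On the rate-limit question, here is a minimal sketch assuming an Express backend and the express-rate-limit package; managed platforms usually have an equivalent setting instead, and the numbers below are starting points, not gospel.

```ts
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

// Throttle by IP on the endpoints a bot would hammer.
const authLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  limit: 10,                // 10 attempts per IP per window
  standardHeaders: true,    // tell well-behaved clients to back off
});

app.use(["/login", "/signup", "/reset-password"], authLimiter);
```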
What to do if you are in it
The cheapest move is the fastest. Lock down the storage buckets and turn on row-level security today. Even if you do nothing else this week, that closes the two largest leak categories. The next move is a real review of the auth flow and the endpoint list. Most founders cannot do this themselves, and most AI tools cannot do it for you, because the AI does not know what is sensitive in your particular product.
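If the buckets are on S3, the lockdown is one API call. A sketch using the AWS SDK v3, with a placeholder bucket name; the one caveat is that if your product deliberately serves public files, move those behind signed URLs or a CDN before flipping this on.

```ts
import { S3Client, PutPublicAccessBlockCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // uses your normal AWS credentials

// Block all four public-access paths on the bucket. RLS has no SDK
// equivalent here: enable it per table in the Supabase dashboard (or run
// ALTER TABLE ... ENABLE ROW LEVEL SECURITY), then add policies for the
// reads your app actually needs.
await s3.send(
  new PutPublicAccessBlockCommand({
    Bucket: "my-app-uploads", // placeholder
    PublicAccessBlockConfiguration: {
      BlockPublicAcls: true,
      IgnorePublicAcls: true,
      BlockPublicPolicy: true,
      RestrictPublicBuckets: true,
    },
  }),
);
```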
That is the gap a security review fills. At Kingbird Solutions, the free 5-point diagnostic walks you through the same five questions above with your specific stack and your specific data model, and tells you which of the four exposure categories you are in. It takes about ten minutes. If we find something, the next step is either a written audit with findings, or a hardening sprint where we fix the gaps and hand you a clean codebase.
The pattern across the five thousand exposed apps is not that the founders were careless. It is that the AI built the parts they could see and skipped the parts they could not. The fix is to look at the parts they could not see, before someone else does.
If this helped
You can put this thinking to work directly. Run the diagnostic on a stuck product, or book a 30-minute call to talk through your situation.