Lovable Left the Door Open For 48 Days.
A $6.6 billion AI vibe-coding platform left tens of thousands of developer projects wide open — source code, database credentials, customer data, AI chat histories — readable by anyone with a free account. It sat unfixed for 48 days.
Lovable built its $6.6 billion valuation on a simple promise: anyone can build a real app just by describing it. Founders. Teachers. Solo creators. No engineers required. It worked. Then a security researcher made a free account, typed five API calls, and walked into someone else’s source code, their database, their customer records, and every conversation they’d ever had with the AI. The door had been open for 48 days.
What unfolded around Lovable between March and April 2026 isn’t just the story of one platform’s security failure. It’s a stress test of the entire vibe-coding movement — and the results should make anyone who has ever shipped an AI-generated app stop and check their permissions.
How a Free Account Unlocked Everyone’s Data
The Lovable vulnerability has two distinct layers. The first was discovered in early 2025 and assigned CVE-2025-48757. The second — arguably worse — was disclosed publicly on April 20, 2026 by researcher @weezerOSINT. Both stem from the same root cause: Lovable’s AI generates code that looks functional, ships smoothly, and is completely exposed underneath.
Strike One — The Database Was Left Wide Open on Hundreds of Apps
Most Lovable apps are powered by Supabase, a backend-as-a-service platform built on PostgreSQL. Supabase has a security feature called Row Level Security (RLS) — policies that control which users can read or write which rows of data. If RLS is not configured, the database is essentially public to anyone who knows the API key.
The problem: Lovable’s AI was consistently generating apps without properly configuring RLS. In many cases, the Supabase anon_key — a public-facing API key — was embedded directly in the client-side code. With that key and no RLS, anyone could query the database directly and retrieve everything.
```javascript
// What Lovable's AI generated (WRONG)
function checkAccess(user) {
  // Logic is INVERTED — blocks logged-in users, allows anonymous
  if (user.isAuthenticated) {
    return false; // ← blocks real users
  }
  return true; // ← lets attackers in
}

// What it should have been
function checkAccess(user) {
  return user.isAuthenticated; // allow only logged-in users
}
```
Security engineer Matt Palmer discovered this in March 2025 while testing a LinkedIn profile generator built on Lovable. He removed an authorization header, resent the request, and received the platform’s entire user database. He reported it privately. Lovable’s response was to delete their own acknowledgment tweets and deny the issue.
A subsequent scan of 1,645 apps from Lovable’s public Discover page found 303 vulnerable endpoints across 170 projects — roughly 10% of all apps analyzed. Exposed data included Google Maps API tokens, Gemini API keys, eBay authentication tokens, full user databases, payment records, Stripe credentials, and subscription details.
With no credentials beyond a browser, an attacker could view all user records, delete accounts, change credit balances, send bulk emails to 18,000+ users, access course grades and submissions, and retrieve payment data — all without logging in.
Strike Two — The Platform Itself Was Broken for 48 Days
While the RLS issue affected individual apps built on Lovable, the second vulnerability struck the Lovable platform itself. On April 20, 2026, researcher @weezerOSINT published findings that exposed a critical Broken Object Level Authorization (BOLA) flaw in Lovable’s own API.
BOLA is ranked #1 on the OWASP API Security Top 10. It occurs when an API confirms you’re logged in, but never checks whether you actually own the resource you’re requesting. In Lovable’s case, the /projects/{id}/* endpoints verified Firebase authentication tokens — but skipped ownership checks entirely.
Translation: create a free Lovable account, make five API calls, read anyone else’s project.
- Source code: full project source code for every project created before November 2025, including hardcoded API keys, database credentials, and business logic.
- Database credentials: Supabase credentials embedded in source code, providing direct database access — customer records, payment data, PII, API tokens for third-party services.
- AI chat histories: every conversation a developer ever had with Lovable’s AI — including pasted error logs, business logic discussions, and credentials shared mid-session.
- Customer data: real names, job titles, LinkedIn profiles, and Stripe customer IDs from end users of apps built on the platform — not just developers, but their users too.
The researcher demonstrated the severity by accessing the admin panel of Connected Women in AI, a Danish nonprofit with over 3,700 developer edits in 2026 — an actively maintained project. From its source code, they extracted Supabase credentials and queried the live database, pulling real employee records from Accenture Denmark and Copenhagen Business School.
“This is not hacking. This is five API calls from a free account.”
— @weezerOSINT, April 20, 2026
From Discovery to Public Disclosure — A Year of Silence
Matt Palmer identifies the RLS vulnerability in Linkable, a Lovable-built LinkedIn profile generator. Unauthenticated database access confirmed.
Palmer reports the vulnerability privately. Lovable acknowledges receipt — then deletes their acknowledgment tweets and denies the issue publicly.
A Palantir engineer independently discovers and publicly tweets the same vulnerability, demonstrating live extraction of debt amounts, home addresses, and API keys.
Lovable ships “Lovable 2.0” with a new security scanner. The scanner flags the presence of RLS — but not whether it actually works. Researchers call it a false sense of security.
After a 45-day disclosure window with no meaningful fix, CVE-2025-48757 is formally published. 170 apps confirmed vulnerable across 303 endpoints.
The Register reports on a Lovable-hosted EdTech app with 16 vulnerabilities, 6 critical, exposing 18,000+ records from teachers and students at major universities.
@weezerOSINT files a BOLA vulnerability report on HackerOne. Lovable triages it, ships ownership checks for new projects, and quietly leaves all pre-existing projects exposed.
Researcher files a second report documenting additional affected endpoints. Lovable marks it as a duplicate and closes it. Pre-November 2025 projects remain wide open.
@weezerOSINT goes public. Lovable responds: “We did not suffer a data breach.” Attributes the issue to unclear documentation and calls code visibility “intentional behavior, consistent and by design.”
Under public pressure, Lovable issues a second statement apologizing for the first. Admits the February regression was their own coding mistake, that HackerOne partners incorrectly closed the reports, and that the fix was only deployed after public disclosure — 48 days after it was first reported.
AI Writes the Code. Nobody Checks the Locks.
The Lovable story isn’t unique. It’s the canary in the coal mine for an entire generation of AI-built software. Veracode found that 45% of AI-generated code contains security vulnerabilities. The DORA report found a 7.2% decrease in delivery stability for every 25% increase in AI code usage.
The underlying problem is architectural. Most AI app builders — Lovable included — generate a frontend that talks directly to a hosted backend like Supabase or Firebase using an API key baked into the client. That architecture can be made secure, but it requires explicit configuration of access policies. The AI doesn’t do that by default. And the developers who chose AI tools specifically because they’re not security experts are exactly the people least likely to catch it.
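The secure variant of that architecture keeps the privileged key server-side: the client calls your own endpoint, and only the server talks to the database, always adding a filter for the calling user. The sketch below builds such a scoped query; the env var, table, and filter syntax follow Supabase's PostgREST conventions but are illustrative assumptions, not a drop-in configuration.

```javascript
// Server-side only: the service key never ships to the browser.
// SUPABASE_SERVICE_KEY is a placeholder env var for illustration.
const SERVICE_KEY = process.env.SUPABASE_SERVICE_KEY || 'server-only-secret';

// Build the database query the server makes on a user's behalf.
// The owner=eq.<id> filter (PostgREST syntax) scopes results to the caller,
// so even a privileged key only ever fetches that user's rows.
function buildScopedQuery(baseUrl, table, userId) {
  return {
    url: `${baseUrl}/rest/v1/${table}?select=*&owner=eq.${encodeURIComponent(userId)}`,
    headers: { apikey: SERVICE_KEY, Authorization: `Bearer ${SERVICE_KEY}` },
  };
}

const q = buildScopedQuery('https://xyz.supabase.co', 'records', 'alice');
console.log(q.url); // ...rest/v1/records?select=*&owner=eq.alice
```

The design point is that the filter is applied by code the user cannot edit or bypass — the inverse of shipping the key and the query logic to the browser.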
Researcher Taimur Khan coined the term “vibe hacking” to describe how less technically-minded attackers can exploit AI-generated code. Because AI defaults to functionality over security, and because vibe-coded apps all share similar patterns, a single technique works across hundreds of apps simultaneously. You don’t need to be a hacker. You need to know how Lovable works.
Former Facebook CSO Alex Stamos put the client-direct-to-database pattern bluntly: “You can do it correctly. The odds of doing it correctly are extremely low.”
Lovable’s initial response to the April 20 disclosure was to place responsibility on users — their CISO had previously told The Register: “It is at the discretion of the user to implement our security recommendations.” The security community was blunt in return: “Telling developers to review security before publishing doesn’t work when those developers chose AI tools because they’re not security experts. That’s the whole point.”
After significant public pressure following the April 20 disclosure, Lovable issued two statements on X. The first, posted hours after the story broke, attempted to contain the damage:
“We were made aware of concerns regarding the visibility of chat messages and code on Lovable projects with public visibility settings. To be clear: We did not suffer a data breach. Our documentation of what ‘public’ implies was unclear, and that’s a failure on us.
Specifically for public projects, chat messages used to be visible — this is now no longer possible. When it comes to code of public projects: That is intentional behavior. We have experimented with different UX for how the build history is surfaced on public projects, but the core behavior has been consistent and by design. Importantly, for enterprise customers, being able to set visibility to public for new projects has been disabled since May 25, 2025.”
Three things stand out about this statement. First, the explicit denial — “We did not suffer a data breach” — was a direct pushback against the researcher’s framing and every headline that followed. Second, it attributed the problem entirely to documentation confusion, placing implicit responsibility on users who misunderstood what “public” meant. Third, it described code visibility as “intentional behavior” and “consistent and by design” — language that would be significantly walked back within hours.
The community response was swift and negative. Security researchers pointed out that calling a 48-day unfixed disclosure a documentation problem — when the underlying cause was a Lovable-introduced regression — was at best incomplete and at worst misleading. Under sustained pressure, Lovable issued a second statement the same day:
“We’re sorry our initial statement didn’t properly address our mistake. Here’s what a public project on Lovable means, and how we got to where we are today: In the early days, people didn’t know what Lovable was capable of. So we wanted to make it easy to explore what others were building, as a way to spark ideas and lower the barrier to getting started. Like scrolling GitHub or Dribbble: you browse projects to see what’s possible, then go build your own.
When you create a project on GitHub, you can make it private or public. Lovable worked the same. Users had a ‘Public’ or ‘Private’ option right in the chatbox. A public project meant the entire project was public, both chat and code. ‘Just like a public project on GitHub,’ we thought. Over time, we realized this was confusing. Many users thought ‘public’ just meant others could see their published app, not the chat of an unpublished project. That’s reasonable.
On the free tier, users originally couldn’t create private projects. They had to upgrade to a paid plan to do so. In May 2025, we changed this: users on the free tier could choose to make their projects private. For enterprise customers, the public visibility setting was disabled altogether. And in December 2025, we switched to private by default across all tiers. We also retroactively patched our API so public project chats couldn’t be accessed, no matter what.
Unfortunately, in February, while unifying permissions in our backend, we accidentally re-enabled access to chats on public projects. This was reported through our vulnerability disclosure program (via HackerOne). Unfortunately, the reports were closed without escalation because our HackerOne partners thought that seeing public projects’ chats was the intended behaviour. Upon learning this, we immediately reverted the change to make all public projects’ chats private again. We appreciate the researchers who uncovered this. We understand that pointing to documentation issues alone was not enough here. We’ll do better.”
The contrast between the two statements tells you more than either one alone. The first said “not a data breach” and blamed documentation. The second apologized for the first, admitted the regression was their own mistake, and acknowledged their disclosure system failed. What changed between them was public pressure — not new information. Lovable knew about the HackerOne reports when they wrote the first statement.
What neither statement addresses: the 170+ apps with misconfigured RLS that remain exposed. The free-tier users who couldn’t make their projects private before May 2025 and whose data was structurally public for months without meaningful notice. And the question of whether a platform designed to default to public-by-design for cost reasons should be hosting apps that handle sensitive user data at all.
If You’ve Ever Built or Used a Lovable App, Do This Today
If you have ever built anything on Lovable, or if you’ve used any app built on Lovable — especially one created before November 2025 — take these steps immediately:
- Rotate all database credentials. Any Supabase API keys, database URLs, or service role keys embedded in a pre-November 2025 Lovable project should be considered compromised. Rotate them now, before anything else.
- Audit your AI chat history. Lovable stores every conversation. If you ever pasted a database URL, an API key, a password, or any business logic mid-session, assume that conversation has been read.
- Check your RLS configuration. Use Symbiotic Security’s open-source Vibe-Scanner — 62 detection rules against your Supabase RLS setup, no sign-up required.
- Set projects to private. If you’re on a paid Lovable tier, set all sensitive projects to private immediately. Free-tier users cannot do this — which is a separate problem.
- Rotate credentials regardless of Lovable’s fix. Lovable says they reverted the February regression that re-exposed chat histories. That’s positive — but it doesn’t undo 48 days of exposure. Any credentials, keys, or sensitive data in your chat history during that window should be treated as compromised and rotated at the source.
- Move your backend. If your Lovable app handles real user data, consider decoupling your backend entirely. Use Lovable only for the frontend and manage your backend, auth, and data access through a framework where you control the security posture.
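For the RLS check above, one low-tech complement to a scanner is issuing the same anonymous REST read an attacker would attempt. The sketch below only constructs the request — the URL, key, and table name are placeholders you replace with your own project's values — and it should only ever be pointed at a project you own. Whether you send it with fetch or curl is up to you.

```javascript
// Build the anonymous PostgREST-style read an attacker would attempt.
// supabaseUrl / anonKey / table are placeholders: use your own project's values.
function buildAnonProbe(supabaseUrl, anonKey, table) {
  return {
    method: 'GET',
    url: `${supabaseUrl}/rest/v1/${table}?select=*`,
    headers: {
      apikey: anonKey,                    // the public key shipped in client code
      Authorization: `Bearer ${anonKey}`, // no user JWT: fully anonymous
    },
  };
}

const probe = buildAnonProbe('https://xyz.supabase.co', 'anon-key-here', 'users');
console.log(probe.url);
// If RLS is enforced, sending this should return an empty result or an error.
// If it returns your rows, the table is readable by anyone holding the key.
```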
This is not a Lovable-only problem. Cursor, Bolt, Replit, and every other AI coding assistant can generate code with the same class of vulnerabilities. Before shipping any AI-generated app that handles real user data: manually verify RLS policies, never hardcode secrets in client-side code, and treat every AI-generated auth function as guilty until proven innocent.
The People Hurt Most Are the Ones Who Never Heard of Lovable
Lovable’s $6.6 billion valuation was built on democratizing software development. That mission is real and valuable. Millions of people who couldn’t previously build software now can. But democratizing development without democratizing security awareness creates a new class of risk — not for the platforms, but for the end users of apps built on them.
The 18,000 users exposed in that EdTech app didn’t choose Lovable. They signed up for a course platform. The professionals whose records were pulled from Connected Women in AI didn’t know their data sat on a platform with an unfixed BOLA bug for 48 days. The accountability chain breaks down exactly where the users are least able to defend themselves.
The fix isn’t complicated technically. Secure-by-default configurations. Mandatory RLS. Ownership checks on every endpoint. Automatic secret rotation. Post-mortems that are public, not deleted. These are solved problems. The question is whether AI coding platforms will implement them before regulators force their hand.
“Security is bolted on after the damage is done. Ship secure-by-default configurations. Defaults should follow least-privilege principles so new projects start from a safe baseline.”
— Symbiotic Security, March 2026
The liability dimension of this breach is equally significant — and legally novel. The BOLA vulnerability exposed not just user data but something courts have not yet fully grappled with: the complete AI conversation histories developers shared with the platform, including privileged business logic, credentials, and in some cases what may constitute attorney work product.
“Existing case law establishes that there is no privacy in what a user communicates to a chatbot, which is consistent with the broader principle that consumers generally lack a reasonable expectation of privacy in digital services where providers can access user inputs. More notably, entering attorney work product into a chatbot may result in a waiver of attorney-client privilege unless — perhaps — it is done at the direction of counsel or within a controlled, private AI system.
This situation represents a straightforward extension of that principle: information that was already ‘public’ in a legal sense has now become public in a practical sense. In this context, Lovable may have a pretty significant lawsuit on their hands, arriving at a particularly inopportune moment as Claude advances with its latest updates.”
— Rodman
Rodman’s framing cuts to something most technical post-mortems miss: the legal exposure here isn’t only about user records or API keys. Every developer who pasted sensitive business discussions, client information, or strategic context into Lovable’s AI chat window — trusting it was private — may now be staring at a privilege waiver problem on top of a data breach. Lovable has since issued a statement acknowledging the regression was their own mistake. But acknowledgment and remediation are not the same thing, and the window during which that data was accessible was 48 days long.
Every app that promises to ship in minutes carries a hidden cost. This week, we found out what it is.
