AI-assisted, written by a non-specialist, not independently verified. Not tax, legal, or financial advice. Author has a personal interest.
11 May 2026

Why Lovable Becomes Worthless

A structural argument about where value accrues in the AI stack — and where it does not.

Standing. The author is a UK technology founder and has invested directly and indirectly in hundreds of very-early-stage UK tech companies. He does not, as far as he is aware, hold any direct or indirect position in Lovable, Bolt, v0, or Replit. He has commercial interests that are affected, in directions that partly cancel and partly do not, by whether the application-layer wrapper category becomes durable or commoditises. Full disclosure on the about page.

There is a specific category of AI startup that looks like a rocket ship right now and is, in fact, a sandcastle at low tide. Lovable is the cleanest example, but the argument applies equally to Bolt, v0, Replit Agent, and the dozen other “describe an app, get an app” products that raised at unicorn valuations in 2024 and 2025. They will not exist as standalone businesses in three years. Here is why.

What Lovable actually sells

Strip away the marketing and Lovable is three things stacked on top of someone else’s model. There is a system prompt that turns “build me a recipe app” into structured instructions a frontier model can execute. There is a code execution sandbox that runs the output, shows you a preview, and lets you iterate. And there is a deployment pipeline that takes the resulting app and pushes it somewhere a customer can actually visit. The model — Claude, usually — does the hard part. Lovable does the wrapping.
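The three-layer structure can be sketched as a loop. This is a deliberately crude illustration, not Lovable's actual architecture: every function name here is invented, and stubs stand in for the model call, the sandbox, and the deploy pipeline.

```python
# Hypothetical sketch of a "describe an app, get an app" wrapper.
# All names are invented for illustration; none come from Lovable.
from dataclasses import dataclass


@dataclass
class RunResult:
    ok: bool
    error: str = ""


def call_frontier_model(prompt: str) -> str:
    # Stub: in reality, an API call to a frontier model. This is the
    # hard part, and the wrapper company does not own it.
    return "<html>generated app</html>"


def run_in_sandbox(code: str) -> RunResult:
    # Stub: in reality, executes the generated code and captures failures.
    return RunResult(ok=True)


def deploy(code: str) -> str:
    # Stub: in reality, pushes the app to hosting and returns a public URL.
    return "https://example.invalid/app"


def build_app(user_request: str) -> str:
    # Layer 1: a system prompt turns "build me a recipe app" into
    # structured instructions the model can execute.
    prompt = "Return a complete, runnable web app for:\n" + user_request
    code = call_frontier_model(prompt)

    # Layer 2: the sandbox runs the output; failures loop back for iteration.
    result = run_in_sandbox(code)
    while not result.ok:
        code = call_frontier_model(f"Fix this error: {result.error}\n{code}")
        result = run_in_sandbox(code)

    # Layer 3: the deployment pipeline publishes the result.
    return deploy(code)


print(build_app("build me a recipe app"))
```

The point of the sketch is how little of it is proprietary: one prompt template, one retry loop, one deploy call, all wrapped around an API the wrapper company rents.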

This is a real product. It solves a real problem. The problem is that none of the three layers is defensible, and the company that owns the underlying model can replicate all three of them in an afternoon.

The wrapper problem

Every wrapper company eventually faces the same question: what stops the model provider from eating you? For most of software history this question had good answers. The cloud provider doesn’t want to be in the CRM business. The database vendor doesn’t want to build dashboards. Specialisation was protective. AI labs do not respect this boundary, because their product is general-purpose intelligence and every vertical application is, from their perspective, just a prompt and some UI.

Anthropic shipped Artifacts. OpenAI shipped Canvas. Both let you generate working applications inside the chat interface, with previews, with iteration, with no separate subscription. The labs are not pretending they will leave the application layer alone. They are actively colonising it, because the application layer is where the revenue lives and the model layer is racing toward commodity pricing. Lovable’s entire surface area is now table stakes inside ChatGPT and Claude.

The capability problem

Lovable’s second moat is supposed to be that it does this one thing better than a general chatbot does. The prompting is tuned. The scaffolding is opinionated. The output is more reliable. This is true today and will be untrue within twelve months.

Frontier models improve on a curve that flattens specialised wrappers. The gap between “Claude in a chat window” and “Claude inside Lovable” is currently maybe a 20% reliability difference on app generation tasks. Next year it will be 5%. The year after that, Claude’s native interface will be better at building apps than Lovable is, because Anthropic has more telemetry, more compute, more researchers, and a direct financial interest in closing that gap. Lovable’s engineering team, however talented, cannot outrun a frontier lab on the lab’s home turf.

The substitution problem

Even setting the labs aside, Lovable faces a second commoditisation pressure from below. Open-weight models — Llama, DeepSeek, Qwen, whatever ships next quarter — are now good enough to build simple CRUD apps. The scaffolding required to wrap them is open source. Within a year, a competent developer can self-host a Lovable equivalent on a $200/month GPU instance and pay nothing per generation. The Kigali teenager I keep thinking about can build and sell a Lovable clone targeted at their local market, in their local language, at a tenth of the price, with margins Lovable cannot match because Lovable is paying retail API rates to Anthropic.
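The margin asymmetry is simple arithmetic. Every number below is invented for illustration (the essay specifies only the $200/month GPU figure); the structure, not the figures, is the point: the wrapper pays a per-generation retail rate while the self-hosted clone pays a flat cost that amortises toward zero.

```python
# Illustrative cost comparison only; the per-generation API rate and
# volume are hypothetical numbers, not real prices.
api_cost_per_generation = 0.50   # hypothetical retail API rate per app build
gpu_flat_cost = 200.0            # the essay's $200/month self-hosted instance
generations_per_month = 5_000

wrapper_cost = api_cost_per_generation * generations_per_month
clone_cost = gpu_flat_cost  # flat, regardless of volume

print(f"wrapper: ${wrapper_cost:,.0f}/mo, clone: ${clone_cost:,.0f}/mo")
print(f"clone marginal cost: ${clone_cost / generations_per_month:.2f}/generation")
```

At any meaningful volume, the clone's marginal cost per generation approaches zero while the wrapper's stays fixed at the retail rate, which is what makes "a tenth of the price" feasible.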

This is the part the current valuations do not price in. Lovable is not just competing against the labs above it. It is competing against every developer who can string together an open-weight model, a code sandbox, and a Vercel deploy hook. That competition arrives faster than people expect because the AI itself accelerates the cloning.

The user problem

There is one more layer, and it is the one that ultimately decides this. Lovable’s customers are not loyal. They cannot be loyal. The product is a thin layer over a model, and the customer knows it. The moment a competitor offers the same thing for half the price, or the moment ChatGPT does it for free, the switching cost is approximately one signup form. Software-as-a-service businesses survived on switching costs — your data is in our system, your team is trained on our UI, your integrations point at our endpoints. Lovable has none of these. The output is an app you own and host yourself. There is nothing to switch away from because there was never anything locking you in.

What the cap table assumed

Lovable raised on a story about a new category of software creation, with the implicit assumption that being early to that category would compound into a durable position. This is the SaaS playbook applied to a market that does not have SaaS dynamics. Figma got durable because design files lived in Figma and teams collaborated inside it. Notion got durable because companies put their knowledge into it and could not easily extract it. Lovable produces exports. The whole point of the product is that you leave with something. There is no equivalent of the Notion workspace or the Figma file pulling you back in.

The valuation only makes sense if you believe Lovable becomes the default interface for non-technical people building software, and stays that way, and the labs decide not to compete, and open-source stays behind, and switching costs somehow materialise. Each of those is a coin flip. Multiply them together and you get a low single-digit probability supporting a multi-billion dollar valuation.
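Taking the "coin flip" framing literally, the arithmetic is five independent 50/50 conditions multiplied together:

```python
# The essay's five survival conditions, each treated as an independent
# 50/50 event per its "coin flip" framing. Illustrative, not a model.
conditions = 5          # default interface, stays default, labs abstain,
                        # open source lags, switching costs materialise
p_each = 0.5
p_all = p_each ** conditions

print(f"{p_all:.1%}")   # 3.1% — the "low single-digit probability"
```

Even doubling each condition's odds to 70% only lifts the joint probability to about 17%, so the conclusion is not sensitive to the exact coin-flip assumption.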

What happens next

The endgame is not dramatic. Lovable does not collapse. It gets quietly outcompeted on three fronts simultaneously — by the labs above it bundling the same capability into their flagship products, by open-source clones below it offering the same capability for free, and by vertical specialists carving out the segments where domain knowledge matters more than generic app generation. Revenue plateaus. The next funding round happens at a flat valuation, then a down round. The team is acquired by one of the labs for the talent. The product is sunset eighteen months later.

This is what commoditisation looks like in practice. It is not a crash. It is a slow leak in a market that briefly believed wrappers could be platforms.

The broader lesson

The interesting question is not whether Lovable specifically survives. It is what the existence of products like Lovable tells us about where value accrues in the AI stack. The answer, increasingly, is not at the application layer for general-purpose tools. Value accrues at the model layer if you can stay at the frontier, at the infrastructure layer if you sell compute, and at the user layer if you bring domain expertise the model does not have. The middle — generic wrappers selling access to someone else’s intelligence with a prettier interface — is the worst place to be. It is too far from the model to capture the margin and too far from the user to capture the loyalty.

Lovable is not the villain of this story. It is the canary. The same logic flattens every company whose pitch deck includes the phrase “we use AI to…” without a serious answer to the question of what happens when the AI does the using itself.

For the companion structural argument about productivity and the cohort the UK tech ecosystem depends on, see The Race Against Itself. For the argument about why venture economics are themselves a particular kind of bet, see Venture Capital Is Good for Society and Bad for Most Founders and The 33%. For the publication’s standing on these questions, see the about page.