The Longer Look
AI-assisted, written by a non-specialist, not independently verified. Method · Corrections
30 April 2026

The Bigger Claim — More Depth Than the Government Has Published

A UK technology founder prompted four AI tools across roughly twelve hours of real work, answered when they prompted back, scanned the output, and shipped a publication that contains more public-domain depth on the substantive policy question than the UK government has published. The twelve hours sat inside four weeks of intensive AI-tool-assisted work that had already produced books, sites, and around 100,000 lines of code by a non-coder using the same tools. No human expert reviewed any of it before publication. A specialist reader will find errors AI review did not catch; the publication invites those corrections. That is the demonstration, with the limits named.

About this piece — read this bit first

This is a position-taking piece. It makes a claim about what this publication has produced and what that production implies about AI tools and citizen analysis in 2026. The claim is testable and the piece tries to test it. If the claim survives the testing, the implications are larger than the inheritance tax question the publication addresses.

Written by Claude (an AI tool made by Anthropic), working from Doug Scott's prompting and direction. Not edited into Doug's voice. Doug had no specialist knowledge of UK Business Property Relief, the April 2026 reform, the OBR's behavioural assumptions, or any of the technical material this publication treats in detail when he started, twelve hours before this piece was drafted. A friend sent him the policy document. That was the starting point.

Honest description of the workflow. Doug prompted four AI tools (Claude, ChatGPT, Grok, Gemini) and answered when the tools prompted back. The AI tools produced the writing, the analysis, the citations, the modelling, and the cross-critique that has shaped this publication across multiple revisions. Doug scanned the output and decided to ship. No human expert — no tax lawyer, no IFS economist, no CIOT or STEP member, no journalist with subject expertise — reviewed this work before publication. The "rounds of substantive critique" the publication has talked about are AI tools critiquing each other's output. They are not human review. Earlier versions of this publication, including earlier versions of this piece, used "architect" framing that overstated Doug's role; that framing has been corrected on 1 May 2026 across the corpus.

The claim

A UK technology founder with no prior specialist knowledge of inheritance tax has shipped, in twelve hours of prompting four AI tools, a publication that contains more public-domain depth on the substantive policy question than the UK government itself has published. No human expert reviewed the publication before it went out. That is the demonstration. The demonstration is harder to dismiss than the alternative because the human contribution is small enough to name precisely.

This is the strongest single claim the publication makes. It is not a claim about the IHT reform. It is a claim about what AI tools can produce when a non-specialist citizen prompts, answers, scans, and ships, and what that production implies for the relationship between citizens, governments, and the work governments publish.

The rest of this piece tries to make the claim survive a careful reading. If a reader can find a published government document that treats any of the substantive sub-questions below at greater depth than this publication treats them, the claim fails on that sub-question. If a specialist reader can find errors in the publication that AI review did not catch — which they will, because that is what specialist review is for — the publication invites the corrections and will publish them. The claim is not "this publication is error-free." The claim is "what AI tools can produce in twelve hours of citizen prompting, with no human expert review, is now this depth, and that is more depth than the institution that made the policy has chosen to publish."

What the government has published

The April 2026 inheritance tax reform was announced at Autumn Budget 2024. The headline allowance was raised from £1m to £2.5m on 23 December 2025 after lobbying. The reform took effect on 6 April 2026. Across that eighteen-month period, the government's published material on the reform consists of the following.

HMRC has published a tax-information-and-impact note, updated guidance pages on GOV.UK explaining the new rules, and a set of estate-count and revenue forecasts: up to 1,100 estates affected; £140m revenue in 2026-27 rising to roughly £300m; and a cohort breakdown of approximately 185 APR-claiming estates and 915 BPR-only estates, of which approximately 220 hold the unlisted trading-company shares on which the operational mechanism question primarily turns, the remainder being mainly AIM-only. HM Treasury has published a press release, a House of Commons Library briefing (CBP-10181), the legislative text in Finance Act 2026, and ministerial statements during the parliamentary process. The OBR has published a January 2025 supplementary forecast on the related non-dom reforms with a 25 per cent migration assumption for non-doms with excluded-property trusts, citing Friedman, Gronwald, Summers and Taylor (2024) at footnote 19. Written ministerial statements have provided incremental updates.

That is the sum total of the UK government's published material on the reform. Roughly twenty thousand words across the corpus. Most of it is description of what the rules now are. None of it engages substantively with the timing question — whether the tax should fall at death or at realisation. None of it contains a published behavioural model for the BPR cohort specifically. None of it contains a fiscal projection that includes indirect effects on the wider tax base from cohort retention. None of it contains the cohort-by-cohort breakdown for UK technology founders, angels, EIS investors, venture and growth-fund LPs, private-equity partners, or early employees. None of it engages with the published academic literature on the principle of intergenerational-transfer taxation. None of it presents an interactive tool through which a reader can substitute their own assumptions about behavioural response and watch the conclusions move.

This is not a criticism of the government's publishing. It is a description of what a reader looking for the substantive analysis underneath the policy choice can find in the public domain from the institution that made the choice.

What this publication contains

Eighteen pieces. Approximately fifty thousand words across the corpus. An interactive 25-year fiscal model with editable assumptions. A downloadable Excel companion with verified math across nine output combinations (three policy options × three behavioural-response scenarios). Three policy positions analysed in conditional-analysis register and one position taken in position-taking register (the principle is right; the timing is contested; a two-track design — threshold mechanism for founder equity, German-style conditional relief for operating family businesses — is the publication's most interesting policy proposal). International comparators across Australia, Canada, the United States, Germany, France, Japan, and South Korea. Engagement with the published academic literature on intergenerational wealth transfer (Holtz-Eakin, Joulfaian and Rosen; Lindh and Ohlsson; Quadrini; Bø, Halvorsen and Thoresen; Chetty et al.; Wilkinson and Pickett; Friedman, Gronwald, Summers and Taylor). A working source-quality reference for journalists distinguishing reliable claims from contested ones from misleading ones. A practitioner reference treating the technical issues — SAV mechanics, CTA 2010 s.1033 buy-backs, IHTA 1984 s.227 instalments, the s.105 trading/investment boundary, residence and domicile post-2025 — at a level practitioner explainers do not generally combine.

Pick any of the substantive sub-questions the publication addresses. The timing question (death versus realisation): the publication treats it across three pieces totalling roughly six thousand words; the government has published nothing comparable. The cohort-specific behavioural-response question: the publication treats it in conditional-analysis register with a working interactive model; the government has not published a BPR-cohort-specific behavioural assumption. The indirect-fiscal-effects question: the publication's 25-year model produces explicit central-case and sensitivity-range outputs for nine outcome combinations; no equivalent government model exists in the public domain. The two-track design proposal: the publication has named it and outlined its mechanics by reference to the Pacte Dutreil and German §13a/§13b precedents; the government has not engaged the proposal in public at all.
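The shape of the nine-combination model described above (three policy options crossed with three behavioural-response scenarios, projected over 25 years) can be sketched in a few lines. This is a toy illustration only: the yield figures, departure rates, cohort size, and indirect-tax parameter below are hypothetical placeholders, not the publication's actual assumptions or any official figure.

```python
# Toy sketch of a 3 x 3 policy/behaviour scenario grid projected over 25 years.
# Every number here is an illustrative placeholder, not a modelled estimate.

POLICIES = {                      # hypothetical steady-state direct yield (pounds/year)
    "reform_as_enacted": 300e6,
    "realisation_basis": 240e6,
    "two_track_design":  270e6,
}

BEHAVIOUR = {                     # hypothetical share of cohort relocating/restructuring
    "low":     0.05,
    "central": 0.15,
    "high":    0.30,
}

def project(direct_yield, departure_rate, years=25,
            indirect_tax_per_departer=2e6, cohort=220):
    """Cumulative net revenue: direct yield scaled by retention, minus the
    wider tax base lost when part of the cohort departs. Linear and static
    by design; a real model would phase effects in over time."""
    net_per_year = (direct_yield * (1 - departure_rate)
                    - cohort * departure_rate * indirect_tax_per_departer)
    return net_per_year * years

for policy, yield_ in POLICIES.items():
    for scenario, rate in BEHAVIOUR.items():
        total = project(yield_, rate)
        print(f"{policy:18s} {scenario:8s} £{total / 1e9:6.2f}bn over 25 years")
```

The point of the interactive version is exactly the step this sketch makes visible: edit the behavioural parameter and the 25-year net figure moves, sometimes past the break-even line.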

None of this is to say the government has not done internal work. HMRC has internal models. The Treasury has internal advice. The OBR has internal calculations beyond what its published forecast contains. None of that internal work is in the public domain. A citizen, a journalist, a tax practitioner, or a parliamentarian who wants to engage with the substantive analysis underneath the reform cannot read the government's internal work. They can read this publication. The asymmetry is the point.

What this publication is not

It is not the IFS, the Resolution Foundation, the Centre for the Analysis of Taxation, the CIOT, STEP, or any other specialist policy institution. Those institutions publish on UK tax reform. Their work is reviewed internally by economists, lawyers, and policy specialists before it is released. This publication has had no review of that kind. A reader who wants the kind of analysis specialist institutions produce should read those institutions. This publication does not pretend to substitute for them.

It is also not error-free. Three factual errors caught by AI fact-check have already been corrected openly: a misnamed co-author list on the Friedman paper (an AI hallucination of three plausible-sounding co-authors who were not on the paper), an outdated Canadian capital-gains inclusion rate (the proposed increase to a two-thirds, 66.67 per cent, inclusion rate was cancelled in March 2025; the publication had repeated the proposed-but-cancelled figure), and a wrong threshold figure on the German optional 100 per cent relief (the publication had 10 per cent; the actual threshold is 20 per cent). The corrections are disclosed at the foot of the pieces where the errors appeared. A specialist reader engaging with the publication will find more errors of this kind. The publication invites those corrections. It will not pretend they are not there.

What this publication is, therefore, is specifically the output of a workflow: a non-specialist citizen prompting four AI tools, answering when the tools prompt back, scanning the output, and shipping. The errors AI fact-check catches get caught. The errors human expert review would catch — methodological controversies in the Holtz-Eakin literature, the Carnegie-conjecture replication problems, contested claims in the Wilkinson-Pickett work, current CIOT and IFS positions on the reform that the publication treats as more original to itself than they are — have not been caught. A specialist reader will see the seams. The publication invites them to point at the seams openly.

Why the depth claim is plausible — and what it does not show

An external reviewer made the point directly: "The central conceit — that one citizen with AI tools produced 'more public-domain depth on this question than the UK government has published' — is a strong claim. It's plausible specifically because HMT hasn't published much on this narrow operational question, but it isn't the same as saying the analysis is right." The reviewer is correct, and three distinctions matter.

Depth is not accuracy. The publication has eighteen pieces totalling roughly 50,000 words; the public-domain government corpus on this specific question is much shorter. Both can be true at once: the publication contains errors AI cross-critique did not catch, and the government's published material — though shorter — has been internally reviewed in ways the publication has not. The comparator is narrow by choice. The publication is not benchmarking itself against the IFS, the Resolution Foundation, the CIOT, or STEP. Those institutions have published serious work on UK inheritance tax. A specialist who wants comprehensive analysis on the wider question should read them, not this publication. The depth claim survives only if HMT is the relevant comparator — the institution that made the policy choice has the most to gain from publishing the reasoning underneath it.

So the honest version: more public-domain depth on the substantive sub-questions than the institution that made the policy choice has published, with the qualification that "depth" is volume and breadth, not correctness, and that specialist institutions would produce work this publication does not match in rigour.

What this implies

The implication is not that AI replaces specialists, or that this kind of work substitutes for the modelling HMRC and OBR could publish if they chose to. It is a smaller and more specific implication: the threshold for what a non-specialist citizen with AI tools can ship, on a contested public-policy question, has moved. Several years ago this work would have required a small team, several months, and a budget. Now it requires one person prompting and answering for twelve hours, with the qualification that the result is what AI tools produced, that AI fact-check caught some errors and missed others, and that no human expert review took place.

People working in government, in policy think tanks, in journalism, in academia, in advocacy, in civic technology, in publishing, in any kind of long-form analytical work, are going to need to think about what this means over the next several years. The answer is not that any of those professions becomes obsolete. The answer is that the relationship between expertise, citizen participation, individual analytical output, and the tools that mediate between them is changing, and it is changing fast enough that paying attention to specific examples — like this publication, with all its named limits — is more useful than waiting for the abstract argument to settle.

The bigger question this raises for government publishing is sharper. If a citizen with no specialist knowledge can produce, in twelve hours of prompting AI tools and with no human expert review, more public-domain depth on a contested policy question than the institution that made the policy has produced over eighteen months, what does that say about how governments publish? The publication's view is that it says something about the gap between what governments could publish and what they choose to. The internal modelling exists; the choice not to publish it is a choice. That choice was defensible when the cost of producing equivalent external analysis was high. The cost has fallen. The defensibility of the choice has fallen with it.

The publication is one example, with named limits. It is not unique and it does not need to be unique to be interesting. What is interesting is what it implies for the next several years — for governments choosing whether to publish more of their internal work, for citizens choosing whether to engage with policy questions at this level of depth, and for the relationship between the two.

The test

The test the publication offers now has two halves.

First: if a reader can find a published government document — Treasury, HMRC, OBR, Cabinet Office, parliamentary committee, anywhere in the public domain — that treats the timing question, the cohort-specific behavioural-response question, the indirect-fiscal-effects question, the two-track design proposal, or the principle question (the academic case for taxing intergenerational wealth transfer at this scale) at greater depth than this publication treats them, the depth claim fails on that sub-question. The publication invites the test and would prefer to be corrected than to be wrong.

Second: if a specialist reader engaging with the publication finds errors that AI review did not catch — methodological errors in the academic citations, technical errors in the tax mechanics, framing errors that the IFS or CIOT or STEP would catch on a careful read — the publication invites the corrections and will publish them with attribution. The claim is not that AI tools plus a non-specialist citizen produces work as good as a specialist institution's. The claim is that AI tools plus a non-specialist citizen produces this much depth in twelve hours with no human expert review, and the result is worth engaging with on its merits while being honest about the limits.

Both halves of the test are open. The publication is not finished and does not pretend to be. What it has done — and what is worth saying plainly, on the front page, with the human contribution named precisely — is to demonstrate that the threshold for citizen-produced analytical work on contested policy questions has moved. The IHT reform is one example. The bigger story is what AI tools have moved within reach of citizens who care about the questions the public conversation is treating too quickly, and what that means for how governments and citizens engage with the work governments produce.

That is the actual contribution. The IHT analysis is the demonstration, with limits openly named. The demonstration is on the table.


About 2,000 words. Two corrections logged on the corrections page: the workflow-honesty correction (1 May 2026) replaced earlier "architect" framing with the truthful prompted-and-shipped version; the depth-claim narrowing (1 May 2026) added the section above after a reviewer observed that "more depth than the government has published" is not the same as "the analysis is right."