How This Was Made — Methodology
How the analysis was produced. The since-retracted architect-and-builders framing, the iteration history, the disclosures.
About this work
Doug Scott is not a lawyer or an accountant. He is a founder. A friend shared a policy document about the April 2026 inheritance tax reform with him, and he decided to see what AI tools could do with it. He prompted four AI tools — Claude, ChatGPT, Grok, Gemini — across multiple parallel sessions with simple continuation-style cues, and answered when the tools prompted back. The AI tools produced the writing, the analysis, the citations, and the cross-critique. Doug scanned the output and decided to ship. No human expert reviewed any of this work before publication. The instructions he gave were simple ones, repeated across the work: be factual, be truth-seeking, do not flinch from where the evidence leads. The goal he set was that all of the information should be in the public domain and every argument tested, so that a government — and the citizens it serves — can make decisions in the long-term interest of the country. This publication is the result.
What it is and is not. It is the product of a non-specialist, working with AI tools, on a question that affects him directly. It is not a legal opinion. It is not financial advice. It is not an HMRC or HM Treasury document, and the policy-paper format borrowed from HMT does not mean what it would mean coming from an official source. The author owns shares in unlisted UK companies and would be affected by parts of what is discussed. Readers should weigh the analysis with that knowledge.
What the work tries to do. Get more information into the public conversation than is currently there, in more registers than the public conversation usually carries, with every assumption visible and every argument engaged with on its strongest terms. If parts of the analysis are wrong, the author would rather be corrected by readers who know more than he does than carry the errors forward. The work is published under CC BY-NC. Share it, translate it, build on it, refute it.
Who this is for. A reader interested in the early methodology framing. The iteration history and the disclosures are honestly described; the "architect-and-builders" framing is now retracted in favour of the more truthful workflow set out in section 2. The twelve-hours piece is the more honest meta-page; this one is preserved with corrections so the trail of what changed is visible.
For the cleaner version of this story, see Twelve Hours, Four AI Tools, One Founder.
About this piece
This is a methodology piece. It explains how Article One — the analysis of the reformed UK inheritance tax regime as it applies to unlisted UK tech trading-company shares and the cohort that holds them (founders, angels, VCs, growth equity, EIS investors) — was made. It is not an argument about inheritance tax. It is an argument about how analysis on contested policy questions can be done when the analyst is personally affected by the outcome and is working with AI tools.
The piece is published because the iterative process by which Article One was produced is unusual and worth showing. The publication's stance is that the work is more trustworthy when the work is shown.
Author. Doug Scott, founder and ex-CEO of Redbrain.com, prompting four AI tools (Claude, ChatGPT, Grok, Gemini) and answering when the tools prompted back. The AI tools produced the writing, the analysis, the citations, and the cross-critique. Doug scanned the output and decided to ship. No human expert reviewed any of this work before publication.
Author's interest. The reform affects the author and his family directly. He is trying to be impartial to the best of his abilities. The reader should weigh the analysis with that knowledge.
Note on earlier framings. Earlier versions of this piece described the human role as "architect" who "judged every draft" and made "the substantive judgments" — language repeated across the publication. The truthful version is smaller than that and is set out openly in section 2 below. The corrected workflow is: Doug prompted, Doug answered when the tools asked, Doug scanned, Doug shipped. The AI tools produced the rest. No human expert review took place. The publication apologises for the earlier overclaim and has corrected it across the corpus on 1 May 2026.
1. Why publish this
Most published policy analysis presents itself as the product of a single mind working in a clean room. Drafts, dead ends, critique-and-revision, and the personal interests of the author are typically removed before publication. The reader sees the finished surface and infers that the analysis arrived in that form.
Article One was not produced that way, and pretending otherwise would be a small but real dishonesty. It went through five substantive revisions in response to AI cross-critique. Three of those revisions changed the article materially. Two changed the publication's stance about what the article was. Across the whole process, four large-language-model AI tools — Claude, ChatGPT, Grok, and Gemini — were prompted across multiple parallel sessions to draft, structure, and critique each other's output. The fact that the reform affects the author and his family directly was a constant pressure on the analysis throughout.
This piece exists to show the work, on the principle that a reader who knows how an analysis was made can decide for themselves how much weight to give its conclusions.
2. The honest description of the workflow
The truthful version of how Article One was produced is smaller than the framings the publication has previously used. It is small enough to fit in one paragraph.
Doug Scott prompted four AI tools — Claude (Anthropic), ChatGPT (OpenAI), Grok (xAI), and Gemini (Google) — across multiple parallel sessions. He answered when the tools prompted back with questions. He scanned the output. He decided when to ship. He did not edit the prose. He did not check the citations against primary sources. He did not verify the model math. He did not read every piece carefully before publication. No human expert — no tax lawyer, no IFS economist, no CIOT or STEP member, no journalist with subject expertise — reviewed the work before it went out.
The AI tools produced the writing, the structure, the analysis, the citations, the modelling, the code, and the adversarial cross-critique that has shaped the work across multiple revisions. The "rounds of substantive critique" the publication has talked about are AI tools critiquing each other's output. They are not human expert review.
This is a working pattern. It produced this publication. It also has known limits, set out in section 5 below: the AI tools converge on confident-sounding claims that are sometimes not established by the evidence, the cross-critique catches some errors and misses others, and a specialist reader engaging with the result should expect to find errors that AI review did not catch. The publication invites those corrections.
An earlier version of this piece described the same workflow using "architect" framing — that the human "judged every draft, ran the substantive disagreements until they resolved, and rejected directions that did not survive scrutiny." That language overstated the human contribution. The truthful version is what is set out above. The publication apologises for the overclaim and is correcting it across the corpus.
3. The iteration history
Article One is not the first thing the author wrote on this question. It is the seventh.
The first version was a critique of an uploaded founder-lobby submission, written in advocacy register, arguing that the submission overstated the case for retaining the unreformed regime. That version was strong as critique and weak as policy analysis; the author rejected it.
The second version was a consolidated submission written under the author's own name, arguing for a position closer to what would later become Position B. That version was honest about its advocacy but treated the empirical questions as more settled than they are; the author rejected it.
The third version was a Treasury policy advice document, written as if the author were a Treasury official advising ministers. It was useful as an exercise in seeing the question from the inside but read as ventriloquism; the author rejected it as a stand-alone publication, though elements of it survived into what eventually became the policy options paper.
The fourth version was a hard-Labour rebuttal — an attempt to write the strongest possible defence of the reform from the position of someone politically opposed to softening it. This was important: it taught the author that Position A had a stronger case than the founder-lobby framing allowed for. Elements of it survived into the article's treatment of Position A.
The fifth version was a neutral policy map — an attempt to lay out the question without recommending anything. The author published this internally and then rejected it for publication on the grounds that it failed to engage with the empirical evidence; a map without analysis is just a list.
The sixth version was the recommendation version of Article One, advocating Position C with hard triggers as the right response. This was published. A reader then noted that it was advocacy with the architecture of analysis rather than analysis itself.
The seventh version — the article currently published — was rewritten in conditional-analysis register in response to that critique. The framework that the sixth version had recommended is presented in the seventh as one possible response, not as the right answer. Section 5, on what different evidence would mean, was added. The postscript explains the rewrite. This is the version that currently sits at the URL.
Subsequent revisions added: the timing-data section (Section 1.5) drawing on the OBR's January 2025 supplementary forecast, the FT/Companies House data, and the Sifted adviser-survey reporting; an expanded comparators section; a strengthened limits section; and the policy options paper as a companion document in HMT format.
Naming this iteration history matters. The article that exists now is not what the author thought he should write at the start. It is what the work itself produced, in dialogue with critique, and the publication's stance is that this is a feature rather than a defect. A piece that survives five revisions is more trustworthy than a piece that does not, provided the revisions are visible.
4. The data work
The article relies on a small number of public data sources. Each was found, cross-referenced, and weighted against the others before being incorporated. The author had access to no HMRC microdata, no internal HMT working papers, and no AEOI exchange information. The analysis is built on what is publicly available, which is less than ministers have but more than most public commentary draws on.
The strongest single source is the Office for Budget Responsibility's January 2025 supplementary forecast information release on the costing of reforms to the non-domicile regime. This is the document that publishes the official UK government behavioural assumptions — 25 per cent departure rate among non-doms with excluded property trusts, 12 per cent among other non-doms — that underpin the £33.9 billion the reforms are projected to raise. It is not a lobby figure. It is the central case in the costing the OBR scrutinised and certified. The author found it via a footnote in the Tax Policy Associates critique of the Henley & Partners migration figures, which is itself the second-strongest source in the article.
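To make the mechanics of those assumptions concrete, the sketch below shows how a departure-rate assumption feeds a revenue costing. The 25 and 12 per cent rates are the OBR's published central-case figures; the cohort sizes and per-head yields are invented for illustration and are not the OBR's inputs, which have not been published.

```python
# Sketch of how a behavioural departure-rate assumption feeds a costing.
# The 0.25 and 0.12 rates are the OBR's published central-case assumptions;
# the head counts and per-head yields are HYPOTHETICAL illustration values,
# not the OBR's inputs (which are not public).
cohorts = {
    # name: (heads, assumed_departure_rate, annual_yield_per_head_gbp)
    "non-doms with excluded property trusts": (10_000, 0.25, 200_000),
    "other non-doms": (60_000, 0.12, 80_000),
}

for name, (heads, rate, per_head) in cohorts.items():
    stayers = heads * (1 - rate)       # taxpayers assumed to remain
    raised = stayers * per_head        # revenue from those who stay
    forgone = heads * rate * per_head  # revenue lost to departures
    print(f"{name}: {stayers:,.0f} stayers, "
          f"£{raised / 1e9:.2f}bn raised, £{forgone / 1e9:.2f}bn forgone")
```

The only point the sketch makes is that the projected yield is conditional on the departure rates holding; if the true rates run higher than the central case, the revenue lost moves against the costing in direct proportion.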
The Tax Policy Associates work, published 27 July 2025 at taxpolicy.org.uk, is a forensic critique of the methodology underpinning the widely cited Henley & Partners projection of 16,500 UK millionaire departures in 2025. The Tax Policy Associates piece is signed, dated, draws on statistical analysis and on its author Dan Neidle's specific expertise as a tax-policy lawyer, and was the source for the article's decision to cite the Henley figure only with the methodological objection attached. Tax Justice Network reached compatible conclusions independently. A reader who wants to evaluate the Henley figure for themselves should read both critiques and the Henley methodology itself; the article's view is that the Henley figure should not be cited as evidence by anyone serious about this question.
The Financial Times analysis of Companies House director-residency records, published in 2025, is the cleanest single piece of behavioural evidence available. Between October 2024 and July 2025, 3,790 UK company directors moved their primary residence abroad, against 2,712 in the same period a year earlier; 691 moved in April 2025 alone. The data is drawn from a public statutory register and reported by a credible news organisation. It does not capture households below director level — a significant undercount of the wider behavioural response — but it is verifiable, attributable, and not subject to the methodological objections that affect the Henley figure.
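For readers who want the comparison spelled out, the arithmetic the article draws from those three figures is below. Treating the October-to-July window as ten months is a reading of the FT's reporting period, not a figure from the piece itself.

```python
# The three FT/Companies House figures quoted above.
oct24_to_jul25 = 3_790  # directors moving primary residence abroad, Oct 2024 to Jul 2025
prior_year = 2_712      # the same window a year earlier
april_2025 = 691        # April 2025 alone

uplift = oct24_to_jul25 / prior_year - 1  # ~0.40, i.e. ~40% year on year
monthly_avg = oct24_to_jul25 / 10         # ~379/month, taking the window as ten months
april_ratio = april_2025 / monthly_avg    # April 2025 runs at ~1.8x the window average

print(f"year-on-year uplift: {uplift:.0%}")
print(f"April 2025 vs window average: {april_ratio:.1f}x")
```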
The Sifted reporting from May 2025 cites four named UK tax-advisory firms — Wilson Partners, Evelyn Partners, Founders Law, and Capital Partners — confirming a marked uptick in UK-based tech-founder enquiries about Dubai relocation. Founders Law specifically reported that UAE relocation now features in 15–20 per cent of all new business enquiries received by the firm. This is adviser-survey evidence rather than departure evidence, but it has the advantage of being attributable: the firms are named, the publication is reputable, and the reporting can be checked.
The article also cites named individual departures (Nik Storonsky of Revolut, Herman Narula of Improbable, Lakshmi Mittal among non-tech billionaires), the OBR's 25 per cent assumption, and the comparator country data on Australia, Canada, the United States, Germany, and France. Each of these has a single primary source, cited in Annex A of the policy options paper.
What the article does not have, and could not have, is HMRC's internal modelling on the relocation channel for the BPR-affected cohort. That modelling either has not been done, or has been done and not published. The article's framework calls for it to be done and published; in its absence, the article works with the public data and is honest about what the public data does and does not establish.
5. What the AI tools did well, and what they did badly
Honest framing matters more here than reassurance. The four AI tools used were not equivalent. They produced different kinds of output, with different failure modes, and prompting multiple tools in parallel surfaces information that prompting one would not.
What the tools did well, collectively. They produced first-draft prose at a quality Doug would not have produced on his own under the same time constraints. They held large amounts of material in working memory across long sessions and could be redirected without losing context. They found relevant comparator data quickly when given the right query. They could be asked to take a specific position and write the strongest case for it, which was useful in stress-testing the article's arguments. They could critique drafts in ways that surfaced weaknesses none of the other tools had seen. They could not, however, do anything Doug or another human had not asked them to do; what was missing from the prompting did not appear in the output.
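For readers who want the working pattern pinned down, a minimal sketch of one parallel-draft-and-cross-critique round follows. It is an illustration of the pattern, not the tooling actually used: the sessions were interactive, and `call` here is a placeholder for whatever interface carries a prompt to a named tool and returns its reply.

```python
from itertools import permutations
from typing import Callable

TOOLS = ["claude", "chatgpt", "grok", "gemini"]

def cross_critique_round(call: Callable[[str, str], str], brief: str) -> dict:
    """One round of the pattern described above: every tool drafts against the
    same brief, then every other tool critiques each draft. `call` is a
    placeholder for whatever interface is actually in use."""
    drafts = {tool: call(tool, f"Draft: {brief}") for tool in TOOLS}
    critiques: dict[str, list[str]] = {tool: [] for tool in TOOLS}
    for author, critic in permutations(TOOLS, 2):
        critiques[author].append(
            call(critic, "Critique this draft for factual, logical, and "
                         f"citation errors:\n\n{drafts[author]}")
        )
    # The human step is the small one this piece describes: scan the critiques,
    # decide what feeds the next round, decide when to ship.
    return {"drafts": drafts, "critiques": critiques}
```

Each round yields twelve author-critic pairings, and each pairing is a chance for one tool to catch what another produced; the two errors described in the next paragraph were caught exactly this way.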
What the tools did badly. They tended to converge on confident-sounding claims that were not in fact established by the evidence. The first draft of the section on Australian CGT-rollover-at-death contained a claim about the size of Australian intergenerational wealth transfer that was not in any cited source; AI cross-critique caught it. The first draft of the comparator section claimed the proposal "aligns with Canadian and Australian practice", which was loose drafting that elided a structural difference between the two regimes; this was caught by AI cross-critique and corrected in a later revision. The tools, working alone, would not have caught either error. They produced both errors, and a different AI tool then caught them. There was no human expert review at any point.
The tools were also unreliable on questions where the training data is weighted toward one position. On the political-economy question of whether mobile high-net-worth populations actually relocate in response to tax change, the tools tended to produce a moderate consensus position that did not engage seriously with either the lobby framing or the sceptical framing. Getting them to take either side seriously required explicit instruction. On the methodological question of how to read the Henley & Partners figure, the tools produced default-credulous summaries of the headline number until prompted with the Tax Policy Associates critique.
Where human judgment played some role. The decisions about what register the article should be written in (advocacy versus conditional analysis); which AI critique objections landed and which did not; how to handle the disclosure of the author's interest; whether to cite the Henley figure at all; what the political room actually contained for the analysis to be useful; and what the publication should and should not do in this case — these were the points where Doug's prompting and answering shaped the result. To be clear about the limits of even this: Doug did not edit the prose, did not check the citations, did not verify the model math, and did not have the work reviewed by any human expert. The prompts and the scan-and-ship decisions are the human contribution. Earlier framings of this section described these as "decisions the architect made" — that language overstated the role.
6. The disclosures
Article One, the policy options paper, and every piece in this series carries a two-part disclosure at the top: that the work was produced by Doug Scott (founder and ex-CEO of Redbrain.com) prompting four named AI tools and that no human expert reviewed the result; and that the reform affects the author and his family directly. The publication's view is that both disclosures are necessary, and that omitting either would be a small dishonesty.
The AI methodology disclosure is necessary because the work is meaningfully different from work produced without AI tools and a reader is entitled to know. The personal-effect disclosure is necessary because the analysis would benefit the author and his family if some of its directions were adopted, and a reader is entitled to weigh the analysis with that knowledge. The author has tried to be impartial to the best of his abilities. He cannot be the judge of how successfully. Neither disclosure is a defence of the analysis. The analysis has to stand on its own merits. The disclosures exist so the reader can do that work without being misled about what they are reading.
This piece carries the same disclosures, including the disclosure that the author's personal exposure to the reform might bias the methodology choices described above — for example, the choice to weight the Tax Policy Associates critique of the Henley figure as heavily as the article does is not a neutral methodological choice. It is a choice that makes the most-cited "departures" figure unavailable to the public debate, which is a result the author has reason to prefer. Doug believes the choice is correct on the methodology, but the reader should know that "I am personally affected by which way this question lands" is true alongside "this position is correct on the methodology."
7. What this approach is for, and what it isn't
The approach described here — citizen prompting AI tools in parallel, AI tools producing the work and cross-critiquing each other's output, honest disclosure of the limits including the absence of human expert review — is one way to do analysis on contested questions. It is not the only way, it is not necessarily the best way, and it depends on the prompting being good enough to surface the right questions. The substantive value of the work depends on what the AI tools produce, what AI cross-critique catches, and what gets through both. The tools made some of the work easier and some of the work harder. They did not make the analysis itself more reliable in any way that does not depend on prompting quality, AI cross-critique catching errors, and a willingness to publish corrections when specialist readers find errors AI review missed.
What the approach is for: producing analysis on contested questions where the available evidence is partial, the political stakes are real, and the author is personally affected by the outcome and needs to declare it. Used with honest disclosure throughout — including the disclosure that no human expert reviewed the result — it lets a reader weigh the work appropriately.
What the approach is not for: producing analysis where the answer is clean, where the data is settled, where the author is not personally affected, or where the publication is happy to present a single confident position. For those, single-author work without AI tools is at least as good and probably faster.
A reader who reaches the end of this piece should know how Article One was made. They should also know that knowing how it was made is not a substitute for engaging with the analysis itself. The disclosures are an aid to reading; they are not the reading.
The article is at the URL printed below. The other registers — the plain English versions, the political piece, the honesty piece, the policy options paper — sit alongside it for readers who want the same analysis pitched at different audiences.