The structure of the workflow
The author worked across multiple AI tools throughout the collaboration. One central conversational thread ran in Claude (accessed through Claude.ai's chat interface, with code-execution, file-editing, web-search, and bundle-delivery tools available in that thread). That central thread held the project state, did the building, ran the rechecks, produced the bundles, and is the source of the verified numbers below.
Around that central thread, the author also ran many additional sessions of other Claudes, other ChatGPTs, and other Groks: separate sessions in those tools' consumer products, used for cross-critique, for second opinions, for stress-testing arguments, for pulling specific source material, for the AI-asked-to-pick exercise that produced the three companion pages, and for other distributed work that fed into the central thread. The author then pasted the relevant outputs back into the central thread, where the assembly and the publication-grade output happened.
This pattern is worth disclosing precisely because it is not what most readers will assume "AI-tool-assisted" means. The shorthand suggests "a few AI tools." The reality is many sessions across multiple tools, with one central thread doing the integration. Both the central thread and the fan-out matter; the publication would not exist in its current form without either.
The verified numbers — from the central thread
These are countable from the conversation transcripts of the central Claude thread. They are not estimates. They cover the thread that produced this publication; they do not cover any work in other AI tools.
| Measure | Count |
|---|---|
| Wall-clock span of the collaboration | 33.2 hours (1.38 days) |
| Active engagement time (gaps ≤ 5 min summed) | 11.8 hours |
| Idle / away time (gaps > 5 min summed) | 21.4 hours |
| Active fraction | ~35% |
| The author's messages (turns) | 192 |
| — short messages (< 200 words; instructions, replies, redirections) | 177 |
| — long messages (≥ 200 words; pastes from external AI tools, longer instructions) | 15 |
| Tool calls executed in the central thread (bash, file edits, builds, etc.) | 1,892 |
| Bundle deliveries (versioned outputs presented for review) | 198 |
| Distinct documented revisions (corrections-log entries) | 53 |
The 11.8-hour active-engagement figure is consistent with both the methodology piece's "roughly eight hours of real work" for the IHT publication itself and the about page's "a day of concentrated AI-assisted work." Eight hours of focused IHT work sit inside an 11-12 hour active span across the wider collaboration window, a span that also includes review, redirections, course corrections, and bundle inspection.
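The active/idle split in the table is a straightforward gap-summation over message timestamps: consecutive messages five minutes or less apart contribute their gap to active time, longer gaps to idle time, so the two buckets sum to the wall-clock span (11.8 + 21.4 = 33.2 hours). The sketch below is a minimal illustration of that computation, plus the short/long message split, in Python. The transcript rows, field layout, and function names are hypothetical, not the actual export format or tooling used; only the 5-minute and 200-word thresholds come from the table above.

```python
from datetime import datetime, timedelta

# Illustrative transcript rows: (ISO-8601 timestamp, speaker, text).
# The schema and sample content are assumptions, not the real export.
transcript = [
    ("2026-01-10T09:00:00", "author", "Start with the four-positions page."),
    ("2026-01-10T09:03:10", "author", "Flatten the architecture as discussed."),
    ("2026-01-10T11:45:00", "author", "Back. Rebuild the bundle."),
]

GAP_THRESHOLD = timedelta(minutes=5)
LONG_MESSAGE_WORDS = 200

def split_active_idle(rows):
    """Sum inter-message gaps: gaps <= 5 min count as active time,
    longer gaps count as idle/away time."""
    active = idle = timedelta()
    times = [datetime.fromisoformat(t) for t, _, _ in rows]
    for earlier, later in zip(times, times[1:]):
        gap = later - earlier
        if gap <= GAP_THRESHOLD:
            active += gap
        else:
            idle += gap
    return active, idle

def classify_messages(rows, speaker="author"):
    """Split one speaker's turns into short (< 200 words) and long (>= 200)."""
    short = long_ = 0
    for _, who, text in rows:
        if who != speaker:
            continue
        if len(text.split()) >= LONG_MESSAGE_WORDS:
            long_ += 1
        else:
            short += 1
    return short, long_

active, idle = split_active_idle(transcript)
span = active + idle
print(f"active {active}, idle {idle}, active fraction {active / span:.0%}")
print("short/long author turns:", classify_messages(transcript))
```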
The author's estimate — from the fan-out
These numbers are not verifiable from the central thread because the work happened in other tools. They are the author's own estimate of the fan-out work that fed in:
- Other Claude sessions — many. Used for second opinions on framing decisions, for pulling additional source material, for stress-testing the principle and timing arguments before committing to particular wordings in the central thread, and (as one specific named instance) for the Claude Opus 4.7 response in the AI-asked-to-pick companion series.
- ChatGPT sessions — many. Used similarly for cross-critique, source-pulling, and second opinions, and (as one specific named instance) for the ChatGPT Pro running GPT-5.5 Pro response in the AI-asked-to-pick companion series.
- Grok sessions — many. Used similarly, and (as one specific named instance) for the Grok 4.3 Beta response in the AI-asked-to-pick companion series.
- Gemini — attempted, ultimately not usable for this work. The author reports that Gemini could not effectively spider the publication's site in the way the other tools could, and could not read the bundled thelongerlook-site.zip files because each contained 50+ files. After several attempts the author moved on without it. The publication does not draw a categorical conclusion about Gemini's capabilities from this; the limitation may have other explanations (account tier, file-size limits on the particular product surface used, the specific version available at the time). The honest report is just "the author tried it and ran into these specific limits, so it was not part of the workflow that produced this publication."
An exact count of fan-out sessions is not available. The order of magnitude is "many tens, possibly low hundreds, across the four-week practice and this publication's day of concentrated work." The author has not kept a running log. This page is the publication's most honest available account.
The AI-asked-to-pick companion series — one specific instance of the pattern
The clearest visible instance of the multi-AI workflow is the three-page companion series in which the same prompt — "Now you have read all the arguments what would you do assuming you could define the UK government policy. Assume the government is making its decision in the best spirits to the benefit of the country." — was put to three separate AI tools. Their verbatim responses are reproduced at:
- Claude Opus 4.7 (Anthropic) — picked B+C+D with the threshold-lift sized at £5m/£10m and an explicit acceptance of the principle question.
- Grok 4.3 Beta (xAI) — picked an A-core hybrid with a B-style realisation election and a D-style cap lift to £5m/£10m, plus an additional package of practical measures.
- ChatGPT Pro running GPT-5.5 Pro (OpenAI) — picked a two-track active/passive split with realisation-deferred collection on the active track.
Each page reproduces the tool's response verbatim, with no editing. None of them is the publication's view; the publication continues not to adjudicate. The three pages are explicit methodology disclosure: a reader who wants to see what AI tools say when forced past their hedging can compare the three side by side. The pages are framed throughout so that a casual reader cannot mistake them for the publication's own conclusion.
What the central thread did and did not do
The central thread (Claude, accessed through Claude.ai's chat interface) did:
- build all the HTML, CSS, JavaScript, and downloadable artefacts;
- maintain the corrections-log discipline of recording every substantive revision;
- run the structural rewrites (the four-positions architecture flattening, the principle-piece and timing-piece rewrites, the institutional cross-references, the consistency cleanup, the mobile-responsiveness fixes);
- produce the verified numerical backing where it was needed;
- assemble the verbatim AI-asked-to-pick pages from the responses the author pasted in;
- flag its own limitations openly, including this disclosure page.
The central thread did not:
- generate the responses on the AI-asked-to-pick pages — those are from the named tools, verbatim;
- conduct the fan-out cross-critiques in other AI sessions that fed in — those were the author's separate work in other tools;
- have visibility into anything that happened outside the central thread, including how many other AI sessions ran, what they produced that did not come back as a paste, or how much was discarded;
- independently verify the author's account of the fan-out (this page).
What this disclosure does and does not do
What it does: states openly that the publication's actual production was a multi-AI fan-out with one central thread, not a single AI-tool conversation; gives verifiable numbers for the central thread; gives the author's honest account of the fan-out alongside; names which AI tools were used and which (Gemini) was attempted and dropped, with the reported reasons; and points to the AI-asked-to-pick companion series as the one specific visible instance of the pattern.
What it does not do: it does not change the substantive analysis on any contested question — the publication still does not adjudicate either the principle question or the timing question, and the four design positions A, B, C, D are still presented at equal length with case-for and case-against in the voice of each side's strongest defenders. This is methodology disclosure, not a substantive policy update.
Where to go next: the production-story methodology piece describes the IHT day's work in narrative form. The earlier methodology piece describes the iterative process and is preserved alongside its corrections. The about page sets the publication in the context of the broader four-week practice that produced the books, the trilogy, and The Many Builders. The frame page sets out the lens within which the analysis is conducted. The corrections log records every documented revision.