The 33%
Cooper, Woo and Dunkelberg surveyed 2,994 entrepreneurs in 1988. Thirty-three percent rated their probability of success at one hundred percent. They were not failing at probability theory. They were correctly registering the signal of the messaging environment they were immersed in. This piece explains what that means.
In 1988, Cooper, Woo and Dunkelberg surveyed 2,994 entrepreneurs about the probability their venture would succeed.
Eighty-one percent rated their own odds at seven out of ten or better. Thirty-three percent rated them ten out of ten. A perfect hundred. No chance of failure.
The base rates against which those self-assessments should be weighed are stark: roughly half of new ventures fail within five years, and substantially fewer reach meaningful financial success. The 33% who said one hundred percent were not registering anything observable in the data. They were registering something else.
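The size of the gap can be made concrete with back-of-the-envelope arithmetic. A minimal sketch, using only the survey figures quoted above; the 50% five-year survival rate is an assumed round number for illustration, since the true base rate varies by study and by how success is defined:

```python
# Calibration check using the Cooper, Woo and Dunkelberg (1988) figures.
# The 50% survival rate is an illustrative assumption, not a measured value.

share_certain = 0.33      # rated their odds 10/10 (stated probability 1.0)
share_seven_plus = 0.81   # rated their odds 7/10 or better

# Conservative lower bound on the cohort's mean stated probability:
# the certain group at 1.0, the rest of the 7+ group at exactly 0.7,
# and everyone below 7/10 counted as 0.0.
mean_stated_lower_bound = (
    share_certain * 1.0
    + (share_seven_plus - share_certain) * 0.7
)

assumed_base_rate = 0.5   # assumed five-year survival rate

print(f"mean stated probability (lower bound): {mean_stated_lower_bound:.3f}")
print(f"assumed survival base rate:            {assumed_base_rate:.3f}")
print(f"aggregate overestimate (at least):     "
      f"{mean_stated_lower_bound - assumed_base_rate:.3f}")
```

Even with everyone below seven-out-of-ten counted as zero, the cohort's mean stated probability is at least 0.666, against an assumed survival rate of 0.5, and survival is a far weaker bar than meaningful financial success.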
It is not bad arithmetic
The standard reading of the Cooper finding is that founders are bad at probability. They overestimate their odds because they are inexperienced, or biased toward themselves, or insufficiently statistical. The fix, on this reading, is education: teach founders the base rates and the miscalibration will shrink.
The publication thinks this reading is wrong. The 33% who said one hundred percent were not failing at probability theory. They were correctly registering the signal of the messaging environment they were immersed in. The signal said: the rare ones win, the people who believe they will win are over-represented among the rare ones, here are the success stories, here are the keynotes, here is the cultural form in which a founder is rendered. Nothing in that signal includes the base rates. The 33% were calibrated to the signal, not to the data.
Why the signal looks like that
Power-law fund economics require a population of attempts large enough to find rare outliers. The population of attempts only stays large if a steady stream of new entrants believes they can be the outlier. The recruitment environment around the venture system — pitch decks, founder media, accelerator marketing, conference keynotes, success-story profiles, the visible biographical material of the survivor cohort — is the part of fund economics that operates on prospective founders before any individual VC partner ever meets them. It is not a conspiracy. It is the structural function the messaging needs to perform for the system to keep finding outliers.
The 33% are not the bug. They are the output of the system working as designed. The recruitment environment is engineered, in aggregate, to produce a population of prospective founders who systematically overestimate their personal probability of success, because that overestimation is what makes the population willing to enter. Without it, the population collapses, the search engine stops, and the venture model does not function.
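The dependence of outlier-finding on population size can be sketched numerically. A minimal Monte Carlo illustration: every distribution parameter below is an assumption chosen to be stylised and heavy-tailed, not calibrated to real fund data.

```python
import random

# Stylised venture outcome: most attempts fail, most survivors return
# around 1x, and a rare Pareto tail returns enormously. All parameters
# are illustrative assumptions.
def venture_outcome(rng):
    r = rng.random()
    if r < 0.50:
        return 0.0                      # ~half fail outright
    if r < 0.95:
        return rng.uniform(0.1, 2.0)    # most survivors return around 1x
    return 10.0 * rng.paretovariate(1.2)  # rare heavy-tailed outliers

def chance_of_outlier(population, threshold=100.0, trials=2000, seed=0):
    """Estimate the probability that a cohort of `population` attempts
    contains at least one outcome above `threshold`x."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if any(venture_outcome(rng) >= threshold for _ in range(population)):
            hits += 1
    return hits / trials

for population in (10, 100, 1000):
    p = chance_of_outlier(population)
    print(f"cohort of {population:4d} attempts -> "
          f"P(at least one 100x outlier) ~ {p:.2f}")
```

Under these assumed parameters a cohort of ten attempts almost never contains a 100x outcome, while a cohort of a thousand almost always does. The search only works at scale, which is why the system needs the entering population to stay large.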
What this changes about the claim
Once the structural function is named, the moral position of any individual reader becomes more legible. The 33% are not failing as individuals. They are responding correctly, as individuals, to a signal that the system needs them to respond to. That does not mean their individual decisions are right for them — most of them, by the base rates, will turn out to have been wrong. It means the responsibility for the population-level outcome is structural, not individual. The reader who notices the messaging environment cannot un-engineer it; they can only choose, with the messaging environment now visible, whether to enter on the system's terms or on their own.
The aggregate output the venture system produces — the technologies, the employment, the diffuse welfare gains the publication's longer treatment defends — depends on the existence of the 33%. The system needs a population of individuals who systematically overestimate their personal probability of success. The recruitment narrative produces that overestimation, on purpose, because that is what the system needs the population to believe in order to operate.
The cohort recruited this way is also the cohort whose subsequent lives leave little trace in the data. The publication's sister project orphans.ai is the diagnosis of what happens after the recruitment: the oral-tradition layer of unrecorded work, of failures absorbed quietly, of the people who held founders through the years between the wire transfer and either the exit or the silence. The 33% are recruited into a system whose accounting captures the financial outcome and not much else.
Be aware that the messaging environment around the venture system is engineered to make you, specifically, believe you will be the winner, and that this engineering operates regardless of whether you actually will be.
The full structural argument runs across the publication's deep treatment of the venture system, particularly the prologue and Part XII. The piece The Power Law and What It Forces sets out the fund economics that produce the recruitment environment. The piece The Wrong Winners Write the Books sets out the three filters that purify the survivor cohort's public output toward attribution confidence. The deep version is VC: Most Fail, Most Suffer, Some Win Lots.