[Cover image: abstract composition of a circular seminar table rendered as overlapping architectural plan lines, anchored by a bold red mark at the empty seventh seat]
Research · April 16, 2026 · 5 min read
By Jake Lawrence

I Asked Six Dead Thinkers to Audit the AI Industry

Can the frontier AI labs really govern themselves? I sat six twentieth-century thinkers around a seminar table, applied Ostrom's scorecard for managing a commons, and let Hayek send four dissents from outside the room.

For about a year and a half I have been reading AI-industry writing with a specific kind of frustration. The pieces are often well-written, and rarely naive about technology. But they read, over and over, as if governing frontier AI labs were a brand new kind of problem — as if no one had ever tried to write rules for an industry whose participants know more than the regulators, or manage a commons whose users have an incentive to overuse it, or put a dashboard in front of a bureaucracy systematically blind to the damage it was doing.

Those problems all have literatures. The people who wrote those literatures are, almost without exception, dead. So I wrote an essay called *Seeing Like an AI Company*, and I built it as a seminar room.

Six twentieth-century thinkers sit at the table: Susan Leigh Star, James C. Scott, Elinor Ostrom, Karl Polanyi, Ursula Franklin, John Kenneth Galbraith. A seventh chair stays empty, because Friedrich Hayek would not have come — but he sends four written dissents the reader can open as the Council proceeds. Eight sessions. A wall at the back where arguments accumulate as colored index cards. No consensus at the end.

Why a Seminar Room and Not a White Paper

The format is not a gimmick — or it is a gimmick, but a load-bearing one.

When people cite Scott in AI-policy writing, they almost always cite him once, reach for the word *legibility*, and move on. When they cite Ostrom, they reach for *commons*. When they cite Star, they usually don't, because she's harder. Each citation works like a stamp: you've signaled you've read the thing, and now you can proceed.

These thinkers were not writing stamps. They were writing arguments that responded to each other and to their moment. Ostrom's design principles were a direct reply to Hardin's *Tragedy of the Commons*. Star's infrastructure studies came out of a thirty-year fight with the assumption that categories are neutral. Scott spent *Seeing Like a State* refusing, case by case, the idea that bureaucratic simplification is an improvement on local knowledge.

Putting them in a room forces the citations to become exchanges. Scott has to answer Ostrom's question about monitoring. Franklin has to answer Galbraith's question about countervailing power. Hayek's envelopes arrive at inconvenient moments, usually when the Council has talked itself into a consensus it has not yet earned.

That is what the AI-industry conversation has not had: the cross-examination.


Ostrom's Scorecard

The third session is the one that changed the way I thought about the industry.

Ostrom spent her career documenting the conditions under which a shared resource — a fishery, a forest, a groundwater basin — gets managed well over the long run. Her eight design principles are not aspirations; they are what shows up empirically in communities where the commons has survived a hundred years. Clear boundaries. Rules matched to local conditions. Participation in rule-making by the people affected. Effective monitoring. Graduated sanctions. Low-cost conflict resolution. External recognition of the right to self-organize. Nested governance for large systems.

When you apply these principles to the frontier-AI industry as if it were a commons, the scorecard that accumulates on the Wall reads: *five absent, one gestural, one compromised, one partial.*

Two of those labels deserve specifics. *Gestural* describes the monitoring principle. Red-teaming exists. Evals exist. But the monitoring is run by the party being monitored, on dimensions that party has defined, against benchmarks that party controls, and it does not detect the class of slow structural harms (labor displacement, epistemic pollution, institutional deskilling) that do not fit inside a pass/fail prompt. *Compromised* describes graduated sanctions: if the lab breaks its own policy, the lab decides the consequence. Ostrom's literature has a name for that, and the name is not *self-regulation*. The name is *no sanctions*.
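For readers who want the arithmetic laid out, here is a minimal sketch of the scorecard as data. The essay names only two graded entries (monitoring as gestural, graduated sanctions as compromised) plus the totals; it does not say which of the remaining six principles earns the *partial*, so that assignment below is a placeholder, not a finding.

```python
from collections import Counter

# Ostrom's eight design principles, graded against the frontier-AI industry.
# Only "effective monitoring" (gestural) and "graduated sanctions"
# (compromised) are named in the essay; the single "partial" is assigned to
# "clear boundaries" purely as a placeholder -- the essay gives totals only.
scorecard = {
    "clear boundaries":                  "partial",      # placeholder assignment
    "rules matched to local conditions": "absent",
    "participation in rule-making":      "absent",
    "effective monitoring":              "gestural",     # run by the monitored party
    "graduated sanctions":               "compromised",  # lab judges its own breach
    "low-cost conflict resolution":      "absent",
    "right to self-organize":            "absent",
    "nested governance":                 "absent",
}

print(Counter(scorecard.values()))
# -> absent: 5, partial: 1, gestural: 1, compromised: 1 (order may vary)
```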

The scorecard is where readers tend to go quiet.

The Legibility Trap

Scott's session comes later in the document, and it is the one that turned the essay from an analysis into something closer to an argument.

His signature example is scientific forestry: the state sees a forest as timber yield, which makes the forest legible and also makes it fragile. What gets lost when you classify a forest as yield is everything the forest was doing that wasn't yield: understory, soil biota, the slow feedbacks that hold the system together. The plantation is more legible, more productive, and after forty years, sicker.

The red-ink phrase on the Wall from his session is *Waldsterben, 2026*. It is shorthand for a category of failure that a dashboard structurally cannot catch: damage that only becomes visible after the feedbacks are already irreversible. A safety dashboard that reports what it can measure is not a neutral instrument. It is a re-creation of the legibility trap — a bureaucracy of its own, producing its own plantations.

Scott's question is not *what should the dashboard measure?* His question is: *does the existence of the dashboard change what counts as a measurable harm?* The Council's tentative answer, late in the session, is yes. That answer is what connects this essay to two running themes on the rest of the site — that classification is infrastructure, and that infrastructure decides what reality is allowed to look like.

Why Hayek Gets Envelopes

The design decision I am most proud of is the one that took me longest to commit to.

Hayek does not sit at the seminar table. He would have refused the invitation — his disagreements with Polanyi alone would have filled six sessions. Leaving him out, though, would have turned the document into a straw man. So he sends letters. Four envelopes, sealed, that the reader can only open at specific moments in the Council's proceedings.

The first is the one everyone expects — *the fatal conceit*, his argument that central planners cannot possibly have the knowledge their plans pretend to. The second, on freedom to exit. The third, on dispersed knowledge. Each lands when the Council has just convinced itself it has the answer.

The fourth envelope is different. It can only be opened after the adjournment, when the reader has watched the Council fail to reach consensus. I will not spoil what it says. I will say that it is the dissent I find hardest to answer, and that it is not the one Hayek is best known for.

The essay ends where it ends because the question genuinely does remain. *No consensus. The Council disperses. Star takes the scorecard off the Wall, folds it, and places it inside the document.* Whatever you thought about AI governance before you walked into the seminar room is what you carry out of it — only with more company.
