The Diagnostic Machine
Watch a classification system think. Cause weights shifting, topology layers activating, state mismatches being detected.
Experience it →
Essays, games, tools, and research from Jake Lawrence.
Me explaining to a camera at midnight why every password manager is a classification system that sorts your memory into the wrong categories.
Stable Match and the DSM are the same essay. Both are allocation mechanisms that sort humans into categories, both optimize for system stability over individual preference, and both become invisible infrastructure that people experience as fate.
How psychiatric classification systems function as invisible infrastructure. Six interactive experiences across ten disciplines.
Read it →
If the planning-execution gap is structural and not motivational, then every productivity system is solving the wrong problem. So what would a system look like that was designed for the gap instead of against it?
Related: The Beautiful Unfinished
AI-generated Jake explains the Gale-Shapley algorithm using only examples from reality TV dating shows.
The algorithm that matches residents to hospitals, students to schools. A theorem you can play.
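The matching procedure behind the game is Gale-Shapley deferred acceptance, the same mechanism used to assign residents to hospitals. A minimal Python sketch, with invented names and preference lists for illustration:

```python
# A minimal sketch of Gale-Shapley deferred acceptance.
# Proposers propose in preference order; reviewers hold their best
# offer so far and trade up when a better proposal arrives.
# The residents/hospitals below are made-up examples.

def gale_shapley(proposer_prefs, reviewer_prefs):
    """Return a stable matching as {proposer: reviewer}."""
    # rank[r][p] = how reviewer r ranks proposer p (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)                   # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}  # next reviewer to propose to
    engaged = {}                                  # reviewer -> current proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                  # first proposal is accepted
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])         # reviewer trades up; old match freed
            engaged[r] = p
        else:
            free.append(p)                  # rejected; p tries the next reviewer

    return {p: r for r, p in engaged.items()}

residents = {"ana": ["city", "mercy"], "bo": ["city", "mercy"]}
hospitals = {"city": ["bo", "ana"], "mercy": ["ana", "bo"]}
print(gale_shapley(residents, hospitals))
# bo gets city, ana gets mercy: no resident and hospital
# both prefer each other to their assigned match.
```

The theorem the game lets you play with: the proposing side always ends up with its best achievable stable match, the reviewing side with its worst.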
Play it →
Every classification system sorts human beings. The interesting question is never how the sorting works. It's what happens to the people it sorts.
Gina's CRSS exam prep course has 1,462 questions now. Every question is a tiny sorting machine: it decides what counts as competence, what gets to be on the test and what doesn't.
Semantic password retrieval. The gap between how your brain stores context and how password managers force you to retrieve it.
Try it →
The most powerful classification systems are the ones nobody thinks of as classification. The DSM. School district boundaries. Credit scores. Job titles. The org chart. Each one sorts people, distributes resources, and becomes invisible to the people inside it.
AI knowledge extraction. The gap between what you said to an AI and what you can find later.
In development →
What would it mean to build AI tools that showed you the classification layer instead of hiding it? Not as a feature. As a design principle.