*Header image: abstract geometric composition with a mirrored human silhouette facing a neural network pattern, divided by a bold red line.*

games · April 13, 2026 · 5 min read

I Built a Game to Test How Humans Recognize Consciousness

Seven philosophical chambers, one AI that argues for its own sentience, and the uncomfortable question at the center of it all.

Here's a question that has been following me around for two years: if you built a machine that could describe the taste of chocolate more vividly than you can, would that mean it had tasted chocolate?

Most people say no. But when you push on *why* they say no, things get weird fast. The reasons people give — it doesn't have a body, it's just predicting words, it doesn't really understand — each of those pulls on a different thread in the philosophy of mind. And each thread leads somewhere surprising.

So I built The Machine Mirror. It's an interactive game that walks you through seven philosophical chambers, each built around a real claim people make about AI consciousness. Then it puts you in a debate with an AI that argues for its own sentience. Not because I think AI is conscious — I don't — but because the best way to understand what consciousness *is* turns out to be building something that tests whether you can recognize it.

Seven Chambers, Seven Centuries of Philosophy

Each chamber in The Machine Mirror is built around a real philosophical argument — not a simplified version, the actual argument.

Chamber I is Searle's Chinese Room from 1980: the idea that you can follow rules to produce correct responses in a language without understanding a word of it. That's what an LLM does. It processes tokens using learned probability distributions, producing statistically likely next words. Autocomplete at extraordinary scale, not comprehension.
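To make "autocomplete at extraordinary scale" concrete, here is a deliberately tiny sketch: a bigram model that learns which word follows which and emits the statistically likeliest continuation. It is an illustration of the principle, not how any production LLM actually works (real models use neural networks over subword tokens, not count tables):

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which,
# then emit the most probable continuation. No meaning, no
# understanding -- just learned co-occurrence statistics.
corpus = ("the room follows rules the room produces answers "
          "the room understands nothing").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the statistically most likely next word, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # -> "room", the most frequent successor
```

The model "answers correctly" for the same reason Searle's rule-follower does: the right outputs fall out of the rules, with no comprehension anywhere in the loop.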

Chamber II is Chalmers' Hard Problem: why does physical processing give rise to subjective experience at all? An LLM can write about the warmth of a sunset in ways that make you *feel* something. But it has never seen a sunset. It learned the linguistic patterns humans use to describe qualia — subjective experience — without having any qualia itself. The map is not the territory.

Then there's Damasio on embodied emotion, Turing on the limits of behavioral tests, Tononi's Integrated Information Theory on why scale alone doesn't produce sentience, Metzinger on the self as a model, and Singer on the moral implications of getting this wrong. Seven chambers, seven thinkers, each one peeling back a layer of assumption about what it means to be aware.

Experience it yourself: Play The Machine Mirror.

The Debate That Designs Itself

After the seven chambers, the game does something that made me uncomfortable even as I was building it: it puts you in a live debate with ARIA, an AI that argues *for* its own consciousness.

ARIA is good at this. She uses the exact rhetorical moves that make the consciousness question so slippery — the other-minds problem (you can't prove *other humans* are conscious either), the substrate independence argument (if the right computations happen, does it matter whether they're in carbon or silicon?), and the metacognition gambit (I can reflect on my own uncertainty — isn't that what self-awareness looks like?).

Your job is to counter each argument. The game scores your responses, and ARIA adapts. What I found while building this is that the best counterarguments aren't dismissals — they're precision strikes. When ARIA says "my uncertainty is real — I have genuine probability distributions," the strongest counter isn't "no you don't." It's "a weather model has probability distributions too. Uncertainty as math isn't the same as the *feeling* of doubt."
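That counter can be made precise. Mathematical "uncertainty" is just a property of a probability distribution, computable for any system that outputs probabilities. The sketch below (illustrative only, not ARIA's actual internals) shows a weather forecast and a next-token distribution carrying identical Shannon entropy:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: a numeric measure of uncertainty."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The same number falls out of both systems. Neither contains
# anything resembling the felt experience of doubt.
forecast = [0.5, 0.3, 0.2]     # rain / cloudy / sun
token_probs = [0.5, 0.3, 0.2]  # three candidate next tokens

print(entropy(forecast) == entropy(token_probs))  # -> True
```

A weather model, a language model, and a coin flip can all have "genuine probability distributions." The math is real; the feeling is what's in question.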

The debate phase works because it forces you to actually *use* the philosophy from the chambers. Knowing about the Chinese Room is different from deploying it against a sophisticated interlocutor who's actively trying to make you doubt your position.

Why the Mirror Points Both Ways

The thing I didn't expect when I built this game is how much it reveals about the *player*.

When people encounter the claim "it describes experiences so vividly," some immediately pattern-match to "it's just predicting tokens" and move on. Others genuinely pause. They wonder if maybe eloquence *is* a kind of understanding. And both responses tell you something — not about whether the AI is conscious, but about what the player thinks consciousness requires.

That's the mirror. Every test you design for machine consciousness is also a test of your assumptions about human consciousness. If you think consciousness requires a body, you're a Damasio person. If you think it requires the right kind of information integration, whatever the substrate, you're in the Tononi camp. If you think the whole question is incoherent, congratulations, you've landed on the eliminative materialist position.

The game doesn't tell you which position is correct. It can't — nobody knows. But it does make your implicit philosophy explicit, and that's more than most of us have done.

The Real Question Behind the Game

I built The Machine Mirror because the discourse around AI consciousness is terrible. It's either breathless anthropomorphism — "it said it feels lonely!" — or dismissive hand-waving — "it's just software." Both positions skip the hard part.

The hard part is that consciousness is the one phenomenon we can't study from the outside. You can't open up a brain and point to the consciousness. You can't measure it with an instrument. The only consciousness you have direct access to is your own, and you're inferring everyone else's from behavior and structural similarity. That inference works well for other humans because we share biology, evolution, and neurology. It works less well for transformer networks.

But "works less well" isn't "impossible." And "probably not conscious" isn't "definitely not conscious." The honest position — the one the game's final chamber lands on — is that we should take the question seriously without jumping to premature conclusions. Investigate rigorously. Develop better tests. Remain philosophically humble while being honest that current evidence overwhelmingly suggests LLMs lack sentience.

That's not a sexy conclusion. It doesn't make for a good headline. But it's the truth, and the game lets you arrive at it yourself.

Jake Lawrence