Bostrom's trilemma is logically valid but empirically empty. The real question isn't probability; it's whether 'simulated' is even a meaningful category.

Bostrom's argument has the structure of a disjunction. At least one of the following must be true:
| Proposition | Description |
|---|---|
| (1) Extinction | Almost all civilizations at our level of development go extinct before becoming technologically capable of running high-fidelity ancestor simulations |
| (2) Disinterest | Almost all posthuman civilizations that could run such simulations choose not to |
| (3) Simulation | Almost all beings with experiences like ours are living in simulations |
The logic follows directly. If (1) is false (civilizations survive) and (2) is false (they run simulations), then simulated minds vastly outnumber biological minds, making it statistically overwhelmingly likely that any given mind, including yours, is simulated.
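The statistical step can be made concrete with a back-of-envelope sketch. The specific counts below are purely illustrative assumptions, not figures from the argument:

```python
# Purely illustrative counts: the argument itself assigns no such numbers.
biological_minds = 1e11          # assumed: ~100 billion humans in base reality
simulations_per_civ = 1_000      # assumed: ancestor simulations one posthuman civ runs
minds_per_simulation = 1e11      # assumed: each simulation replays human history

simulated_minds = simulations_per_civ * minds_per_simulation
fraction_simulated = simulated_minds / (simulated_minds + biological_minds)
print(f"{fraction_simulated:.4f}")  # 0.9990
```

Under any assumptions where (1) and (2) both fail, simulated minds dominate by orders of magnitude; that dominance, not any particular count, is what drives the conclusion.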
What the argument does: Forces you to accept at least one of these three propositions.
What the argument doesn't do: Tell you which proposition is true, assign probabilities to them, or provide any way to distinguish between them empirically.
Note the gap here. When Musk says there's a "one in billions" chance we're in base reality, he's not doing math; he's making an intuitive judgment that (1) and (2) are unlikely. The argument itself is agnostic.
Bostrom estimates that simulating a human brain requires approximately 10^14 to 10^17 operations per second (100 teraFLOPS to 100 petaFLOPS). This is based on estimates of synaptic operations (~10^14 synapses × 10-100 Hz firing rates).
To simulate the entire mental history of humanity (~100 billion humans × ~50 years average lifespan × ~10^7 seconds per year), you'd need roughly 10^33 to 10^36 operations.
A civilization with access to a planet-sized computer using advanced nanotechnology could theoretically achieve 10^42 operations per second. This means:
"A single such computer could simulate the entire mental history of humankind... in about one second."
This sounds impressive, but it embeds assumptions that are doing enormous work:
The entire calculation assumes consciousness is substrate-independent: that the same computation produces the same experience regardless of what's doing the computing. This is the functionalist position in philosophy of mind, and it's contested.
If consciousness requires specific physical processes (biological neurons, quantum effects, or something else we don't understand), then simulating the computation wouldn't produce simulated experience. The simulated humans would be philosophical zombies: behavioral duplicates without inner lives.
We have no way to resolve this empirically. We can't even define consciousness rigorously enough to test whether it's substrate-independent. The simulation argument's probability math only works if we've already decided this philosophical question, and we haven't.
What level of detail is required? Bostrom assumes simulating neural activity is sufficient. But what if consciousness depends on detail below the neural level?
Each additional level of detail increases computational requirements exponentially. Simulating a universe at Planck resolution would require more compute than exists in the observable universe, making simulation physically impossible regardless of technological advancement.
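The scale of that infeasibility is easy to check with standard physical constants. This is a rough order-of-magnitude sketch, not a rigorous bound:

```python
# Order-of-magnitude sketch of a Planck-resolution simulation's cost.
planck_length = 1.616e-35    # meters
planck_time = 5.39e-44       # seconds
universe_radius = 4.4e26     # meters, comoving radius of the observable universe

cells = (universe_radius / planck_length) ** 3   # Planck volumes to track: ~2e184
updates_per_sec = cells / planck_time            # one update per cell per Planck time
print(f"{cells:.1e} cells, {updates_per_sec:.1e} updates/s")
```

Both numbers dwarf the ~10^120-operation limits discussed later in this article, which is what makes Planck-resolution simulation physically out of reach.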
Bostrom addresses this by suggesting simulators could use "lazy loading": only rendering reality in detail when observed. But this creates its own problems (discussed below).
Several researchers have proposed empirical tests for simulation. None have produced positive results, and most have conceptual problems.
Proposal: If the universe is simulated on a discrete lattice (like pixels), ultra-high-energy cosmic rays should show directional dependence correlated with lattice orientation.
Result: No such effect has been detected. The GZK cutoff (the energy limit for cosmic rays) is explained by conventional physics (interaction with cosmic microwave background).
Problem: The test assumes our simulators use lattice-based simulation. They might not. A null result doesn't rule out simulation; it rules out one specific simulation architecture.
Proposal: Look for variations in fundamental constants over time or space, which might indicate simulation parameters being adjusted.
Result: Measurements show the fine-structure constant (α ≈ 1/137) is stable to ~10^-17 per year. Some studies claim distant quasar variations, but these remain controversial and unconfirmed.
Problem: Stable constants are equally consistent with base reality. If simulators wanted stability, they'd program stability.
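To see how tight that measured bound is, extrapolate the drift limit over the age of the universe. This linear extrapolation is assumed here purely for scale:

```python
# How much alpha could have drifted if it changed at the measured limit.
drift_limit_per_year = 1e-17     # fractional change/year, from the measurements above
universe_age_years = 1.38e10     # ~13.8 billion years

max_total_drift = drift_limit_per_year * universe_age_years
print(f"{max_total_drift:.2e}")  # 1.38e-07: a fractional change of ~0.0000138%
```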
Proposal: Check whether information is conserved in black holes, as required by some simulation architectures.
Result: The black hole information paradox remains unresolved. Recent work (including Hawking's final paper) suggests information is preserved, but this is also predicted by non-simulation physics.
Problem: Every test result is consistent with both "base reality" and "well-designed simulation." This is unfalsifiability in disguise.
Here's where the simulation argument encounters its most serious objection: not from physics or computation, but from philosophy.
What exactly would make reality "simulated" rather than "real"? The standard answer: simulated reality runs on a computational substrate in a more fundamental reality.
But consider: our reality also runs on something (quantum fields, spacetime geometry, whatever physics bottoms out at). Why is "computational substrate" categorically different from "physical substrate"?
If the experiences are identical (if simulated consciousness has the same qualia, the same sense of reality, the same causal powers), then "simulated" may be a distinction without a difference. It would be like asking whether your thoughts are "real" thoughts or "brain-process" thoughts. The question dissolves under examination.
Bostrom suggests simulators could save computation by only rendering reality in detail when observed. But this creates a verification problem:
If unobserved reality isn't rendered, how do we detect this? Any test we run renders the thing we're testing. We can never catch the simulation "not rendering" because observation triggers rendering.
This makes the hypothesis unfalsifiable by design. Unfalsifiable hypotheses aren't necessarily false, but they're not scientific, either.
If we're simulated, our simulators might also be simulated. A potential infinite regress emerges.
Some argue this makes simulation more likely (there are infinitely many simulation levels but only one base reality). But infinite regress arguments are notoriously slippery; you could equally argue the reverse.
The regress neither confirms nor refutes simulation; it just makes the question more confusing.
Dismissing the argument as "mere philosophy" misunderstands what philosophical arguments do. The trilemma is logically valid. If you accept the premises (posthuman civilizations are possible, consciousness is substrate-independent, simulations are computationally feasible), the conclusion follows necessarily.
The question is whether the premises are true, and those are empirical questions we currently can't answer.
Moore's Law skeptics point out that exponential growth can't continue forever. But Bostrom only needs some civilizations to reach posthuman capability, not all, not most. Even if technological development plateaus, it might plateau at a level sufficient for ancestor simulation.
We genuinely don't know the physical limits of computation. Estimates range from 10^120 operations (Bekenstein bound for the observable universe) to much higher for hypothetical physics we don't understand.
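Comparing that upper estimate with the cost of simulating human mental history computed earlier shows how much headroom the argument has (both figures come from this article's estimates):

```python
import math

universe_ops_limit = 1e120   # upper estimate of total computation, from the text
mental_history_ops = 1e36    # high-end cost of all human mental history, from the text

headroom = math.log10(universe_ops_limit / mental_history_ops)
print(headroom)              # ~84 orders of magnitude to spare
```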
The objection that the argument proves nothing is actually correct, but it's not the devastating criticism people think. The argument doesn't prove we're simulated. It proves that accepting common assumptions about technology and consciousness forces you to accept one of three uncomfortable conclusions.
If you find all three conclusions implausible, you need to identify which premise is false. That's intellectually productive, even if the argument itself doesn't resolve the question.
The claim that we're "probably" in a simulation appears nowhere in Bostrom's paper. It's a misreading that conflates "if (3) is true, most minds are simulated" with "there's a high probability that (3) is true."
The argument is explicitly agnostic about which proposition holds. Assigning probabilities requires additional assumptions Bostrom doesn't make.
Why shouldn't this change how you live? If the simulation is indistinguishable from base reality, your actions have the same consequences within it. Your relationships, experiences, and choices matter exactly as much as they would in base reality.
Some argue we should try to "signal" to our simulators or behave in ways that make them less likely to terminate the simulation. But we have no idea what would please hypothetical simulators, and trying to game unknown preferences is as likely to backfire as succeed.
The claim that quantum phenomena (wave function collapse, observer effects) indicate simulation is based on a misunderstanding of physics. Quantum mechanics works the same whether reality is "simulated" or "base." Decoherence doesn't require conscious observers; it happens through interaction with any sufficiently complex environment.
Quantum mechanics is weird, but "weird" doesn't mean "simulated."
Rather than asking "are we simulated?" (a question that may be unanswerable and possibly meaningless), consider what the simulation argument actually reveals:
The argument only works if consciousness is substrate-independent. We don't know if this is true. The simulation hypothesis is really a consciousness hypothesis in disguise.
Proposition (1) is about existential risk. Are we likely to survive long enough to create posthuman technology? The simulation argument forces engagement with this question, which matters regardless of simulation.
If "simulated" and "real" aren't meaningfully different, our entire framework for thinking about reality might need revision. This isn't unique to simulation theorysimilar problems arise in philosophy of mind, metaphysics of time, and quantum interpretation.
Some philosophical questions don't have answers; they have dissolutions. "Is the universe simulated?" might be like "what's north of the North Pole?" The question presupposes a framework that doesn't apply.
Bostrom's simulation argument is a genuine contribution to philosophy. It's logically valid, forces examination of important assumptions, and reveals how little we understand about consciousness, computation, and the nature of reality.
But it's also systematically overclaimed. It doesn't prove we're simulated. It doesn't assign probabilities. It doesn't provide empirical tests. It's not falsifiable in any meaningful sense.
The intellectually honest position is uncertainty. We don't know if consciousness is substrate-independent. We don't know if posthuman civilizations will exist. We don't know if "simulated" is even a coherent category.
What we know is that we experience reality. That experience has structure, meaning, and consequence. Whether the substrate is "base" or "simulated" changes nothing about how we should live, which suggests the question, however fascinating, might be less important than it appears.
The simulation argument is less a theory about reality and more a mirror reflecting our confusion about consciousness, computation, and what it means for anything to exist at all. That confusion is worth examining. The conclusion that we're probably simulated deserves considerably less confidence than its proponents claim.
| Claim | Status |
|---|---|
| One of three propositions must be true | Valid (given premises) |
| We're probably in a simulation | Not claimed by argument |
| We can test whether we're simulated | Probably not (unfalsifiable) |
| It matters whether we're simulated | Unclear (depends on meaning of "simulated") |

| Component | Estimate |
|---|---|
| Brain operations per second | 10^14 - 10^17 FLOPS |
| Total human mental history | ~10^33 - 10^36 operations |
| Planet-sized nanocomputer | ~10^42 FLOPS |
| Planck-scale universe simulation | >10^120 operations (infeasible) |