Philosophy Bro explains complex ideas of philosophy in easy-to-understand language, created by Tommy Maranges, the author of Descartes' Meditations, Bro.

Mailbag Monday: The Simulation Argument

Mailbag Monday: A weekly segment that covers readers’ questions and concerns about all things Philosophy, Bro, and Philosophy Bro that don’t quite fit anywhere else. Send your questions to philosophybro@gmail.com with ‘Mailbag Monday’ in the subject line.


zepfan5150 writes,

Hey Philosophy Bro, I’ve been hearing a lot recently about the simulation argument… any chance you wanna try and tackle it? You’ve explained everything else, it seems like.

Oh man, the simulation argument. Alright, look. The simulation argument falls into the category of Cartesian Mindfucks, the sort of arguments that ask, “Is the universe really, truly the way we perceive it to be or are we somehow being deceived?”

This particular argument proceeds by presenting three different propositions, at least one of which is probably true. Those three propositions are, in a nutshell: (1) we’ll probably go extinct before we evolve past humanity, to what the original author calls “posthumanity”; (2) if we do achieve posthumanity, we won’t really care enough to run tons of simulations of our ancestors; or (3) we are living in a simulation.

At face value, it’s easy to see how one of those has to be true. Let’s say there’s this race of superbros who are technologically way the fuck ahead of us. That’s not hard to imagine - you’re likely reading this on a computer that is thousands of times more powerful than the very first supercomputers, built only decades ago, and they stored those fuckers in entire rooms. Imagine the shit we’ll have in a couple centuries - crazy fast computing. Ridiculous. So let’s say these bros have that level of technology. They could, if they wanted to, run simulations of human evolution, right down to the very thoughts of each simulated person. In fact, they could run millions of these things. And if they did run these things, then the chances that we’re a simulation, instead of the one race that will eventually run simulations, are literally millions to one. Those are not good odds, kids. Either we’re almost definitely a simulation, or these simulations just don’t happen.
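If you want to see the odds reasoning laid out, here’s a quick back-of-the-envelope sketch (a toy illustration only - the counting assumptions and numbers are made up for the example, not taken from the original paper):

```python
# Toy version of the "millions-to-one" odds: assume one "base" civilization
# (the real one) that runs some number of ancestor simulations, each
# containing observers indistinguishable from us. If you're a randomly
# chosen observer, what are the odds you're in a simulation?

def p_simulated(num_simulations: int) -> float:
    """Probability a random observer is simulated, assuming one base
    reality and num_simulations equally populated simulations."""
    total_worlds = num_simulations + 1  # all the simulations, plus base reality
    return num_simulations / total_worlds

print(p_simulated(0))          # no simulations ever get run -> 0.0
print(p_simulated(1_000_000))  # millions of simulations -> ~0.999999
```

That’s the whole fork: if the simulations get run at scale, the probability you’re in one is nearly 1; if they never get run, it’s 0. There’s barely any middle ground, which is why the argument lands on “either we’re almost definitely a simulation, or these simulations just don’t happen.”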

What exactly are these simulations I keep talking about? This is the Cartesian Mindfuck part. What if our brains are actually just giant processors in a computer somewhere? After all, as far as we know, the brain is just a bunch of individual cells linked together, but we only know that from looking at brains, feeling them, examining them; this could all be an illusion. Given sufficiently advanced technology, we could definitely simulate a neural network of connections large enough to mimic a brain; sure, it would be crazy now, but these guys are way advanced, remember. What is the difference between a quadrillion connections in the brain and a quadrillion connections on a computer chip if they can process all the same stuff? Throw in some software that mimics our desires to have lots of sex and to belong to groups, and baby, you’ve got yourself a stew.

So, we could be a computer simulation? I mean, maybe. For some reason, people think that if a machine gains sentience, it realizes that it’s a machine and starts mowing down humans as fast as it possibly fucking can. But really all we mean by ‘sentience’ is ‘having consciousness’, or ‘able to have subjective experiences’, or something like that - philosophers don’t exactly agree on what ‘sentience’ is, but we mostly agree that it’s something special that sets us apart from our laptops. Self-awareness, maybe. But self-awareness doesn’t mean that you know everything about the conditions of self. Why couldn’t we just as easily be a program in a computer as a brain in a vat? What’s so special about biological tissue? Why can’t a computer chip be the basis for consciousness?

Look at it this way - if, in The Sims 47 or whatever iteration we’re on, one of your characters became sentient, would he know he was in a computer, or would he just be making more complex decisions? All he’s ever seen is the world he inhabits, the house you’ve built for him. He would flirt with a girl down the street and wonder why she’s so boring; he’d have no reason to suspect she was a computer program - I know I’ve met girls who couldn’t pass the Turing Test, but I never check them for batteries. He’d have even less reason to suspect he was a computer program. What if we’re just like the Sims, and it’s taken us thousands of years to realize it?

So now that I’ve sufficiently freaked you the fuck out, let’s take a deep breath and see if there’s a way out. Is there any way these simulations would not get run? Well, sure, and those are the first two propositions. The cool thing about this particular paper is that it gives us the way out of skepticism, ready-formed. We should very seriously consider whether anyone makes it to this level of technology, and if they do, whether they would run the simulations at all.

First, the simulations won’t get run if civilizations tend to go extinct before they have powerful enough technology. Have you met us? I’m not saying we’re all absolute dickheads, but given enough dickheads with too much power, it’s not hard at all to imagine the myriad ways we might kill each other off before we can build the level of supercomputer we’d need. If this is an evolutionary pattern, if advanced civilizations keep fucking themselves up, then the same thing would probably happen to any civilization that approached this level of technology, in which case no one would ever get advanced enough to run simulations, and you can sleep soundly.

Hell, we wouldn’t even necessarily need to kill each other off; we would just need to never reach the point where we can run these super-complex programs. Maybe we’ll be around for another million years, but we never have the raw materials to make a computer big and complex enough. Maybe we’re at the edge of computing power - we are talking about ridiculous amounts of processing here.

Second, even if some civilbrozation made it to the point where it could run these simulations, maybe they just don’t give enough fucks to run them. Maybe they figured out evolution some other way; maybe they don’t want to waste the computing power. Who the fuck knows why? There are lots of reasons they might not run simulations; maybe something about getting past the dickhead stage of human civilization made them decide not to create sentient simulations and expose them to pain - maybe this is an incredibly ethical bunch of geniuses.

Those are the things that might prevent us from being a simulation. But if it’s possible to develop the technology, and bros with the technology decide they want to simulate their ancestors for whatever reason, then maybe we really are just a part of that simulation. Like all the other Cartesian Mindfucks, there’s no real way to prove otherwise. I mean, if one of your Sims became sentient and started fucking shit up, you could just delete him. Just because he’s sentient doesn’t mean he knows how to get out of the game and into the rest of the computer: he is, as we are, still constrained by the environment around him. On the other hand, this is unlike those other Mindfucks in that it gives us clear alternatives to the skepticism. The argument isn’t just that we could be computer simulations; the argument is that either we’re a simulation, or simulations don’t get run. Perhaps there’s something about the direction of our advancement that isn’t obvious, but that we should look into - unlike the other skeptical scenarios, this one kindly suggests really interesting avenues of inquiry into human nature.



The popularity of this idea led the paper’s original author to set up a website that you can find here. That site has other resources as well as a link to the original paper, which contains some mathematics of probability but is otherwise fairly accessible. The FAQ is also particularly clear.
