Why we don't live in a simulation

What's wrong with "we probably live in a simulation"

In 2003, 30-year-old Swedish philosopher Nick Bostrom published a paper in Philosophical Quarterly titled "Are You Living in a Computer Simulation?" The paper came out the same year as The Matrix sequels, and it caused a stir.

In 2018, the idea is primarily known through its most prominent proponent, eccentric billionaire Elon Musk. When Musk talks about it, he bases his point on the improvement of video games over time — we had Pong in the 70s, we have incredibly realistic 3-D games now. What if we keep improving in that direction? "If you assume any rate of improvement at all,” he said in an interview with Joe Rogan, "then games will be indistinguishable from reality, or civilization will end. One of those two things will occur. Therefore, we are most likely in a simulation, because we exist.”

Elsewhere, he has been quoted as saying, "There's a billion to one chance we're living in base reality."

Americans in particular eat this shit up. We love eccentric billionaires. You must be an outside-the-box thinker to get rich, we assume, and Musk is a billionaire, so this weird shit must be why. On top of this, Elon Musk, much like Donald Trump or Beyoncé, is an internet click-getter, so clickbait articles have repeated this over and over again, and if you spend enough time starting fires on the internet, you'll bump into a Musk bro who'll shout at you about how we're living in The Matrix.

The problem, of course, is that the picture Musk paints is a pretty massive misreading of the simulation hypothesis, and a total misinterpretation of Bostrom’s argument. What Bostrom really said is far more interesting (and far more troubling) than what the clickbait headlines have suggested.

What if we're all, like, pawns in some giant alien's game?

Nick Bostrom. Photo courtesy of the Future of Humanity Institute. Creative Commons license.

First, Bostrom's paper did not say that we almost certainly live in a simulation. He said that there are three possibilities, and that at least one of them is very likely true. They are:

  1. The human species is very likely to go extinct before reaching a “posthuman” stage (which is to say a stage where we could run high-fidelity simulations);

  2. Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);

  3. We are almost certainly living in a computer simulation.

Bostrom has suggested elsewhere that he considers each roughly equally likely, but he doesn't fixate on that in the paper, since it isn't something you can really prove.

The third possibility gets the most play, and it goes like this: If it’s possible to run simulations that are indistinguishable from reality, then a human within the simulation wouldn’t be able to tell the difference. It’s even possible humans within those simulated realities are also running simulations, and humans in those are running simulations, and so on. 

Given how many simulations might be running within simulations (the number could grow to be nearly infinite), most sentient humans would actually be simulations, not living, breathing organic beings.
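
Bostrom actually puts numbers on this. The whole argument compresses into a single fraction, and a minimal Python sketch makes the logic visible (the parameter values below are my own illustrative guesses, not figures from the paper):

```python
# The bookkeeping behind the argument: if a fraction f_p of human-level
# civilizations survive to a posthuman stage, and each of those runs an
# average of n_sim ancestor-simulations, what share of all human-like
# minds ends up being simulated?

def fraction_simulated(f_p: float, n_sim: float) -> float:
    """Fraction of observers living in a simulation rather than base reality."""
    return (f_p * n_sim) / (f_p * n_sim + 1)

# Even pessimistic survival odds get swamped by a large number of runs.
# These inputs are illustrative guesses, not Bostrom's:
print(fraction_simulated(f_p=0.01, n_sim=1_000_000))  # ~0.9999
```

The fraction only stays small if one of those two inputs is nearly zero, and that is exactly what possibilities one and two assert.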

There's a sort of dizzying but coherent logic to this thought, and it has an almost religious feel to it: there's a particular comfort in the idea that we are currently living in an alternate history where Donald Trump was elected president because some jackass future human thought, "Let's see what happens if we make this bullshit happen." In a world where a supernatural interventionist god is becoming less and less believable to a lot of people, a computer simulation feels like a rational, understandable alternative to a sentient Creator who's crashing stars together and lighting bushes on fire.

The thing is, you have to clear one very controversial hurdle before the trilemma even gets off the ground, and then, to land on the third possibility, you have to get past possibilities one and two, which is no small task. In fact, to my thinking, points one and two are far more interesting than the third. So let's go through it.

A big assumption: Do our brains not matter at all?

At the very beginning of his paper, Bostrom makes an admission: he’s assuming what philosophers call “substrate independence.” This is why everyone hates philosophy. It’s an overly academic way of saying that consciousness can be simulated without actual, physical brains; that our software doesn’t require our hardware. 

This is not a given. No one has really figured out what consciousness is yet. It’s one of the core mysteries of philosophy: what the hell are we? What is our experience of the world? And can it be replicated? If we build an artificial intelligence that can do all of the mental processes that we can do, it may behave as if it’s conscious, but is it?

This is what Descartes meant when he said, "I think, therefore I am." When it comes down to it, the only thing you can know for certain is that you exist. You can trust other humans when they say they have thoughts, feelings, and deep internal lives, but you can never really know. You can only trust. Would you be able to extend the same trust to an intelligent computer?

Say you're running a simulation of a human life, and you have that simulation walk out into the rain. Does the simulation feel the raindrops hitting its face? Does it feel its clothes getting wet? Or does the program running it just say, "I feel the rain on my face"?


If you believe in a soul or some sort of ghost in the machine, this is an impossible hurdle to overcome. But even if you don't, it's still not a done deal. There may be some hitherto undiscovered feature of the brain that gives rise to what we experience as consciousness, and it may be a feature that a computer program could not replicate. For you, who are experiencing actual human consciousness as you read this article, to be a simulation, software must be able to replicate consciousness without the actual presence of a human brain.

It's not crazy to think that this is possible. But the issue is not settled, and, if we're being honest and taking a fairly rational, skeptical viewpoint, it never will be, just as you can't be totally sure right now that anyone else besides yourself experiences consciousness. And other people have the same bodies and brains as you. Trusting them is much less of a leap, but even then, it's not a puddle you're jumping over; it's a chasm.

You need to make this assumption, this leap of faith, before you even start looking at the rest of the thought experiment. So let’s just make it and move on. We already got this far, so Jehovah starts with an “I,” middle finger to the gods, let’s ride eternal shiny and chrome. We’re doing this.

Point One: Humans might never have the ability to simulate the universe in this much detail

  1. The human species is very likely to go extinct before reaching a “posthuman” stage.

The first horn of Bostrom's trilemma has a few parts to it. To start with, it might simply not be possible to simulate the universe at the level of detail in which we currently see it. There is an absolutely immense amount of information in our universe, and simulating it would require an unfathomably large amount of computational power.

Our technological advancement over the past few centuries has been impressive, but a study out of Oxford found that, even using quantum computing, you'd basically need a computer bigger than the universe to simulate the universe. And that's just the size; it says nothing about the amount of energy you'd need to power that computer. Even if you tossed away most of the universe and just simulated what we've observed as individuals, the computer required would be infeasibly enormous.
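
To get a feel for the scale of the problem, here's a crude back-of-envelope sketch. The figures are standard order-of-magnitude estimates (mine, not the study's), and they only need to be roughly right to make the point:

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
ATOMS_IN_UNIVERSE = 1e80  # common estimate for the observable universe
SUN_POWER = 3.8e26        # solar luminosity, W
SUN_LIFETIME_S = 3.15e17  # ~10 billion years, in seconds

# Landauer limit: the theoretical minimum energy to erase one bit
# at room temperature (300 K).
joules_per_bit = K_B * 300 * math.log(2)  # ~2.9e-21 J

# Energy cost of updating one bit per atom in the universe, just once.
one_pass = ATOMS_IN_UNIVERSE * joules_per_bit  # ~2.9e59 J

# Measured in entire stellar lifetimes of output:
suns = one_pass / (SUN_POWER * SUN_LIFETIME_S)
print(f"~{suns:.0e} suns' total lifetime output per update")  # ~2e+15
```

Quadrillions of suns burned down to nothing, per tick of the clock, at the most optimistic physical limit we know of. That's the wall.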

There could be some future innovation or workaround that would solve this problem, but there's no guarantee that there will be. It may just be impossible. Bostrom himself admits that we’re not currently there technologically, and proposes a few workarounds, but they’re all a bit… distant. He suggests we could totally redefine physics with a theory of everything, or that a few thousand years of technological discovery "will make it possible to convert planets and other astronomical resources into enormously powerful computers.”

Those fatal errors are rough, man.

That is a big ask. I am as turned on by the idea of a star computer as the rest of you, but that is not coming soon. Which raises the point — can we survive for that long? Even setting aside possible manmade calamities like climate change, nuclear war, superviruses, robot takeovers, nanobot “gray goo” scenarios, or particle accelerators accidentally creating a black hole, we still face the external threats of meteor strikes, the Yellowstone caldera exploding, or, you know, aliens.

We might just not make it to star computers.

The Fermi Paradox

Possibility #1 is, I think, the most likely option in the trilemma. The reason I think this is something called the Fermi Paradox.

Enrico Fermi was an Italian-American physicist, and he noticed that, if you look at the sheer size of the Milky Way Galaxy, it is extremely likely that there is not only a lot of life out there, but a lot of human-level civilizations, and potentially a lot of human-level civilizations that have developed the ability to travel in interstellar space (this high probability is based on something called the Drake equation, which I don't have time to fully unpack here, but which you should Google, because it is fascinating; a bare-bones sketch follows below).
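
Here it is, stripped to the studs. Every parameter value below is an illustrative guess on my part (the real values are hotly debated), and arguing about them is the entire sport:

```python
# Drake equation: N, the expected number of detectable civilizations
# in the galaxy, as a product of seven estimated factors.
R_star = 1.5     # new stars formed in the Milky Way per year
f_p    = 0.9     # fraction of stars with planets
n_e    = 0.5     # habitable planets per star that has planets
f_l    = 0.1     # fraction of habitable planets where life appears
f_i    = 0.1     # fraction of those where intelligence evolves
f_c    = 0.1     # fraction of those that become detectable
L      = 10_000  # years a detectable civilization survives

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # ≈ 6.75 civilizations we could hear from right now
```

Even with those fairly stingy guesses, you get a handful of chatty civilizations at any given moment, and far more over the galaxy's long lifetime.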

Even given the enormous distances from one end of the galaxy to the other and the relatively slow pace of interstellar travel, Fermi reasoned, enough time has passed that we should have been contacted by extraterrestrials by now. He found no convincing evidence that we had been, so he asked his famous question: "Where is everybody?"

From xkcd.

One possible explanation is that intelligent life tends to destroy itself shortly after it acquires the ability to travel outside its own solar system. Technological innovation doesn't happen in isolation from the rest of a society's development, and it may be that the type of society that can master the ability to split atoms and launch itself into space must also be an inherently unstable one.

It may be that the technology those civilizations need, like nuclear power, can be weaponized and that this tends to destroy those civilizations before it can fling them out safely and sustainably into space. Or it may be that these civilizations rely, as we do, on an economy based on exponential growth, which inevitably pushes the civilization up against its environmental limits, causing it to collapse.
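
That growth point deserves one quick bit of arithmetic, because exponentials are sneaky. A toy sketch, with an arbitrary rate and an absurdly generous ceiling:

```python
import math

# Anything growing at a fixed percentage rate doubles on a fixed clock.
growth_rate = 0.03  # 3% per year, an arbitrary illustrative figure
doubling_time = math.log(2) / growth_rate
print(f"doubles every ~{doubling_time:.0f} years")  # ~23 years

# Even a ceiling a trillion times the starting size is only about
# 40 doublings away: log2(1e12) ≈ 39.9.
doublings = math.log2(1e12)
print(f"hits a trillion-fold limit in ~{doublings * doubling_time:.0f} years")  # ~921
```

No plausible ceiling buys an exponential economy much time. The collapse scenario above is just that arithmetic playing out.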

It would seem that the universe’s silence is speaking volumes.

Bostrom himself is most concerned with superintelligence and how it might destroy us to preserve itself. He literally wrote the book on it. Naturally, as soon as the book came out, Elon Musk started talking about how superintelligence is potentially more dangerous than nuclear war, which is like saying the hungry tiger five feet in front of you is less dangerous than the aliens from Independence Day, which, while technically true, is contextually stupid. Whatever. We don't have time for it. Moving on.

Putting "should" before "can."

  2. Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof).

This possibility is, to me, the most interesting: why, we might wonder, would a civilization choose not to use this power? I know for sure that if I had it, I would play with it in all sorts of ways — I'd want to see what would happen if you made Africa the continent with guns, germs, and steel; I'd want to see what would happen if Hitler had been hit by a horse and buggy when he was five; I'd want to see what would happen if Donald Trump had shown up at the third presidential debate with pinkeye.

And that’s just me — history nerds everywhere would lose their shit by moving a bullet a few inches to the left at the Battle of Trafalgar, saving Admiral Nelson, or by keeping Salieri away from Mozart, or by making the smoke and mirrors a bit more conspicuous during Jesus’s miracles. Fantasy nerds would introduce dragons, horror nerds would introduce zombies. Thirsty housewives would create a world of only Colin Firths, bros would fill the world with pornstars, and 5-year-olds would push the meteor onto a different orbit and keep the dinos alive.

We, as a species, absolutely lack the restraint to not do this. So what future version of us wouldn’t? 

The first explanation is that it would take a lot of energy, and we wouldn't want to spend it on dicking around with alternate timelines. We'd need it for our important jobs of crashing big stars into bigger stars, or for developing a test that determines who is a human and who is a Cylon. In Bostrom's argument, the computers he's proposing are the size of planets. Maybe if we can pull shit like that off, we're not that interested in alternate histories anymore.

But the second explanation is the one I like the most: my suspicion is that, for a civilization to advance to the point of having that power, it would need to learn to draw a thick line between "can" and "should."

Scientists over the past few centuries have made enormous strides, and many of these strides have noticeably improved our lives. But at the same time, a lot of these new technologies have put the human race at serious risk. We’ve already gone through most of these so far, so no need to rehash — but might a civilization that survives be a civilization that learns a bit more discretion? Might a sustainable, long-lasting civilization be one that learns how to do without certain luxuries, and to just take certain parts of life as they come?

Our civilization has very obviously not learned this lesson yet, and it seems increasingly unlikely that we are simply going to science our way out of our problems.

Thought experiments and reality

A final note to make here — throughout history, certainly long before the invention of computers, humans have played around with some form of this thought experiment. In a more superstitious era it took the form of "Is all we see just a demon playing tricks on us?" In a more Frankensteinian era, it was "Are we just a brain dreaming in a mad scientist's jar?"

No matter which version you choose, the results are always somewhat similar, but they are invariably fun to play around with. That is the point of thought experiments. They push you into corners and make you think your way out. But there is always a problem with mistaking a thought experiment for reality: thought experiments were usually conjured up by smartass philosophers to make some sort of broader point about the nature of reality and of being human. 

And that’s the issue: reality does not exist to prove your point. It is far more complex and weird than we are capable of grasping. To take a thought experiment that is basically saying, “Things may not be as they seem,” and to say, “Well, the thought experiment is how it really is, then,” is to miss the point entirely.

Billionaires will always need to make themselves sound edgy and cool, especially if their billions are contingent on getting venture capital and government money thrown at them constantly. So if something sounds implausibly weird (and, you know, a bit like something you'd talk about while stoned), it's worth examining a bit more closely.

Yes. We could be living in a simulation. We could also be on the brink of extinction. We could be on the cusp of learning something important about responsibility and restraint. All three are possible futures laid out in front of us. The point of the trilemma is to ask: which do we want? And how do we get there?


If this topic is interesting to you, the science podcast The Infinite Monkey Cage did an excellent panel on it featuring Bostrom, neuroscientist Anil Seth, physicist Brian Cox, and the comedians Robin Ince and Phil Jupitus. You can find it here.

Featured photo by Steve Jurvetson.