In his essay *Epistemic Learned Helplessness*, LessWrong contributor Scott Alexander wrote:
> Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.
I can't help but agree with Scott Alexander about the simulation argument. In my book, no one has ever refuted it. However, the argument carries a dramatic and, in my eyes, frightening implication for our existential situation.
Joe Carlsmith's essay *Simulation Arguments* clarified some nuances, but ultimately the argument's conclusion remains the same.
When I looked on Reddit for counterarguments, the attempts I found were weak and disappointing.
It's just that the claims below feel so obvious to me:
- It is physically possible to simulate a conscious mind.
- The universe is very big, and there are many, many alien civilizations.
- Some aliens will run various simulations.
- The number of simulations that are "subjectively indistinguishable" from our own experience far outnumbers the authentic evolved humans. (By "subjectively indistinguishable," I mean the simulated minds can't tell they're in a simulation.)
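To make that last claim concrete, here's a minimal sketch of the arithmetic behind it, in Python, with every number invented purely for illustration: if even a small fraction of civilizations each run many subjectively indistinguishable simulations, simulated observers swamp the authentic evolved ones.

```python
# Toy arithmetic behind the last claim: what fraction of all observers are
# simulated? Every input here is a made-up illustrative assumption.

def fraction_simulated(n_civilizations: int,
                       frac_running_sims: float,
                       sims_per_civilization: int,
                       observers_per_world: int) -> float:
    """Fraction of all observers who live inside a simulation."""
    real_observers = n_civilizations * observers_per_world
    simulated_observers = (n_civilizations * frac_running_sims
                           * sims_per_civilization * observers_per_world)
    return simulated_observers / (real_observers + simulated_observers)

# Even if only 0.1% of a million civilizations run a million sims each,
# ~99.9% of observers are simulated:
print(fraction_simulated(10**6, 0.001, 10**6, 10**10))  # ~0.999
```

The exact inputs don't matter much; once sims-per-civilization is large, the ratio is driven almost entirely by how many simulations get run.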
When someone challenges any of those claims, I'm immediately skeptical of the challenge. I hope you can appreciate why these claims feel self-evident to me.
Thank you for reading all this. Now, I'll ask for your help.
Can anyone here provide a strong counter to Bostrom's simulation argument? If possible, I'd like to hear specifically from those who've engaged deeply and thoughtfully with this argument already.
Thank you again.
We have to infer how reality works somehow.
I've been poking at the philosophy of math recently. It really seems like there's no way to conceive of a universe that is beyond the reach of logic, except one that also can't support life. Classic reads include *The Unreasonable Effectiveness of Mathematics*, *What Numbers Could Not Be*, and a few others. So then we need epistemology.
We can make all sorts of wacky nested simulations, and any interesting ones, i.e. ones that can support organisms (that is, ones that are Turing complete), can also support processes for predicting outcomes in that universe; and those processes appear to necessarily need to reason about what is "simple" in some sense in order to work. So that seems to hint that algorithmic information theory isn't crazy (unless I just hand-waved over a dependency loop, which I totally might have done; it's midnight), which means we can use the equivalence of Turing-complete structures to assume we can infer things about the universe. Maybe not full Solomonoff induction, but some form of empirical induction. And then we've justified ordinary reasoning about what's simple.
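To gesture at what reasoning about "simple" could look like mechanically, here's a toy simplicity prior in Python. It's a crude stand-in for Solomonoff induction (which weights hypotheses by the length of the shortest program generating the data, and is uncomputable); the hypotheses and description lengths below are hand-assigned assumptions, purely to show the mechanism.

```python
# Toy "simplicity prior": weight each hypothesis by 2^(-description length),
# then keep only hypotheses consistent with the observed data. The hypothesis
# set and lengths are invented for illustration; real Solomonoff induction
# uses program lengths on a universal Turing machine.
from fractions import Fraction

hypotheses = {
    # name: (predicate over the data, description length in bits)
    "all zeros":        (lambda bits: all(b == 0 for b in bits), 3),
    "all ones":         (lambda bits: all(b == 1 for b in bits), 3),
    "alternating 0101": (lambda bits: all(b == i % 2 for i, b in enumerate(bits)), 5),
    "anything goes":    (lambda bits: True, 12),  # complex catch-all
}

def posterior(data):
    weights = {name: Fraction(1, 2**length)
               for name, (fits, length) in hypotheses.items()
               if fits(data)}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(posterior([0, 1, 0, 1, 0, 1]))
# {'alternating 0101': Fraction(128, 129), 'anything goes': Fraction(1, 129)}
```

The short "alternating" hypothesis crushes the long catch-all once both fit the data; that preference for shorter descriptions is the whole trick being pointed at.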
Okay, so we can reason normally about simplicity. Which universes produce observers like us and arise from mathematically simple rules? Lots of them, but it seems to me the main ones produce us via base physics. Then, because there was an instance in base physics, we also get produced in neighboring civilizations' simulations of what else base physics might have done in nearby galaxies, run so they can predict what kind of superintelligent aliens they might be negotiating with before they meet each other. Or they produce us via base physics, and then we get instantiated again later by someone trying to figure out what we did. Ancestor sims require very good outcomes, which seem rare, so those branches are lower measure anyway; but also, ancestor sims don't get to produce superintelligent AI separate from the original causal influence.
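If it helps to see the shape of that bookkeeping, here's a toy version in Python. Every weight below is a number I'm inventing purely for illustration; the only point is the pattern of multiplying a branch's prior weight by the instances of us it contains.

```python
# Toy measure bookkeeping for the branches above. All numbers are invented
# for illustration; measure of a branch ~ (prior weight) * (instances of us).
branches = {
    "base physics":                 1.0  * 1,
    "neighbor-civ prediction sims": 0.2  * 3,
    "retrodiction sims":            0.2  * 2,
    "ancestor sims":                0.01 * 5,   # gated on rare very-good outcomes
}

total = sum(branches.values())
for name, measure in branches.items():
    print(f"{name:30s} {measure / total:.2f}")
# Ancestor sims end up with little measure, and every sim branch is
# downstream of a base-physics instance anyway.
```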
Point is: no, what's going on in the simulations is nearly entirely irrelevant. We're in base physics somewhere. Get your head out of the simulation clouds and choose what you do in base physics, not based on how it affects your simulators' opinion of the simulation's moral valence. Leave that sort of crazy stuff to friendly AI; you can't understand superintelligent simulators, which we can't even get evidence exist beyond plausible but very galaxy-brained abstract arguments.
(Oh, it might be relevant that I'm a halfer when making predictions and a thirder when choosing actions; see Anthropic Decision Theory for an intuition on that.)
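For anyone who hasn't run into the halfer/thirder split: it comes from the Sleeping Beauty problem. Here's a quick Monte Carlo sketch in Python (my own illustration, not from that paper) showing how both numbers fall out of the same setup, depending on whether you count per experiment or per awakening.

```python
# Sleeping Beauty in Monte Carlo form: heads -> woken once, tails -> woken
# twice, with no memory between wakings. "Halfer" answers count per
# experiment; "thirder" answers count per awakening. Illustration only.
import random

experiments = 100_000
heads_experiments = 0
awakenings = 0
heads_awakenings = 0

for _ in range(experiments):
    heads = random.random() < 0.5
    wakings = 1 if heads else 2
    awakenings += wakings
    if heads:
        heads_experiments += 1
        heads_awakenings += wakings

print("P(heads) per experiment:", heads_experiments / experiments)  # ~0.5, the halfer answer
print("P(heads) per awakening: ", heads_awakenings / awakenings)    # ~0.33, the thirder answer
```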