In his essay "Epistemic Learned Helplessness," LessWrong contributor Scott Alexander wrote:
> Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.
I can't help but agree with Scott Alexander about the simulation argument. In my book, no one has ever refuted it. But the argument carries a dramatic and, to my eyes, frightening implication for our existential situation.
Joe Carlsmith's essay, Simulation Arguments, clarified some nuances, but ultimately the argument's conclusion remains the same.
When I looked for an answer on Reddit, the attempted counterarguments were weak and disappointing.
It's just that the claims below feel so obvious to me:
- It is physically possible to simulate a conscious mind.
- The universe is very big, and there are many, many aliens out there.
- Some aliens will run various simulations.
- The number of simulations that are "subjectively indistinguishable" from our own experience far outnumbers authentic evolved humans. (By "subjectively indistinguishable," I mean the simulated minds can't tell they're in a simulation. See the quick arithmetic sketch after this list.)
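To make the counting step explicit, here is a minimal sketch of the arithmetic I have in mind, assuming a bland indifference principle over subjectively indistinguishable observers (the labels $N_{\text{sim}}$ and $N_{\text{real}}$ are just mine for this post, not Bostrom's notation):

$$
P(\text{I am simulated}) \;=\; \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{real}}},
\qquad
N_{\text{sim}} \gg N_{\text{real}} \;\Rightarrow\; P(\text{I am simulated}) \approx 1.
$$

So if the fourth claim holds, the conclusion seems to follow almost mechanically.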
When someone challenges any of those claims, I'm immediately skeptical. I hope you can appreciate why they feel self-evident to me.
Thank you for reading all this. Now, I'll ask for your help.
Can anyone here provide a strong counter to Bostrom's simulation argument? If possible, I'd like to hear specifically from those who've engaged deeply and thoughtfully with this argument already.
Thank you again.
... but it's expensive, especially if you have to simulate its environment as well. You have to use a lot of physical resources to run a high-fidelity simulation. It probably takes irreducibly more mass and energy to simulate any given system with close to "full" fidelity than the system itself uses. You can probably get away with less fidelity than that, but nobody has provided any explanation of how much less or why that works.
There are other, more interesting and important ways to use that compute capacity. Nobody sane, human or alien, is going to waste it on running a crapton of simulations.
Also, nobody knows that all the simulated minds wouldn't be p-zombies, because, despite innumerable pompous, overconfident claims, nobody understands qualia. Nobody can prove that they're not a p-zombie, but do you think you're one? And do we care about p-zombies?
If that's true, and you haven't provided any evidence for it, then those aliens have many, many other things to simulate. The measure of humans among random aliens' simulations is going to be tiny if it's not zero.
Again, that doesn't imply that they're going to run enough of them for simulations to dominate the number of subjective experiences out there, or that any of them will be of humans.
Future humans, or human AI successors, if there are any of either, will probably also run "various simulations", but that doesn't mean they're going to dump the kind of vast resources you're demanding into them.
Um, no? Because all of the premises you're using to get there are wrong.
By that definition, a simulation that bounces frictionless billiard balls around and labels them as humans is "subjectively indistinguishable" from our own experience, since the billiard balls have no cognition and can't tell anything about anything at all. You need to do more than that to define the kind of simulation you really mean.
I've never understood why people make this argument:
Let's imagine that we crack the minimum requirements for sentience. I think we may already have accidentally done so, but table that for a moment. Will it really require that we simulate the entire hum...