In his essay "Epistemic Learned Helplessness," LessWrong contributor Scott Alexander wrote:
> Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.
I can't help but agree with Scott Alexander about the simulation argument. No one has ever refuted it, in my book. However, the argument carries a dramatic and, in my eyes, frightening implication for our existential situation.
Joe Carlsmith's essay, Simulation Arguments, clarified some nuances, but ultimately the argument's conclusion remains the same.
When I looked for counterarguments on Reddit, the attempts I found were weak and disappointing.
It's just that the claims below feel so obvious to me:
- It is physically possible to simulate a conscious mind.
- The universe is very big, and it contains many, many alien civilizations.
- Some of those civilizations will run various simulations.
- Simulated experiences that are "subjectively indistinguishable" from our own far outnumber those of authentic evolved humans. (By "subjectively indistinguishable," I mean the simulated minds can't tell they're in a simulation; see the sketch just below.)
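To put rough numbers on that last claim: the sketch below is a simplified version of the ratio in Bostrom's paper, where $f_p$ stands for the fraction of civilizations that reach a simulation-capable stage and $\bar{N}$ for the average number of subjectively indistinguishable simulated minds each such civilization runs per evolved mind. Both symbols and the example values are my own illustrative assumptions, not anything from the original argument:

$$f_{\text{sim}} = \frac{f_p\,\bar{N}}{f_p\,\bar{N} + 1}$$

Even seemingly modest values, say $f_p = 0.01$ and $\bar{N} = 10^6$, give $f_{\text{sim}} \approx 0.9999$, i.e. nearly all minds with experiences like ours would be simulated.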
When someone challenges any of these claims, I'm immediately skeptical of the challenge. I hope you can appreciate why the claims feel self-evident to me.
Thank you for reading all this. Now, I'll ask for your help.
Can anyone here provide a strong counter to Bostrom's simulation argument? If possible, I'd like to hear specifically from those who've engaged deeply and thoughtfully with this argument already.
Thank you again.
I think a more meta-level argument holds: it is almost impossible to prove that no possible civilization would run simulations of us, despite having all the data about us (or being able to generate it from scratch).
Such a proof would require enumerating many assumptions about goal systems and ethics, and then showing that under every plausible combination of goals and ethics, running simulations is either unlikely or immoral. That is a monumental task, and a single counterexample defeats it.
I also polled people in my social network, and 70 percent said they would want to create a simulation containing sentient beings. Creating simulations appears to be a strong human value.
More generally, I think human life is good overall, so adding one more century of human existence (even a simulated one) is good, and negative utilitarianism is false.
However, I am against recreating intense suffering in simulations, and I think this can be addressed by blunting people's feelings during episodes of extreme suffering (temporarily turning them into p-zombies). Since I am not in intense suffering now, this is consistent with my being in a simulation.
Now to your counterarguments:
1. Here again, people who would prefer never to be simulated can be predicted in advance and turned into p-zombies.
2. While a measure war is unlikely, it would by definition generate so much measure that we could be inside it. It would also solve s-risks, so it's not a bad idea.
3. Curing past suffering relies on a complex reassortment of observer-moments, the details of which I won't discuss here. Consider that every moment in pain would be compensated by 100 years in bliss, which is good from a utilitarian view (see the sketch after this list).
4. It is actually very cost-effective to run a simulation of a problem you want to solve if you have a lot of computing power.
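As a minimal sketch of the utilitarian accounting behind point 3, assume simple additive utilitarianism with disutility $u$ per moment of pain and utility $b$ per moment of bliss; the symbols and the moment-to-seconds conversion are my own illustrative assumptions, not part of the original argument:

$$\Delta U = b\,T_{\text{bliss}} - u\,t_{\text{pain}} > 0 \quad\Longleftrightarrow\quad \frac{b}{u} > \frac{t_{\text{pain}}}{T_{\text{bliss}}}$$

With $t_{\text{pain}}$ a single moment (roughly a second) and $T_{\text{bliss}}$ equal to 100 years' worth of such moments, the right-hand ratio is on the order of $10^{-9}$, so the trade comes out net positive unless bliss counts for vastly less than pain. Note that this accounting presupposes bliss can compensate suffering at all, i.e. that negative utilitarianism is false, as claimed above.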