In his essay Epistemic Learned Helplessness, the LessWrong contributor Scott Alexander wrote:
> Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.
I can't help but agree with Scott Alexander about the simulation argument. No one has ever refuted it, in my book. However, the argument carries a dramatic and, in my eyes, frightening implication for our existential situation.
Joe Carlsmith's essay, Simulation Arguments, clarified some nuances, but ultimately the argument's conclusion remains the same.
When I looked for counterarguments on Reddit, the attempts I found were weak and disappointing.
It's just that the claims below feel so obvious to me:
- It is physically possible to simulate a conscious mind.
- The universe is very big, and there are many, many alien civilizations.
- Some aliens will run various simulations.
- The number of simulated minds whose experiences are "subjectively indistinguishable" from our own far exceeds the number of authentic evolved humans. (By "subjectively indistinguishable," I mean the simulated minds can't tell they're in a simulation.) I sketch the arithmetic this implies just below.
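To spell out why that last claim does the heavy lifting: as I understand Bostrom's "bland indifference principle," if your experiences can't distinguish simulation from reality, your credence in being simulated should match the fraction of such observers that are simulated. Here is a minimal sketch, where N_sim and N_real are my own labels for the counts of simulated and unsimulated observers with experiences like ours:

```latex
% A sketch of the indifference reasoning (the labels are mine, not Bostrom's):
%   N_sim  = count of simulated observers whose experiences match ours
%   N_real = count of authentic, unsimulated observers whose experiences match ours
P(\text{I am simulated}) = \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{real}}}
% If the four claims above hold, N_sim vastly exceeds N_real,
% so this probability is close to 1.
```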
When someone challenges any of those claims, I'm immediately skeptical of the challenge. I hope you can appreciate why those claims feel evident to me.
Thank you for reading all this. Now, I'll ask for your help.
Can anyone here provide a strong counterargument to Bostrom's simulation argument? If possible, I'd especially like to hear from those who've already engaged deeply and thoughtfully with it.
Thank you again.
I'm sorry to butt into your conversation, but I do have some points of disagreement.
I think that's a very high bar to set. It's almost impossible to definitively prove that we are not in a Cartesian demon or brain-in-a-vat scenario. But that doesn't make those scenarios likely. I think it's fair to say that more than mere possibility is required to establish that we are living in a simulation.
I think some clarifications are needed here. How was the question phrased? I expect that some people would be fine with creating simulations of worlds where people experience pure bliss, but not necessarily our world. I would especially expect this if the possibility of a "pure bliss" world was explicitly mentioned. Something like: "Would you want to spend resources to create a simulation of a world like ours (with all of its 'ugliness') when you could instead use them to create a world of pure bliss?"
Would you say that someone who experiences intense suffering should drastically decrease their credence in being in a simulation? Would someone else reporting having experienced intense suffering decrease your credence in being in a simulation? And why would only moments of intense suffering be replaced by p-zombies? Why not replace all moments of non-trivial suffering (breaking an arm or a leg, dental procedures without anesthesia, etc.) with p-zombies? Some might consider these examples of pretty unbearable suffering, especially while experiencing them.
From a utilitarian view, why would simulators opt for a Resurrection Simulation? Why not just simulate a world that's maximally efficient at converting computational resources into utility? Our world contains quite a bit of suffering (both intense and non-intense), as well as a lot of wasted resources (vast stretches of empty space, complicated quantum mechanics, etc.). It seems very suboptimal on that view.
Why would an Unfriendly AI go to the trouble of actually making us conscious? Surely, if we already accept the notion of p-zombies, an Unfriendly AI could just create simulations full of p-zombies and save a lot of computational power.
But there is also an interesting question of why this superintelligence would choose to make our world the way it is. Presumably, in the "real world" there is an unfriendly superintelligence (with vast amounts of resources) that wants to avoid dying. Why would it not start the simulations from that moment? Surely, by starting the simulation "earlier" than the current moment in the "real world", it adds a lot of unnecessary noise to the results of its experiment (all of the outcomes that can happen in our simulation but can't happen in the real world).