In his essay "Epistemic Learned Helplessness," LessWrong contributor Scott Alexander wrote:
> Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.
I can't help but agree with Scott Alexander about the simulation argument. In my book, no one has ever refuted it. However, the argument carries a dramatic and, in my eyes, frightening implication for our existential situation.
Joe Carlsmith's essay, Simulation Arguments, clarified some nuances, but ultimately the argument's conclusion remains the same.
When I looked on Reddit for the answer, the attempted counterarguments were weak and disappointing.
It's just that the claims below feel so obvious to me:
- It is physically possible to simulate a conscious mind.
- The universe is very big, and there are many, many other aliens.
- Some aliens will run various simulations.
- The number of simulated minds that are "subjectively indistinguishable" from our own experience far outnumbers authentic evolved humans. (By "subjectively indistinguishable," I mean the simulated minds can't tell they're in a simulation.)
When someone challenges any of those claims, I'm immediately skeptical. I hope you can appreciate why those claims feel evident.
Thank you for reading all this. Now, I'll ask for your help.
Can anyone here provide a strong counter to Bostrom's simulation argument? If possible, I'd like to hear specifically from those who've engaged deeply and thoughtfully with this argument already.
Thank you again.
I've never understood why people make this argument:
Let's imagine that we crack the minimum requirements for sentience. I think we may already have done so accidentally, but table that for a moment. Will it really require that we simulate the entire human brain down to every last particle, or is it plausible that it will require only a bare minimum of mathematical abstractions?
Additionally, we've seen people build working computers inside Minecraft and LittleBigPlanet. Now let's pretend there was a conscious NPC inside one of those games, looking at their in-game computer and wondering:
and the other sentient NPC says:
For all we know, all the particles in our universe are as cheap to run as the pixels on screen during a game of Pac-Man, and/or perhaps the quantum observer effect is analogous to an advanced form of view-frustum culling.
Why would we ever assume that an outer reality simulating our own has similar computational resource constraints, or that resource constraints are even a meaningful concept there?
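The culling analogy above can be sketched in code. This is a toy illustration (the class name and structure are my own invention, not anything from the simulation-argument literature): a "world" whose cells are computed only when observed, so resource cost scales with what is looked at rather than with the size of the world.

```python
import random

class LazyWorld:
    """Toy world that materializes state only on observation,
    loosely analogous to view-frustum culling: regions never
    observed consume no memory or compute."""

    def __init__(self, seed=42):
        self.seed = seed
        self._materialized = {}  # only observed cells are ever stored

    def observe(self, x, y):
        # State is generated deterministically on first observation,
        # then cached, so repeated observations stay consistent.
        if (x, y) not in self._materialized:
            rng = random.Random(hash((self.seed, x, y)))
            self._materialized[(x, y)] = rng.choice(["vacuum", "particle"])
        return self._materialized[(x, y)]

world = LazyWorld()
first = world.observe(3, 7)
assert world.observe(3, 7) == first   # observations are stable
assert len(world._materialized) == 1  # nothing unobserved was computed
```

The point of the sketch is only that an "arbitrarily large" world can be presented consistently to its inhabitants while the simulator pays only for the observed slice, which is the intuition behind the observer-effect analogy.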