In his essay "Epistemic Learned Helplessness," LessWrong contributor Scott Alexander wrote:
> Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.
I can't help but agree with Scott Alexander about the simulation argument. In my book, no one has ever refuted it. However, the argument carries a dramatic and, in my eyes, frightening implication for our existential situation.
Joe Carlsmith's essay, Simulation Arguments, clarified some nuances, but ultimately the argument's conclusion remains the same.
When I searched Reddit for an answer, the attempted counterarguments I found were weak and disappointing.
It's just that the claims below feel so obvious to me:
- It is physically possible to simulate a conscious mind.
- The universe is very big, and it contains many, many alien civilizations.
- Some of those civilizations will run various simulations.
- Simulated minds whose experiences are "subjectively indistinguishable" from our own far outnumber authentic evolved humans. (By "subjectively indistinguishable," I mean the simulated minds can't tell they're in a simulation; a toy version of this counting step is sketched below.)
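To make the last claim concrete, here's a minimal sketch of the counting step, with numbers I invented purely for illustration (none of them come from Bostrom's paper). It applies Bostrom's "bland indifference principle": your credence that you're simulated should equal the fraction of subjectively indistinguishable observers who are simulated.

```python
# Toy version of the counting step behind claim four.
# All quantities are invented for illustration.

real_civilizations = 1_000        # assumed: civilizations that evolved naturally
sims_per_civilization = 10_000    # assumed: indistinguishable sims each one runs

simulated_observers = real_civilizations * sims_per_civilization
evolved_observers = real_civilizations  # normalize: one observer population each

# Bland indifference principle: credence of being simulated equals the
# fraction of subjectively indistinguishable observers who are simulated.
p_simulated = simulated_observers / (simulated_observers + evolved_observers)
print(f"P(simulated) = {p_simulated:.4f}")  # -> 0.9999
```

As long as the sims-per-civilization number is large, the exact figures barely matter; the credence is driven toward 1 by the ratio alone.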
When someone challenges any of those claims, I'm immediately skeptical of the challenge. I hope you can appreciate why the claims feel self-evident to me.
Thank you for reading all this. Now, I'll ask for your help.
Can anyone here provide a strong counter to Bostrom's simulation argument? If possible, I'd like to hear specifically from those who've engaged deeply and thoughtfully with this argument already.
Thank you again.
I think I understand your point, and I agree with you: the simulation argument relies on the assumption that physics and logic are the same inside and outside the simulation. In my eyes, that means we must either accept the argument's conclusion or discard that assumption. I'm open to either, and you seem to be too, at least at first. Yet you immediately decline to discard the assumption, for practical reasons:
I agree with this statement, and that's my fear. However, you don't seem bothered by it. Why not? Strangest of all, I think you agree with my claim that "the simulation argument should increase our credence that our entire understanding of everything is flawed." Yet somehow that doesn't frighten you. What do you see that I don't? Practical concerns don't change the territory outside our false world.
Second:
That's surely possible, but I can imagine hundreds of other stories. In most of them, altruism from within the simulation has no effect on anyone outside it. Worse, there are some stories in which inflicting pain within a simulation is rewarded outside of it. Here's one hypothetical:
Imagine humans in base reality create friendly AI. To honor their past, they ask the AI to create tons of sims set in different eras. Since some historical information has been lost, the sims differ slightly from base reality, so in each sim there's a chance that AI never becomes aligned. Accounting for this possibility, the base-reality humans decide to end any sim in which AI becomes misaligned and to replace it with a paradise sim where everyone is happy.
In the above scenario, both total and average utilitarianism would recommend that the people inside a sim intentionally create misaligned AI so that paradise ensues (a toy calculation follows).
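Here's that calculation, with utilities I made up; the only structural assumption carried over from the scenario is that paradise sims are better, person for person, than ordinary aligned sims.

```python
# Invented utilities for the hypothetical above. A sim that stays aligned
# runs its ordinary course; a sim whose AI goes misaligned is ended and
# replaced by a paradise sim of the same population.

population = 1_000_000
u_ordinary = 50.0   # assumed average utility in an aligned, ordinary sim
u_paradise = 100.0  # assumed average utility in the replacement paradise sim

aligned_total = population * u_ordinary
replaced_total = population * u_paradise

# Total utilitarianism: the replaced (paradise) outcome has more total utility.
print(replaced_total > aligned_total)           # True
# Average utilitarianism: the paradise sim also has the higher average.
print(u_paradise > aligned_total / population)  # True
# So, from inside a sim, both criteria favor engineering misalignment.
```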
I'm sure you can craft even more plausible stories.
My point is that even if our understanding of physics and logic is correct, I don't see why we ought to privilege the hypothesis that simulations are memories. I also don't see why we ought to privilege the idea that it's in our interest to increase utility within the simulation. Can you clarify why you're so confident about these two notions?
Thank you.