In his essay Epistemic Learned Helplessness, LessWrong contributor Scott Alexander wrote:
> Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.
I can't help but agree with Scott Alexander about the simulation argument. No one has ever refuted it, in my book. However, the argument carries a dramatic and, in my eyes, frightening implication for our existential situation.
Joe Carlsmith's essay, Simulation Arguments, clarified some nuances, but ultimately the argument's conclusion remains the same.
When I looked on Reddit for the answer, the attempted counterarguments were weak and disappointing.
It's just that the claims below feel so obvious to me:
- It is physically possible to simulate a conscious mind.
- The universe is very big, and there are many, many aliens out there.
- Some aliens will run various simulations.
- Simulated minds whose experiences are "subjectively indistinguishable" from our own far outnumber authentic evolved humans. (By "subjectively indistinguishable," I mean the simulated minds can't tell they're in a simulation.)
When someone challenges any of those claims, I'm immediately skeptical of the challenge. I hope you can appreciate why those claims feel self-evident to me.
Thank you for reading all this. Now, I'll ask for your help.
Can anyone here provide a strong counter to Bostrom's simulation argument? If possible, I'd like to hear specifically from those who've engaged deeply and thoughtfully with this argument already.
Thank you again.
If the preliminary results of the poll hold, that would be fairly in line with my hypothesis that most people prefer creating simulations with no suffering over a world like ours. However, it is important to note that this might not be representative of human values in general: judging by your Twitter account, your audience comes mostly from a very specific circle of people (those interested in futurism and AI).
I was mostly trying to approach the problem from a slightly different angle. I didn't mean to suggest that memories of intense suffering are themselves intense.
As far as I understand it, your hypothesis was that Friendly AI temporarily turns people into p-zombies during moments of intense suffering. So, it seems that someone experiencing intense suffering while conscious (p-zombies aren't conscious) would count as evidence against it.
Reports of conscious intense suffering are abundant. Pain from endometriosis (a condition that affects roughly 10% of women worldwide) can be so brutal that completely unrelated women have told the internet that their pain was bad enough to make them want to die (here and here).
If moments of intense suffering were replaced by p-zombies, these women would have simply lost consciousness during the worst of the pain and would have had no such experience to report.
From their perspective, it would've looked like this: as the condition progresses, the pain gets worse, and at some point they lose consciousness, only to regain it when everything is already over. They wouldn't have experienced the intense pain they report having experienced. Ditto for all POWs who have experienced torture.
That's a totally valid view as far as axiological views go, but for us to be in your proposed simulation, the Friendly AI must also share it. After all, we are imagining a situation where it goes on to carry out a complicated scheme that depends on a lot of controversial assumptions. To me, that suggests the AI has so many resources that it wouldn't mind one of those assumptions turning out to be false and losing everything it had invested. If the AI has that many resources, I think it isn't unreasonable to ask why it didn't prevent suffering that is not intense (at least in the way I think you are using the word) but is still very bad, like breaking an arm or undergoing a hard dental procedure without anesthesia.
This Friendly AI would have a very peculiar value system. It is utilitarian, but it has a very specific view of suffering, under which suffering below a certain threshold basically doesn't count for much. It is seemingly rational (a Friendly AI that managed to get its hands on so many resources should possess at least some level of rationality), but it chooses the highly risky and relatively costly plan of Resurrection Simulation over just creating simulations that are maximally efficient at converting resources into value.
There is another, somewhat related issue. Imagine a population of Friendly AIs consisting of two types, both of which really like the idea of simulations.
Type A: AIs that would opt for Resurrection Simulation.
Type B: AIs that would opt for simulations that are maximally efficient at converting resources into value.
Given the unnecessary complexity of our world (all of the empty space, quantum mechanics, etc.), it seems fair to say that Type B AIs would be able to simulate more humans, because they would have more resources left for that task (Type A AIs spend some of their resources on the aforementioned complexity). Given plausible anthropics, and assuming the number of Type A AIs equals the number of Type B AIs in our population, we would expect to find ourselves in a Type B AI's simulation (but we are, unfortunately, not).
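To make the anthropic bookkeeping concrete, here is a toy calculation; the numbers are made up purely for illustration and aren't from anyone's actual estimates:

```python
# Toy anthropic calculation with made-up numbers, purely illustrative.
type_a_count = 100         # Type A AIs (Resurrection Simulations)
type_b_count = 100         # Type B AIs (maximally efficient simulations)
observers_per_a = 1_000    # fewer observers each: resources spent on "unnecessary complexity"
observers_per_b = 10_000   # more observers each: resources go straight into simulated minds

total_a = type_a_count * observers_per_a
total_b = type_b_count * observers_per_b

# Chance that a randomly selected simulated observer is in a Type B simulation.
print(total_b / (total_a + total_b))  # ~0.91
```

With equal numbers of both types, the share of observers in Type B simulations just tracks the efficiency ratio, so any efficiency edge for Type B pushes the anthropic expectation toward their simulations.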
For us to be in a Resurrection Simulation (focusing just on these two value systems a future Friendly AI might have), there would have to be more Type A AIs than Type B AIs, enough more to offset Type B's greater efficiency. I think that is going to be very hard to show. And this isn't me being nitpicky; Type B AI is genuinely much closer to my personal value system than Type A AI.
I don't think the simulations that you described are technically impossible. I am not even necessarily against simulations in general. I just think that, given observable evidence, we are not that likely to be in either of the simulations that you have described.