I've been trying to wrap my head around arguments involving simulations, e.g. what to do if Skynet (replace with whatever AI you prefer to hate) threatens to torture a large number of simulations of you, etc.
Here is my stupid question: why can't we humans make a similar threat? Why can't I say that, if you don't cooperate with my wishes, I'll torture imaginary versions of you inside my head? My brain is an information-processing device too, so what is it about my imagination that makes its threats less compelling than Skynet's?
My own thinking about this whole class of questions starts with: is the agent making this threat actually capable of torturing systems that I would prefer (on reflection) not be tortured? If I'm confident they can, then they can credibly threaten me.
Among other things, this formulation lets me completely ignore whether Skynet's simulation of me is actually me. That's irrelevant to the question at hand. In fact, whether it's even a simulation of me, and indeed whether it's a person at all, is irrelevant. What's important is whether I prefer it not be tortured.
This is part of a two-week experiment on having more open threads.
Obvious answers aren't always obvious. If you feel silly for not understanding something, you're not alone. Ask a question here.
Previous stupid questions