Nornagest comments on [Open Thread] Stupid Questions (2014-02-17) - Less Wrong
I've been trying to wrap my head around arguments involving simulations, e.g. what to do if Skynet (replace with whatever AI you prefer to hate) threatens to torture a large number of simulations of you.
Here is my stupid question: why can't we humans use a similar threat? Why can't I say that, if you don't cooperate with my wishes, I'll torture imaginary versions of you inside my head? My brain is just another information-processing device, so what is it about a threat from my imagination that feels less compelling than one from Skynet?
I no longer find it totally implausible that imagined people might, if modeled in enough detail, be in some sense conscious -- it seems unlikely to me that human self-modeling and other-modeling logic would end up being that different -- but even if we take that as given, there are a couple of problems with threatening to imagine someone in an unpleasant situation.
The basic issue is asymmetry of information. You might be able to imagine someone who thinks, or even reliably acts, like your enemy, but no matter how good you are at personality modeling, that imagined person isn't going to have access to all, or even much, of your enemy's memories and experiences. Lacking that, your imagined enemy isn't cognitively equivalent to your real enemy in a way that would make the threat hold up.
(Skynet, by contrast, might be able to reproduce all that information by some means -- brain scanning, say, or some superhuman form of induction.)