
Baughn comments on an ethical puzzle about brain emulation - Less Wrong Discussion

14 Post author: asr 13 December 2013 09:53PM



Comment author: Baughn 13 December 2013 10:45:23PM  5 points

I would say that, by the time you get to C, there probably isn't any problem anymore. You're not actually computing the torture; or, rather, you already did that.

Scenario C is actually this:

You scan John Smith's brain, run a detailed simulation of his being tortured while streaming the intermediate stages to disk, and then stream the disk state back to memory (for no good reason).

There is torture there, to be sure; it's in the "detailed simulation" step. I find it hard to believe that streaming, without doing any serious computation, is sufficient to produce consciousness. Scenarios D and E are the same. Now, if you manage to construct scenario B in a homomorphic encryption system, then I'd have to admit to some real uncertainty.
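The distinction Baughn is drawing can be made concrete with a toy sketch: in scenario B each state is derived from the previous one, while in scenario C the already-computed states are merely streamed back. (The function names and the stand-in `step` transition are illustrative, not anything from the original scenarios.)

```python
# Toy contrast between computing a simulation (scenario B) and
# replaying states that were already computed and stored (scenario C).

def step(state):
    # stand-in transition function; a real emulation would go here
    return state + 1

def simulate(state, steps):
    """Scenario B: each new state is computed from the last."""
    history = [state]
    for _ in range(steps):
        state = step(state)          # the real computation happens here
        history.append(state)
    return history

def replay(history):
    """Scenario C: stream the stored states back out; nothing is derived."""
    for state in history:
        yield state                  # pure I/O, no computation
```

On Baughn's view, whatever moral weight there is attaches to `simulate`, which runs `step`; `replay` only moves bytes around.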

Comment author: Pentashagon 15 December 2013 07:51:56AM 1 point

Now, if you manage to construct scenario B in a homomorphic encryption system, then I'd have to admit to some real uncertainty.

I don't think that's different even if we threw away the private key before beginning the simulation. It's akin to sending spaceships beyond the observable edge of the universe or otherwise hiding parts of reality from ourselves. In fact, I think it may be beneficial to live in a homomorphically encrypted environment that is essentially immune to outside manipulation. It could be made to either work flawlessly or acquire near-maximum entropy at every time step with very high probability and with nearly as much measure in the "works flawlessly" region as a traditional simulation.
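Pentashagon's "throw away the private key" idea has a simple structure: the computation advances an encrypted state without ever exposing it, and deleting the key makes the contents unrecoverable while the computation still happened. The following is a deliberately trivial additive one-time-pad, not real homomorphic encryption; it only illustrates that structure.

```python
# Toy "simulate under encryption, then discard the key" sketch.
# This is NOT a secure or genuinely homomorphic scheme; it just shows
# how a state can be advanced without the computation seeing it.

N = 2**32  # work modulo a fixed word size

def encrypt(state, key):
    return (state + key) % N

def decrypt(cipher, key):
    return (cipher - key) % N

def homomorphic_step(cipher, delta=1):
    # advances the underlying state by `delta` without decrypting:
    # (state + key) + delta == (state + delta) + key  (mod N)
    return (cipher + delta) % N

key = 123456789            # pretend this is the private key
c = encrypt(42, key)
for _ in range(10):
    c = homomorphic_step(c)
# With the key, decrypt(c, key) recovers the advanced state (52).
# Delete `key`, and the simulation ran but its contents are sealed off.
```

In a real fully homomorphic scheme the "step" could be an arbitrary circuit rather than a fixed public addition, which is what makes Baughn's scenario-B-under-encryption case interesting.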

Comment author: passive_fist 15 December 2013 06:34:09AM 1 point

I find it hard to believe that streaming, without doing any serious computation, is sufficient to produce consciousness.

That's the key observation here, I think. There's a good case to be made that scenario B has consciousness. But does scenario C have it? It's not so obvious anymore.