
ESRogs comments on an ethical puzzle about brain emulation - Less Wrong Discussion

Post author: asr 13 December 2013 09:53PM



Comment author: ESRogs 15 December 2013 09:48:36AM 0 points

Could you explain the relevance of the GAZP? I'm not sure I'm following.

Also, would it be fair to characterize your argument as saying that C, D, and E are bad only because they include B as a prerequisite, and that the additional steps beyond just B are innocuous?

Comment author: TheOtherDave 15 December 2013 04:02:52PM 1 point

I think the relevance of the GAZP was supposed to be reasoning along the lines of:
1) Either (A1) consciousness is solely the result of brain-states being computed, or (A2) it involves some kind of epiphenomenal property.
2) The GAZP precludes epiphenomenal properties being responsible for consciousness.
3) Therefore A1.

The difficulty with this reasoning, of course, is that there's a huge excluded middle between A1 and A2.

"C, D, and E are bad only because they include B"

For my own part, I would not quite agree with this, though it's close.
I would agree that if a scenario includes (B, C, D, E), the vast bulk of the badness in that scenario is on account of B.

There might be some badness that follows from (C, D, E) alone. I certainly have a strong intuitive aversion to them, and while I suspect that preference would not be stable under reflection, I'm not strongly confident of that.