
kokotajlod comments on an ethical puzzle about brain emulation - Less Wrong Discussion

14 Post author: asr 13 December 2013 09:53PM

Comments (55)

Comment author: kokotajlod 17 December 2013 04:41:44AM 2 points [-]

I've been thinking a lot about this issue (and the broader issue that this is a special case of) recently. My two cents:

Under most views, this isn't just an ethical problem. It can be reformulated as a problem about what we ought to expect. Suppose you are John Smith. Do you anticipate different experiences depending on how far down the sequence your enemies will go? This makes the problem more pressing: while there is nothing wrong with valuing a system less and less as it gets less biological and more encrypted, there is something strange about thinking that a system is less and less... of a contributor to your expectations about the future. Perhaps this could be made to make sense, but it would take a bit of work. Alternatively, we could reject the notion of expectations and use some different model entirely. This "kicking away the ladder" approach raises worries of its own, though.

I think the problem generalizes even further, actually. Like others have said, this is basically one facet of an issue that includes terms like "dust theory" and "computationalism."

Personally, I'm starting to seriously doubt the computationalist theory of mind I've held since high school. Not sure what else to believe though.

Comment author: asr 17 December 2013 06:57:08AM 1 point [-]

Yes. I picked the ethical formulation as a way to make clear that this isn't just a terminological problem.

I like the framing in terms of expectation.

And I agree that this line of thought makes me skeptical about the computationalist theory of mind. The conventional formulations of computation abstract away so much about identity that you just can't hang a theory of mind and future expectation on what's left.

Comment author: summerstay 17 December 2013 03:09:01PM 0 points [-]

I think that arguments like this are a good reason to doubt computationalism. Rejecting it means accepting that two systems performing the same computations can have different experiences, even though they behave in exactly the same way. But we already should have suspected this: it's just like the inverted spectrum problem, where you and I both call the same flower "red," but the subjective experience I have is what you would call "green" if you had it. We know that most computations, even in our brains, are not accompanied by conscious perceptual experience, so it shouldn't be surprising if we can make a system that does whatever we want, but does it unconsciously.