
orthonormal comments on Life Extension versus Replacement - Less Wrong Discussion

Post author: Julia_Galef | 30 November 2011 01:47AM | 13 points


Comments (98)

You are viewing a single comment's thread.

Comment author: orthonormal 03 December 2011 12:07:11AM 4 points

At present, there aren't any truly intermediate cases, so "agents with an identity over time" is a useful concept to include in our models; likewise, if all red objects in a domain are cubic and contain vanadium, "rube" becomes a useful concept.

In futures where mind-copying and mind-engineering become commonplace, this regularity will no longer hold, and our decision theories will need to incorporate more exotic kinds of "agents" in order to be successful. I'm not saying that agents are fundamental (they aren't), just that they're tremendously useful components of certain approximations, like the wings of an airplane in a simulator.

Even if a concept isn't fundamental, that doesn't mean you should exclude it from every model. Check instead to see whether it pays rent.

Comment author: DanielLC 03 December 2011 01:10:31AM -2 points

My point isn't that it's a useless concept. It's that it would be silly to consider it morally important.

Comment author: Vladimir_Nesov 03 December 2011 01:15:34PM 4 points

You argued that a concept "isn't fundamental" because in principle it's possible to construct things that gradually escape the current natural category, and that it's therefore morally unimportant. Can you give an example of a morally important category?

Comment author: orthonormal 03 December 2011 01:47:06AM 2 points

Sorry, but my moral valuations aren't up for grabs. I'm not perfectly selfish, but neither am I perfectly altruistic; I care more about the welfare of agents more like me, and particularly about the welfare of agents who happen to remember having been me. That valuation has been drummed into my brain pretty thoroughly by evolution, and it may well survive in any extrapolation.

But at this point, I think we've passed the productive stage of this particular discussion.