
Nick_Tarleton comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future

Post author: inklesspen 01 March 2010 02:32AM




Comment author: LucasSloan 03 March 2010 06:53:47PM 1 point

The point about the complexity of human value is that any small variation will result in a valueless world. The point is that a randomly chosen utility function, or one derived from some simple task, is not going to produce the sort of behavior we want. Or, to put it more succinctly, Friendliness doesn't happen without hard work. This doesn't mean that the hardest sub-goal on the way to Friendliness is figuring out what humans want, although Eliezer's current plan is to sidestep that whole issue.

Comment author: Nick_Tarleton 03 March 2010 06:58:10PM 0 points

The point about the complexity of human value is that any small variation will result in a valueless world.

s/is/isn't/ ?

Comment author: LucasSloan 03 March 2010 07:00:35PM 1 point

Fairly small changes would result in boring, valueless futures.

Comment author: Nick_Tarleton 03 March 2010 07:08:48PM 1 point

Okay, the structure of that sentence and the next ("the point is.... the point is....") made me think you might have made a typo. (I'm still a little confused, since I don't see how small changes are relevant to anything Tim Tyler mentioned.)

I strongly doubt that literally any small change would result in a literally valueless world.

Comment author: Vladimir_Nesov 03 March 2010 10:41:34PM 0 points

I strongly doubt that literally any small change would result in a literally valueless world.

People who suggest that a given change in preference isn't going to be significant are usually talking about changes that are morally fatal.

Comment author: Nick_Tarleton 03 March 2010 10:47:30PM 0 points

This is probably true; I'm talking about the literal universally quantified statement.

Comment author: JGWeissman 03 March 2010 07:37:59PM 0 points

I would have cited Value is Fragile to support this point.

Comment author: LucasSloan 03 March 2010 07:40:09PM 0 points

That's also good.