solipsist comments on Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda - Less Wrong Discussion

25 points · Post author: RobbBB 26 November 2014 11:02AM


Comment author: solipsist 26 November 2014 07:52:24PM 1 point [-]

Yeah, I follow. I'll bring up another wrinkle (which you may already be familiar with): suppose the objective you're maximizing never equals or exceeds 20. You can reach 19.994, 19.9999993, 19.9999999999999995, but never actually reach 20. Then even though your objective function is bounded, you will still try to optimize forever, and may resort to increasingly desperate measures to eke out another 0.000000000000000000000000001.

Comment author: Unknowns 26 November 2014 07:58:10PM -2 points [-]

Yes, this would happen if you take an unbounded function and simply map it to a bounded function without actually changing it. That is why I am suggesting admitting that you don't really have an infinite capacity for caring: describing what you care about as though you cared infinitely is mistaken, whether you describe it with an unbounded or with a bounded function. This requires admitting that scope insensitivity, past a certain point, is not a bias but an objective fact: at some point you really don't care anymore.
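The distinction being drawn here can be made concrete (my own sketch, with utility functions invented for illustration): a monotone remap of an unbounded utility into a bounded range preserves the preference ordering, so the agent's behavior doesn't change at all, whereas a utility that genuinely saturates really does stop caring past its cap.

```python
# Two ways to "bound" an unbounded utility. Squashing preserves the
# ordering (same behavior); saturating makes the agent truly indifferent
# past the cap (genuine scope insensitivity).

def unbounded_u(x: float) -> float:
    return x  # cares without limit

def squashed_u(x: float) -> float:
    # Monotone remap of x >= 0 into [0, 20): bounded, but the same
    # preferences, so the same optimization pressure.
    return 20.0 * x / (1.0 + x)

def saturating_u(x: float, cap: float = 100.0) -> float:
    # Flat beyond the cap: beyond this point the agent really
    # doesn't care anymore.
    return min(x, cap)

outcomes = [1.0, 50.0, 150.0, 1000.0]
# Squashing ranks outcomes identically to the unbounded utility...
print(sorted(outcomes, key=squashed_u) == sorted(outcomes, key=unbounded_u))
# ...while the saturating utility is indifferent between 150 and 1000.
print(saturating_u(150.0) == saturating_u(1000.0))
```

On this picture, the honest move is the saturating function: it encodes the claim that past some point further gains add nothing, rather than disguising unlimited caring inside a bounded range.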