
Luke_A_Somers comments on Superintelligence 23: Coherent extrapolated volition - Less Wrong Discussion

5 Post author: KatjaGrace 17 February 2015 02:00AM


Comments (97)


Comment author: Luke_A_Somers 23 February 2015 11:47:52AM, 1 point

I'm talking about a real and important distinction, which is the degree of freedom in values to give the next generation. Under standard CEV, it's zero.

No, it's not.

Zero is the number of degrees of freedom in the AI's utility function, not in the next generation's utility functions.

Comment author: PhilGoetz 11 March 2015 04:37:25PM, 0 points

When using the parent-child relationship as an instance of CEV, it is. The child takes the position of the AI.

Comment author: Luke_A_Somers 11 March 2015 05:28:28PM, 1 point

You've completely lost me. Do you mean, this AI is our child? Do you mean that the way we will have children in a more conventional sense will be an instance of CEV?

If the former, I don't see a moral problem. A singleton doesn't get to be a person, even if it contains multitudes (much as the USA does not get to be a person, though I would hope a singleton would function better).

If the latter... words fail me, at least for the moment, and I will wait for your confirmation before trying again.