
Luke_A_Somers comments on Superintelligence 23: Coherent extrapolated volition - Less Wrong Discussion

Post author: KatjaGrace 17 February 2015 02:00AM


Comment author: Luke_A_Somers 17 February 2015 04:52:21PM 2 points

The 'if we were smarter, thought clearer, etc. etc.' seems to be asking it to go beyond us.

What else do you mean by 'growing up', and why should we value it if it isn't something we'd approve of?

Comment author: PhilGoetz 20 February 2015 02:50:33AM 1 point

There isn't a clear distinction, but CEV is exactly what the Amish have done. They took the values they had in the 18th century, tried to figure out what the minimal, essential values behind them were, and then developed a system for using those core values to extrapolate the Amish position on new developments, like electricity, the telephone, gasoline engines, the Internet, etc. It isn't a simple rejection of new things; they have an eclectic selection of new things that may be used in certain ways or for certain purposes.

Comment author: Luke_A_Somers 20 February 2015 05:48:42PM 1 point

This is an interesting clarification of your earlier point, but I don't see how it responds to what I said.

For one thing, you're ignoring the 'if we were smarter, thought clearer' part, which of course the Amish can't do, since they're human.

But really, you just gave one negative example. Okay, being Amish is not growing up. What is growing up, and why would we predictably fail to value it while still finding it proper to object to its not being valued?

Comment author: PhilGoetz 20 February 2015 07:08:47PM 1 point

When you let your kids grow up, you accept that they won't do things the way you want them to. They will have other values. You don't try to optimize them for your own value system.

Retaining values is one thing. FAI / CEV is designed to maximize a utility function based on your values. It corresponds to brainwashing your kids to have all of your values and stay as close to your value system as possible. Increasing smartness is beside the point.

Comment author: Luke_A_Somers 21 February 2015 03:36:46AM 1 point

If we value them getting to go and make their own choices, then that will be included in CEV.

If we do not value them being brainwashed, then that will not be included in CEV.

I strongly suspect that both of these are the case.

Comment author: PhilGoetz 23 February 2015 06:33:19AM 2 points

I know that is the standard answer. I tried to discourage people from making it by saying, in the parent comment,

I know somebody's going to say, "Well, then that's your utility function!"

I'm talking about a real and important distinction: the degree of freedom in values granted to the next generation. Under standard CEV, it's zero.

I don't think that parameter, the degree of freedom, should be thought of as a value we can plug any number we like into. It should be thought of as a parameter of the system, one that has a predictable impact on the efficacy of the CEV system regardless of what values that system is implementing.

I don't think people allow their children the freedom to make up their own minds because they value them doing so. They do it because we have centuries of experience showing that zero-freedom CEV doesn't work. The oft-attempted process of getting kids to hold the same values as their parents, just modified for the new environment, always turns out badly.

Comment author: Luke_A_Somers 23 February 2015 11:47:52AM 1 point

I'm talking about a real and important distinction, which is the degree of freedom in values to give the next generation. Under standard CEV, it's zero.

No, it's not.

Zero is the number of degrees of freedom in the AI's utility function, not in the next generation's utility functions.

Comment author: PhilGoetz 11 March 2015 04:37:25PM 0 points

When using the parent-child relationship as an instance of CEV, it is zero. The child takes the position of the AI.

Comment author: Luke_A_Somers 11 March 2015 05:28:28PM 1 point

You've completely lost me. Do you mean that this AI is our child? Or do you mean that having children in the conventional sense will be an instance of CEV?

If the former, I don't see a moral problem. A singleton doesn't get to be a person, even if it contains multitudes (much as the USA does not get to be a person, though I would hope a singleton would function better).

If the latter... words fail me, at least for the moment, and I will wait for your confirmation before trying again.