Vladimir_Nesov comments on An Xtranormal Intelligence Explosion - Less Wrong

Post author: James_Miller 07 November 2010 11:42PM




Comment author: Vladimir_Nesov 08 November 2010 06:52:15PM *  4 points

Is it really so insane to think that we could instill the same respect-for-the-authentic-but-less-than-perfect in a machine that we create?

We could. But should we? (And how is it even relevant to your original comment? This seems to be a separate argument for roughly the same conclusion. What about the original argument? Do you agree it's flawed (that is, that an AI can in fact out-native the natives)?)

See also discussion of Waser's post, in particular second paragraph of my comment here:

If you consider a single top-level goal, then disclaimers about subgoals are unnecessary. Instead of saying "Don't overly optimize any given subgoal (at the expense of the other subgoals)", just say "Optimize the top-level goal". This is simpler and tells you what to do, as opposed to what not to do, with the latter suffering from all the problems of nonapples.

Comment deleted 08 November 2010 07:06:56PM *
Comment author: Vladimir_Nesov 08 November 2010 07:12:09PM *  0 points

You don't want to elevate not optimizing something too much as a goal (and it's difficult to say what that would mean), while just working on optimizing the top-level goal unpacks this impulse as appropriate. Authenticity could be an instrumental goal, but is of little relevance when we discuss values or decision-making in sufficiently general context (i.e. not specifically the environments where we have revealed preference for authenticity despite it not being a component of top-level goal).

Comment deleted 08 November 2010 07:19:20PM *
Comment author: Vladimir_Nesov 08 November 2010 07:32:55PM *  0 points

you don't want to elevate not optimizing something too much as a goal (and it's difficult to say what that would mean), while just working on optimizing the top-level goal unpacks this impulse as appropriate.

For example, do I parse it as "to elevate not optimizing something too much", or as "don't want ... too much"? And what impulse is "this impulse"?

There is a valid intuition ("impulse") that in certain contexts, some sub-goals, such as "replace old buildings with better new ones", shouldn't be given too much power, as that would lead to bad consequences according to other aspects of their evaluation (e.g. we lose an architectural masterpiece).

To unpack, or cash out, an intuition means to create a more explicit model of the reasons behind its validity (to the extent it's valid). Modeling the above intuition as "optimizing too strongly is undesirable" is incorrect, and so one shouldn't embrace this principle of not optimizing things too much with high priority ("elevate").

Instead, just trying to figure out what the top-level goal asks for, and optimizing for the overall top-level goal without ever forgetting what it is, is the way to go. Acting exclusively for the top-level goal explains the intuition as well: if you optimize a given sub-goal too much, it probably indicates that you have forgotten the overall goal and are working on something different instead, and that shouldn't be done.
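The point above can be sketched in a toy model: maximize a single top-level value function over alternatives, instead of maximizing any one sub-goal in isolation. Everything here is hypothetical (the alternative names, aspect scores, and weights are made up for illustration only):

```python
# Toy sketch: choosing by the top-level goal rather than a sub-goal.
# All names and numbers are invented for illustration.

# Each alternative in the buildings example is scored on two aspects:
# "modernity" (the "replace old buildings" sub-goal) and "heritage".
alternatives = {
    "demolish_and_rebuild": {"modernity": 0.9, "heritage": 0.0},
    "restore_masterpiece":  {"modernity": 0.3, "heritage": 0.9},
    "do_nothing":           {"modernity": 0.1, "heritage": 0.5},
}

def top_level_value(scores):
    # The top-level goal weighs every aspect of the evaluation;
    # the 0.5/0.5 weights are arbitrary placeholders.
    return 0.5 * scores["modernity"] + 0.5 * scores["heritage"]

# Maximizing the sub-goal alone ignores the rest of what we value...
best_by_subgoal = max(alternatives, key=lambda a: alternatives[a]["modernity"])

# ...while optimizing the top-level goal needs no separate
# "don't over-optimize any sub-goal" disclaimer.
best_by_top_level = max(alternatives, key=lambda a: top_level_value(alternatives[a]))

print(best_by_subgoal)    # demolish_and_rebuild
print(best_by_top_level)  # restore_masterpiece
```

In this toy setup the sub-goal maximizer demolishes the masterpiece, while the top-level evaluation keeps it, without any explicit "don't optimize too much" rule.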

Comment author: DaveX 08 November 2010 08:04:37PM 0 points

Conflicts between subgoals indicate premature fixation on alternative solutions. The alternatives shouldn't be prioritized as goals in and of themselves. The other aspects of their evaluation would fit better as goals or subgoals to be optimized. A goal should give you guidance for choosing between alternatives.

In your example, one might ask: what goal can one optimize to help make good decisions between policies like "replace old buildings with better ones" and "don't lose architectural masterpieces"?

Comment deleted 08 November 2010 07:50:23PM *
Comment author: Vladimir_Nesov 08 November 2010 08:13:06PM 1 point

If you keep stuff in a museum, instead of using its atoms for something else, you are in effect avoiding optimization of that stuff. There could be a valid reason for that (the stuff in the museum remaining where it is happens to be optimal in context), or a wrong one (preserving stuff is valuable in itself).

One idea similar to what I guess you are talking about, which I believe holds some water, is sympathy/altruism. If human values are such that we value the well-being of sufficiently human-like persons, then any such person will receive a comparatively huge chunk of resources from a rich human-valued agent, compared to what it'd get on game-theoretic grounds alone (where one option is to get disassembled if you are weak), for use according to their own values, which differ from our agent's. This could possibly be made real, although it's rather sketchy at this point.

Meta:

I am puzzled by many things here. One is how we two managed to make this thread so incoherent.

Of the events I did understand, there was one miscommunication, my fault for not making my reference clearer. It's now edited out. Other questions are still open.

Ah, I get it now. You saw my phrase "respect for the authentic but less than perfect" as an intuition in favor of not "overdoing" the optimizing. Believe me, it wasn't.

I can't believe what I don't understand.

Comment author: Perplexed 08 November 2010 08:20:07PM 1 point

I can't believe what I don't understand.

And I should stop responding to comments that I don't understand. Sorry we wasted each other's time here.

Comment author: Vladimir_Nesov 08 November 2010 08:21:36PM 4 points

And I should stop responding to comments that I don't understand.

Talking more generally improves understanding.

Comment author: Perplexed 08 November 2010 08:33:11PM 1 point

Talking more generally improves understanding.

I find that listening often works better. But it depends on whom you listen to.