Eugine_Nier comments on Moral Error and Moral Disagreement - Less Wrong

Post author: Eliezer_Yudkowsky 10 August 2008 11:32PM

Comment author: wedrifid 18 November 2010 07:53:37AM 11 points [-]

The question why anyone would ever sincerely want to build an AI which extrapolates anything other than their personal volition is still unclear to me. It hinges on the definition of "sincerely want". If Eliezer can task the AI with looking at humanity and inferring its best wishes, why can't he task it with looking at himself and inferring his best idea of how to infer humanity's wishes?

This has been my thought exactly. Barring all but the most explicit convolution, any given person would prefer their own personal volition to be extrapolated. If by happenstance I should be altruistically and perfectly infatuated with, say, Sally, then that's the FAI's problem. It will turn out that extrapolating my volition will then entail extrapolating Sally's volition. The same applies to caring about 'humanity', whatever that fuzzy concept means when taken in the context of unbounded future potential.

I am also not sure how to handle those who profess an ultimate preference for a possible AI that extrapolates something other than their own volition. I mean, clearly they are either lying, crazy, or naive. It seems safer to trust someone who says "I would ultimately prefer FAI<someone> but I am creating FAI<larger group including wedrifid> for the purpose of effective cooperation."

Similarly, if someone wanted to credibly signal altruism to me it would be better to try to convince me that CEV<someone> has a lot of similarities with CEV<benefactor> that arise due to altruistic desires, rather than saying that they truly sincerely prefer CEV<someone, benefactor>. Because the latter is clearly bullshit of some sort.

How do we determine, in general, which things a document like CEV must spell out, and which things can/should be left to the mysterious magic of "intelligence"?

I have no idea, I'm afraid.

Comment author: Eugine_Nier 18 November 2010 08:29:45AM 8 points [-]

Eliezer appears to be asserting that CEV<someone> is equal for all humans. His arguments leave something to be desired. In particular, this is an assertion about human psychology, and requires evidence that is entangled with reality.

Leaving aside the question of whether even a single human's volition can be extrapolated into a unique coherent utility function, this assertion has two major components:

1) humans are sufficiently altruistic that, say, CEV<Alice> doesn't in any way favor Alice over Bob.

2) humans are sufficiently similar that any apparent moral disagreement between Alice and Bob is caused by one or both having false beliefs about the physical world.

I find both these statements dubious, especially the first, since I see no reason why evolution would make us that altruistic.

Comment author: Perplexed 18 November 2010 06:49:40PM 1 point [-]

Eliezer appears to be asserting that CEV<someone> is equal for all humans.

The phrase "is equal for all humans" is ambiguous. Even if all humans had identical psychologies, that could still all be selfish. The scare-quoted "source code" for Values<Eliezer> and Values<Archimedes> might be identical, but I think that both will involve self "pointers" resolving to Eliezer in one case and to Archimedes in the other.

We can define two persons' values to be "parametrically identical" if they can be expressed in the same "source code", but the code contains one or more parameters which are interpreted differently for different persons. A self pointer is one obvious parameter that we might be prepared to permit in "coherent" human values. That people are somewhat selfish does not necessarily conflict with our goal of determining a fair composite CEV of mankind - there are obvious ways of combining selfish values into composite values by giving "equal weight" (more scare quotes) to the values of each person.
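
To make "parametric identity" concrete, here is a toy sketch in Python (the particular weights and the dictionary representation of a "world" are my own arbitrary illustrative choices, not anything from the CEV document):

    # Everyone runs the same value "source code"; only the `self_pointer`
    # parameter resolves differently for each person.
    def shared_value_code(world, self_pointer):
        my_welfare = world.get(self_pointer, 0)
        others_welfare = sum(w for person, w in world.items() if person != self_pointer)
        # Mostly selfish, mildly altruistic - identical weights for everyone.
        return 0.9 * my_welfare + 0.1 * others_welfare

    world = {"Eliezer": 5, "Archimedes": 3}
    print(shared_value_code(world, "Eliezer"))     # 4.8
    print(shared_value_code(world, "Archimedes"))  # 3.2

The code is identical for both people; the difference in what each one values comes entirely from where the self pointer resolves.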

The question then arises: are there other parameters we should expect besides self? I believe there are. One of them can be called the now pointer - it designates the current point in time. The now pointer in Values<Archimedes> resolves to ~250 BC, whereas the one in Values<Eliezer> resolves to ~2010 AD. Both are allowed to be more interested in the present and immediate future than in the distant future. (Whether they should be interested at all in the recent past is an interesting question, but somewhat orthogonal to the present topic.)

How do we combine the now pointers of different persons when constructing a CEV for mankind? Do we do it by assigning "equal weights" to the now of each person, as we did for the self pointers? I believe this would be a mistake. What we really want, I believe, is a weighting scheme which changes over time - a system of exponential discounting. Actions taken by an FAI in the year 2100 should mostly be for the satisfaction of the desires of people alive in 2100. The FAI will give some consideration in 2100 to the situation in 2110, because the people around in 2100 will also be interested in 2110 to some extent. It will (in 2100) give less consideration to the prospects in 2200, because people in 2100 will not be that interested in 2200. "After all", they will rationally say to themselves, "we will be paying the year 2200 its due attention in 2180, and 2190, and especially 2199. Let the future care for itself. It certainly isn't going to care for us!"
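
To make the weighting scheme concrete, here is a rough sketch (the 5% annual rate and the particular years are purely illustrative assumptions):

    import math

    def time_weight(target_year, now_year, annual_rate=0.05):
        # Weight the FAI gives, from the perspective of people alive in
        # `now_year`, to outcomes in `target_year`.  The "now pointer" moves,
        # so the same target year is re-weighted as time passes.
        years_ahead = max(0, target_year - now_year)
        return math.exp(-annual_rate * years_ahead)

    # Evaluated from 2100: 2110 still matters, 2200 barely registers.
    for target in (2100, 2110, 2200):
        print(target, round(time_weight(target, now_year=2100), 3))   # 1.0, 0.607, 0.007

    # But by 2180 the now pointer has moved, and 2200 gets its due attention.
    print(round(time_weight(2200, now_year=2180), 3))                 # 0.368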

There are various other parameters that may appear in the idealized common "source code" for Values<person>. For example, there may be different preferences regarding the discount rate used in the previous paragraph, and there may be different preferences regarding the "Malthusian factor" - how many biological descendants or clones one accumulates and how fast. It is not obvious to me whether we need to come up with rules for combining these into a CEV or whether the composite versions of these parameters fall out automatically from the rules for combining self and now parameters.

Sorry for the long response, but your comment inspired me.

Comment author: timtyler 18 November 2010 11:55:19PM *  -1 points [-]

What we really want, I believe, is a weighting scheme which changes over time - a system of exponential discounting. Actions taken by an FAI in the year 2100 should mostly be for the satisfaction of the desires of people alive in 2100. The FAI will give some consideration in 2100 to the situation in 2110, because the people around in 2100 will also be interested in 2110 to some extent. It will (in 2100) give less consideration to the prospects in 2200, because people in 2100 will not be that interested in 2200. "After all", they will rationally say to themselves, "we will be paying the year 2200 its due attention in 2180, and 2190, and especially 2199.

I don't think you need a "discounting" scheme. Or at least, you would get what is needed there "automatically" - if you just maximise expected utility. The same way Deep Blue doesn't waste its time worrying about promoting pawns on the first move of the game - even if you give it the very long term (and not remotely "discounted") goal of winning the whole game.

Comment author: Perplexed 19 November 2010 12:18:47AM 1 point [-]

I don't think you need a "discounting" scheme. Or at least, you would get what is needed there "automatically" - if you just maximise expected utility.

Could you explain why you say that? I can imagine two possible reasons why you might, but they are both wrong. Your "Deep Blue" example suggests that you are laboring under some profound misconceptions about utility theory and the nature of instrumental values.

Comment author: timtyler 19 November 2010 08:04:19AM *  -1 points [-]

This is this one again. You don't yet seem to agree with it - and it isn't clear to me why not.

Comment author: Perplexed 19 November 2010 04:47:47PM 0 points [-]

Nor is it clear to me why you did not respond to my question / request for clarification.

Comment author: timtyler 19 November 2010 08:26:40PM 1 point [-]

I did respond. I didn't have an essay on the topic prepared - but Yu-El did, so I linked to that.

If you want to hear it in my own words:

Wiring in temporal discounting is usually bad - since the machine can usually figure out what temporal discounting is appropriate for its current circumstances and abilities much better than you can. It is the same as with any other type of proximate goal.

Instead you are usually best off just telling the machine your preferences about the possible states of the universe.

If you are thinking you want the machine to mirror your own preferences, then I recommend that you consider carefully whether your ultimate preferences include temporal discounting - or whether all that is just instrumental.

Comment author: Perplexed 20 November 2010 12:53:22AM *  1 point [-]

I did respond.

I don't see how. My question was:

Could you explain why you say that?

Referring to this that you said:

Or at least, you would get what is needed there [instead of discounting] "automatically" - if you just maximise expected utility.

You have still not explained why you said this. The question that discounting answers is, "Which is better: saving 3 lives today or saving 4 lives in 50 years?" That is the same question as "Which of the two has the higher expected utility in current utilons?" We want to maximize expected current utility regardless of what we decide regarding discounting.
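
To make the dependence on discounting explicit, here is a toy calculation (the discount rates are arbitrary illustrative choices):

    import math

    def present_value(lives, years_from_now, annual_rate):
        # Value in "current utilons" under exponential discounting.
        return lives * math.exp(-annual_rate * years_from_now)

    for rate in (0.0, 0.005, 0.006, 0.03):
        now, later = present_value(3, 0, rate), present_value(4, 50, rate)
        print(f"rate={rate:.3f}: 3 lives now = {now:.2f}, 4 lives in 50y = {later:.2f}")
    # rate 0.000 and 0.005: the 4 future lives win (4.00 and 3.11 > 3.00);
    # rate 0.006 and above: the 3 present lives win (2.96 and 0.89 < 3.00).

The expected-current-utility answer flips with the rate, which is why saying "just maximise expected utility" does not by itself settle the discounting question.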

However, since you do bring up the idea of maximizing expected utility, I am very curious how you can simultaneously claim (elsewhere on this thread) that utilities are figures of merit attached to actions rather than outcomes. Are you suggesting that we should be assessing our probability distribution over actions and then adding together the products of those probabilities with the utility of each action?

Comment author: timtyler 20 November 2010 08:56:24AM *  1 point [-]

Many factors "automatically" lead to temporal discounting if you don't wire it in. The list includes:

  • Agents are mortal - they might die before the future utility arrives;
  • Agents exhibit senescence - the present is more valuable to them than the future, because they are younger and more vital now than they will be later;
  • The future is uncertain - agents have limited capacities to predict the future;
  • The future is hard to predictably influence by actions taken now.

I think considerations such as the ones listed above adequately account for most temporal discounting in biology - though it is true that some of it may be the result of adaptations to deal with resource-limited cognition, or just plain stupidity.
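
As a minimal sketch of the mortality point alone (the survival probabilities are made-up illustrative numbers): an agent that maximises plain, undiscounted expected utility, but which only survives each year with probability p, ends up weighting a payoff t years away by p^t - an exponential discount that arises "automatically" and weakens as the agent becomes longer-lived.

    def expected_value(utility, years_ahead, annual_survival_prob):
        # No wired-in discounting: just utility times P(still alive to collect it).
        return utility * (annual_survival_prob ** years_ahead)

    for survival in (0.99, 0.999999):   # a mortal agent vs. a near-immortal one
        weights = [expected_value(1.0, t, survival) for t in (0, 10, 100)]
        print(survival, [round(w, 3) for w in weights])
    # 0.99      -> [1.0, 0.904, 0.366]   (steep effective discounting)
    # 0.999999  -> [1.0, 1.0, 1.0]       (discounting effectively vanishes)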

Note that the list is dominated by items that are a function of the capabilities and limitations of the agent in question. If the agent conquers senescence, becomes immortal, or improves its ability to predict or predictably influence the future, then the factors all change around. This naturally results in a different temporal discounting scheme - so long as it has not previously been wired into the agent by myopic forces.

Basically, temporal discounting can often usefully be regarded as instrumental. Like energy, or gold, or warmth. You could specify how much each of these things is valued as well - but if you don't, they will be assigned instrumental value anyway. Unless you think you know their practical value better than a future superintelligent agent, perhaps you are better off leaving such issues to it. Tell the agent what state of affairs you actually want - and let it figure out the details of how best to get it for you.

Temporal discounting contrasts with risk aversion in this respect.

Comment author: timtyler 20 November 2010 09:19:59AM *  1 point [-]

Regarding utility, utilities are just measures of satisfaction. They can be associated with anything.

It is a matter of fact that utilities are associated with actions in most agents - since agents have evolved to calculate utilities in order to allow them to choose between their possible actions.

I am not claiming that utilities are never associated with outcomes. Utilities are frequently linked to outcomes - since most evolved agents are made in such a way that they like to derive satisfaction by manipulating the external world.

However, nowhere in the definition of utility does it say that utilities are necessarily associated with external-world outcomes. Indeed, in the well-known phenomena of "wireheading" and "drug-taking" utility is divorced from external-world outcomes - and deliberately manufactured.

Comment author: Jack 19 November 2010 12:31:36PM 0 points [-]

The same way Deep Blue doesn't waste its time worrying about promoting pawns on the first move of the game - even if you give it the very long term (and not remotely "discounted") goal of winning the whole game.

Is this really true? My understanding is that Deep Blue's position evaluation function was determined by an analysis of hundreds of thousands of games. Presumably it ranked openings which had a tendency to produce more promotion opportunities higher than openings which tended to produce fewer promotion opportunities (all else being equal and assuming promoting pawns correlates with wins).

Comment author: timtyler 19 November 2010 08:40:45PM *  0 points [-]

I wasn't talking about that - I meant that it doesn't evaluate board positions with promoted pawns at the start of the game, even though these are common positions in complete chess games. Anyway, forget that example if you don't like it; the point it illustrates is unchanged.

Comment author: timtyler 20 November 2010 10:16:04AM 1 point [-]

Eliezer appears to be asserting that CEV<someone> is equal for all humans.

The "C" in "CEV" stands for "Coherent". The concept refers to techniques of combining the wills of a bunch of agents. The idea is not normally applied to a population consisting of single human. That would just be EV<someone>. I am not aware of any evidence that Yu-El thinks that EV<someone> is independent of the <someone>.