Perplexed comments on Moral Error and Moral Disagreement - Less Wrong

Post author: Eliezer_Yudkowsky 10 August 2008 11:32PM

Comment author: timtyler 18 November 2010 11:55:19PM *  -1 points [-]

What we really want, I believe, is a weighting scheme which changes over time - a system of exponential discounting. Actions taken by an FAI in the year 2100 should mostly be for the satisfaction of the desires of people alive in 2100. The FAI will give some consideration in 2100 to the situation in 2110 because the people around in 2100 will also be interested in 2110 to some extent. It will (in 2100) give less consideration to the prospects in 2200, because people in 2100 will not be that interested in 2200. "After all", they will rationally say to themselves, "we will be paying the year 2200 its due attention in 2180, and 2190, and especially 2199."

I don't think you need a "discounting" scheme. Or at least, you would get what is needed there "automatically" - if you just maximise expected utility. The same way Deep Blue doesn't waste its time worrying about promoting pawns on the first move of the game - even if you give it the very long term (and not remotely "discounted") goal of winning the whole game.
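
A minimal sketch of how that "automatic" discounting can fall out of plain expected-utility maximisation, under a toy assumption that is mine rather than the comment's: suppose each step into the future can only be predicted or influenced with probability p. An agent maximising undiscounted expected utility then effectively weights a reward k steps ahead by p^k - an exponential discount that was never wired in, and one that flattens as the agent becomes longer-lived or better at prediction (the point made further down the thread).

    # Toy model, not from the original comment: each future time step can be
    # predicted or influenced only with probability p, so the expected
    # contribution of a reward k steps ahead is p**k times that reward.
    def effective_weight(p_per_step: float, steps_ahead: int) -> float:
        """Emergent weight on a reward `steps_ahead` steps in the future."""
        return p_per_step ** steps_ahead

    # A more capable (longer-lived, better-predicting) agent has a larger p,
    # and its emergent discounting flattens accordingly.
    for p in (0.9, 0.99, 0.999):
        print(p, [round(effective_weight(p, k), 3) for k in (1, 10, 100)])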

Comment author: Perplexed 19 November 2010 12:18:47AM 1 point [-]

I don't think you need a "discounting" scheme. Or at least, you would get what is needed there "automatically" - if you just maximise expected utility.

Could you explain why you say that? I can imagine two possible reasons why you might, but they are both wrong. Your "Deep Blue" example suggests that you are laboring under some profound misconceptions about utility theory and the nature of instrumental values.

Comment author: timtyler 19 November 2010 08:04:19AM *  -1 points [-]

This is this one again. You don't yet seem to agree with it - and it isn't clear to me why not.

Comment author: Perplexed 19 November 2010 04:47:47PM 0 points [-]

Nor is it clear to me why you did not respond to my question / request for clarification.

Comment author: timtyler 19 November 2010 08:26:40PM 1 point [-]

I did respond. I didn't have an essay on the topic prepared - but Yu-El did, so I linked to that.

If you want to hear it in my own words:

Wiring in temporal discounting is usually bad - since the machine can usually figure out what temporal discounting is appropriate for its current circumstances and abilities much better than you can. It is the same as with any other type of proximate goal.

Instead you are usually best off just telling the machine your preferences about the possible states of the universe.

If you are thinking you want the machine to mirror your own preferences, then I recommend that you consider carefully whether your ultimate preferences include temporal discounting - or whether all that is just instrumental.

Comment author: Perplexed 20 November 2010 12:53:22AM *  1 point [-]

I did respond.

I don't see how. My question was:

Could you explain why you say that?

Referring to this that you said:

Or at least, you would get what is needed there [instead of discounting] "automatically" - if you just maximise expected utility.

You have still not explained why you said this. The question that discounting answers is, "Which is better: saving 3 lives today or saving 4 lives in 50 years?" Which is the same question as "Which of the two has the higher expected utility in current utilons?" We want to maximize expected current utility regardless of what we decide regarding discounting.
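
For concreteness, a worked version of that question (the numbers are illustrative, not from the comment): with an annual discount factor \delta, the two options are valued equally when

    4\,\delta^{50} = 3 \quad\Longrightarrow\quad \delta = (3/4)^{1/50} \approx 0.9943,

i.e. an annual discount rate of roughly 0.57%; any smaller rate favours the four future lives, any larger rate the three present ones.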

However, since you do bring up the idea of maximizing expected utility, I am very curious how you can simultaneously claim (elsewhere on this thread) that utilities are figures of merit attached to actions rather than outcomes. Are you suggesting that we should be assessing our probability distribution over actions and then adding together the products of those probabilities with the utility of each action?

Comment author: timtyler 20 November 2010 08:56:24AM *  1 point [-]

Many factors "automatically" lead to temporal discounting if you don't wire it in. The list includes:

  • Agents are mortal - they might die before the future utility arrives;
  • Agents exhibit senescence - the present is more valuable to them than the future, because they are younger and more vital now than they will be later;
  • The future is uncertain - agents have limited capacities to predict the future;
  • The future is hard to predictably influence by actions taken now.

I think considerations such as the ones listed above adequately account for most temporal discounting in biology - though it is true that some of it may be the result of adaptations to deal with resource-limited cognition, or just plain stupidity.

Note that the list is dominated by items that are a function of the capabilities and limitations of the agent in question. If the agent conquers senescence, becomes immortal, or improves its ability to predict or predictably influence the future, then the factors all change around. This naturally results in a different temporal discounting scheme - so long as it has not previously been wired into the agent by myopic forces.

Basically, temporal discounting can often usefully be regarded as instrumental. Like energy, or gold, or warmth. You could specify how much each of these things is valued as well - but if you don't they will be assigned instrumental value anyway. Unless you think you know their practical value better than a future superintelligent agent, perhaps you are better off leaving such issues to it. Tell the agent what state of affairs you actually want - and let it figure out the details of how best to get it for you.

Temporal discounting contrasts with risk aversion in this respect.

Comment author: Perplexed 20 November 2010 04:25:58PM 0 points [-]

Basically, temporal discounting can often usefully be regarded as instrumental.

Quite true. I'm glad you included that word "often". Now we can discuss the real issue: whether that word "often" should be changed to "always" as EY and yourself seem to claim. Or whether utility functions can and should incorporate the discounting of the value of temporally distant outcomes and pleasure-flows for reasons over and above considerations of instrumentality.

Temporal discounting contrasts with risk aversion in this respect.

A useful contrast/analogy. You seem to be claiming that risk aversion is not purely instrumental; that it can be fundamental; that we need to ask agents about their preferences among risky alternatives, rather than simply axiomatizing that a rational agent will be risk neutral.

But I disagree that this is in contrast to the situation with temporal discounting. We need to allow that rational and moral agents may discount the value of future outcomes and flows for fundamental, non-instrumental reasons. We need to ask them. This is particularly the case when we consider questions like the moral value of a human life.

The question before us is whether I should place the same moral value now on a human life next year and a human life 101 years from now. I say 'no'; EY (and you?) say yes. What is EY's justification for his position? Well, he might invent a moral principle that he might call "time invariance of moral value" and assert that this principle absolutely forces me to accept the equality:

  • value@t(life@t+1) = value@t(life@t+101).

I would counter that EY is using the invalid "strong principle of time invariance". If one uses the valid "weak principle of time invariance" then all that we can prove is that:

  • value@t(life@t+1) = value@t+100(life@t+101)

So, we need another moral principle to get to where EY wants to go. EY postulates that the moral discount rate must be zero. I simply reject this postulate (as would the bulk of mankind, if asked). EY and I can both agree to a weaker postulate, "time invariance of moral preference". But this only shows that the discounting must be exponential in time; it doesn't show that the rate must be zero.
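
Putting the two principles in symbols (the notation is a gloss on the bullets above, not the comment's own): write v_t(x_{t+k}) for the value assigned at time t to an outcome occurring at time t+k. Then

    \text{strong invariance: } v_t(x_{t+k}) = v_t(x_{t+k+\Delta}) \quad \text{for all } \Delta,
    \text{weak invariance: } v_t(x_{t+k}) = v_{t+\Delta}(x_{t+k+\Delta}) \quad \text{for all } \Delta,

and the weaker "time invariance of moral preference" only forces the exponential form v_t(x_{t+k}) = \delta^{k}\, u(x) for some fixed \delta; EY's postulate amounts to the further claim that \delta = 1.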

Neither EY nor you has provided any reason (beyond bare assertion) why the moral discount rate should be set to zero. Admittedly, I have yet to give any reason why it should be set elsewhere. This is not the place to do that. But I will point out that a finite discount rate permits us to avoid the mathematical absurdities arising from undiscounted utilities with an unbounded time horizon. EY says "So come up with better math!" - a response worth taking seriously. But until we have that better math in hand, I am pretty sure EY is wearing the crackpot hat here, not me.
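
The "mathematical absurdity" alluded to is the standard divergence problem, spelled out here for completeness: with no discounting and an unbounded horizon, any everlasting stream of positive utility sums to

    \sum_{k=0}^{\infty} u = \infty \quad (u > 0),

so comparisons between such streams break down, whereas with any \delta < 1 the sum \sum_{k=0}^{\infty} \delta^{k} u = u/(1-\delta) is finite.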

Comment author: timtyler 20 November 2010 05:03:04PM *  1 point [-]

Now we can discuss the real issue: whether that word "often" should be changed to "always" as EY and yourself seem to claim.

You can specify a method of temporal discounting if you really want to. Just as you can specify a value for collecting gold atoms if you really want to. However, there are side effects and problems associated with introducing unnecessary constraints.

We need to allow that rational and moral agents may discount the value of future outcomes and flows for fundamental, non-instrumental reasons. We need to ask them.

If we think that such creatures are common and if we are trying to faithfully mirror and perpetuate their limitations, you mean.

Neither EY nor you has provided any reason (beyond bare assertion) why the moral discount rate should be set to zero.

I don't really see this as a "should" question. However, there are consequences to wiring in instrumental values. You typically wind up with a handicapped superintelligence. I thought I already gave this as my reasoning, with comments such as "unless you think you know their practical value better than a future superintelligent agent, perhaps you are better off leaving such issues to it."

I will point out that a finite discount rate permits us to avoid the mathematical absurdities arising from undiscounted utilities with an unbounded time horizon.

Not a practical issue - IMO. We are resource-limited creatures, who can barely see 10 years into the future. Instrumental temporal discounting protects us from infinite maths with great effectiveness.

This is the same as in biology. Organisms act as though they want to become ancestors - not just parents or grandparents. That is the optimisation target, anyway. However, instrumental temporal discounting protects them from far-future considerations with great effectiveness.

Comment author: Perplexed 20 November 2010 05:28:32PM 1 point [-]

there are consequences to wiring in instrumental values. You typically wind up with a handicapped superintelligence. I thought I already gave this as my reasoning ...

You did indeed. I noticed it, and meant to clarify that I am not advocating any kind of "wiring in". Unfortunately, I failed to do so.

My position would be that human beings often have discount factors "wired in" by evolution. It is true, of course, that like every other moral instinct analyzed by EvoPsych, the ultimate adaptationist evolutionary explanation of this moral instinct is somewhat instrumental, but this doesn't make it any less fundamental from the standpoint of the person born with this instinct.

As for moral values that we insert into AIs, these too are instrumental in terms of their final cause - we want the AIs to have particular values for our own instrumental reasons. But, for the AI, they are fundamental - though not necessarily 'wired in'. If we give the AI a fundamental meta-value that it should construct its own fundamental values by empirically constructing some kind of CEV of mankind - as I believe we should - then the AI will end up with a discount factor, because its human models have discount factors. But it won't be a wired-in or constant discount factor, because the discount factors of mankind may well change over time as the expected lifespan of humans changes, as people upload and choose to run at various rates, and as people are born or die.

I'm saying that we need to allow for an AI discount factor or factors which are not strictly instrumental, but which are not 'wired in' either. And especially not a wired-in discount factor of exactly zero!
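
A toy sketch of the arrangement described above - the aggregation rule (a plain average over the AI's human models) is my own placeholder, not something proposed in the thread:

    # Toy sketch, assumptions mine: the agent has no constant wired-in
    # discount factor; it re-derives one from its model of the current
    # population, so the factor drifts as that population changes.
    from statistics import mean

    def derived_discount(population_factors):
        """Aggregate the humans' discount factors; a plain mean is used
        purely as a placeholder for something CEV-like."""
        return mean(population_factors)

    mostly_mortal_2030 = [0.95, 0.97, 0.96]
    long_lived_2130   = [0.998, 0.999, 0.9995]
    print(derived_discount(mostly_mortal_2030))   # ~0.96
    print(derived_discount(long_lived_2130))      # ~0.9988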

Comment author: timtyler 20 November 2010 09:19:59AM *  1 point [-]

Regarding utility, utilities are just measures of satisfaction. They can be associated with anything.

It is a matter of fact that utilities are associated with actions in most agents - since agents have evolved to calculate utilities in order to allow them to choose between their possible actions.

I am not claiming that utilities are not frequently associated with outcomes. Utilities are frequently linked to outcomes - since most evolved agents are made so in such a way that they like to derive satisfaction by manipulating the external world.

However, nowhere in the definition of utility does it say that utilities are necessarily associated with external-world outcomes. Indeed, in the well-known phenomena of "wireheading" and "drug-taking" utility is divorced from external-world outcomes - and deliberately manufactured.

Comment author: Perplexed 20 November 2010 04:40:28PM 0 points [-]

utilities are just measures of satisfaction. They can be associated with anything.

True. But in most economic analysis, terminal utilities are associated with outcomes; the expected utilities that become associated with actions are usually instrumental utilities.

Nevertheless, I continue to agree with you that in some circumstances, it makes sense to attach terminal utilities to actions. This shows up, for example, in discussions of morality from a deontological viewpoint. For example, suppose you have a choice of lying or telling the truth. You assess the consequences of your actions, and are amused to discover that there is no difference in the consequences - you will not be believed in any case. A utilitarian would say that there is no moral difference in this case between lying and telling the truth. A Kant disciple would disagree. And the way he would explain this disagreement to the utilitarian would be to attach a negative moral utility to the action of speaking untruthfully.
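
A toy rendering of that example (the numbers are illustrative only): the consequences of lying and truth-telling are stipulated to be identical, so a purely outcome-based score cannot separate the two acts, while a score with a term attached to the act itself can.

    # Both acts lead to the same consequence (you are not believed either way).
    OUTCOME_UTILITY = {"not believed": 0.0}
    # The Kantian attaches a negative utility to the act of speaking untruthfully.
    ACTION_UTILITY = {"tell truth": 0.0, "lie": -1.0}

    def utilitarian_score(action):
        return OUTCOME_UTILITY["not believed"]          # same for both acts

    def deontological_score(action):
        return OUTCOME_UTILITY["not believed"] + ACTION_UTILITY[action]

    for action in ("tell truth", "lie"):
        print(action, utilitarian_score(action), deontological_score(action))
    # The utilitarian scores tie; the deontological scores do not.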

Comment author: timtyler 20 November 2010 06:32:41PM *  1 point [-]

Utilities are often associated with states of the world, yes. However, here you seemed to balk at utilities that were not so associated. I think such values can still be called "utilities" - and "utility functions" can be used to describe how they are generated - and the standard economic framework accommodates this just fine.

What this idea doesn't fit into is the von Neumann–Morgenstern system - since it typically violates the independence axiom. However, that is not the end of the world. That axiom can simply be binned - and fairly often it is.
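
For reference, the axiom in question, in its standard form (not quoted from the thread): for lotteries L, M, N and any p \in (0, 1],

    L \succeq M \;\iff\; pL + (1-p)N \succeq pM + (1-p)N.

Values that depend on the act of choosing, rather than only on the resulting lottery over outcomes, can violate this, which is the sense in which they fall outside the von Neumann-Morgenstern framework while still being usable as "utilities" in the looser sense described above.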

Comment author: Perplexed 20 November 2010 08:11:10PM 0 points [-]

What this idea doesn't fit into is the von Neumann–Morgenstern system - since it typically violates the independence axiom.

Unless you supply some restrictions, it is considerably more destructive than that. All axioms based on consequentialism are blown away. You said yourself that we can assign utilities so as to rationalize any set of actions that an agent might choose. I.e. there are no irrational actions. I.e. decision theory and utility theory are roughly as useful as theology.