timtyler comments on Moral Error and Moral Disagreement - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I did respond. I didn't have an essay on the topic prepared - but Yu-El did, so I linked to that.
If you want to hear it in my own words:
Wiring in temporal discounting is usually bad - since the machine can usually figure out what temporal discounting is appropriate for its current circumstances and abilities much better than you can. It is the same as with any other type of proximate goal.
Instead you are usually best off just telling the machine your preferences about the possible states of the universe.
If you are thinking you want the machine to mirror your own preferences, then I recommend that you consider carefully whether your ultimate preferences include temporal discounting - or whether all that is just instrumental.
I don't see how. My question was:
Referring to this that you said:
You have still not explained why you said this. The question that discounting answers is, "Which is better: saving 3 lives today or saving 4 lives in 50 years?" Which is the same question as "Which of the two has the higher expected utility in current utilons?" We want to maximize expected current utility regardless of what we decide regarding discounting.
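The trade-off in that question can be made concrete (a toy sketch with a discount rate and function name of my own choosing, not anything from the thread): under constant exponential discounting, whether 4 lives in 50 years beats 3 lives today depends entirely on the rate chosen.

```python
# Toy comparison: 3 lives saved now vs. 4 lives saved in 50 years,
# converted to "current utilons" by constant exponential discounting.

def present_value(future_value, annual_rate, years):
    """Discount a future utility back to the present at a constant annual rate."""
    return future_value / (1 + annual_rate) ** years

lives_now, lives_later, years = 3, 4, 50

for rate in (0.0, 0.01, 0.05):
    pv = present_value(lives_later, rate, years)
    choice = "save 4 later" if pv > lives_now else "save 3 now"
    print(f"rate={rate:.2f}: 4 lives in 50y = {pv:.2f} current utilons -> {choice}")
```

At a zero rate the 4 future lives win outright; at even 1% per year they are worth only about 2.4 current utilons and the 3 lives today win. The answer is not fixed by expected-utility maximization itself - it is fixed by the discounting assumption.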
However, since you do bring up the idea of maximizing expected utility, I am very curious how you can simultaneously claim (elsewhere on this thread) that utilities are figures of merit attached to actions rather than outcomes. Are you suggesting that we should be assessing our probability distribution over actions and then adding together the products of those probabilities with the utility of each action?
Many factors "automatically" lead to temporal discounting if you don't wire it in. The list includes:
I think considerations such as the ones listed above adequately account for most temporal discounting in biology - though it is true that some of it may be the result of adaptations to deal with resource-limited cognition, or just plain stupidity.
Note that the list is dominated by items that are a function of the capabilities and limitations of the agent in question. If the agent conquers senescence, becomes immortal, or improves its ability to predict or predictably influence the future, then the factors all change around. This naturally results in a different temporal discounting scheme - so long as it has not previously been wired into the agent by myopic forces.
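One way to see how discounting tracks an agent's limitations (a sketch with made-up numbers, not from the thread): an agent facing a constant annual hazard of death or loss of control should weight a reward t years out by its probability of surviving to collect it - which is exponential in t. Drive the hazard to zero, as an agent that conquers senescence would, and this instrumental discounting vanishes.

```python
# Sketch: instrumental discounting induced by a constant annual hazard rate h.
# Surviving each year with probability (1 - h) means a reward t years away
# should be weighted by the chance of still being around to collect it.

def survival_weight(hazard_rate, years):
    """Probability of surviving `years` years at a constant annual hazard rate."""
    return (1 - hazard_rate) ** years

print(survival_weight(0.02, 50))  # a mortal agent heavily discounts the far future
print(survival_weight(0.0, 50))   # an immortal agent does not discount at all
```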
Basically, temporal discounting can often usefully be regarded as instrumental. Like energy, or gold, or warmth. You could specify how much each of these things is valued as well - but if you don't they will be assigned instrumental value anyway. Unless you think you know their practical value better than a future superintelligent agent, perhaps you are better off leaving such issues to it. Tell the agent what state of affairs you actually want - and let it figure out the details of how best to get it for you.
Temporal discounting contrasts with risk aversion in this respect.
Quite true. I'm glad you included that word "often". Now we can discuss the real issue: whether that word "often" should be changed to "always" as EY and yourself seem to claim. Or whether utility functions can and should incorporate the discounting of the value of temporally distant outcomes and pleasure-flows for reasons over and above considerations of instrumentality.
A useful contrast/analogy. You seem to be claiming that risk aversion is not purely instrumental; that it can be fundamental; that we need to ask agents about their preferences among risky alternatives, rather than simply axiomatizing that a rational agent will be risk neutral.
But I disagree that this is in contrast to the situation with temporal discounting. We need to allow that rational and moral agents may discount the value of future outcomes and flows for fundamental, non-instrumental reasons. We need to ask them. This is particularly the case when we consider questions like the moral value of a human life.
The question before us is whether I should place the same moral value now on a human life next year and a human life 101 years from now. I say 'no'; EY (and you?) say 'yes'. What is EY's justification for his position? Well, he might invent a moral principle that he might call "time invariance of moral value" and assert that this principle absolutely forces me to accept the equality: V_now(life 1 year hence) = V_now(life 101 years hence).
I would counter that EY is using the invalid "strong principle of time invariance". If one uses the valid "weak principle of time invariance" then all that we can prove is that: V_now(life 1 year hence) = V_(100 years hence)(life 101 years hence).
So, we need another moral principle to get to where EY wants to go. EY postulates that the moral discount rate must be zero. I simply reject this postulate (as would the bulk of mankind, if asked). EY and I can both agree to a weaker postulate, "time invariance of moral preference". But this only shows that the discounting must be exponential in time; it doesn't show that the rate must be zero.
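The step from "time invariance of moral preference" to exponential discounting can be sketched as follows (my reconstruction of the standard stationarity argument, not the commenter's wording): if the discount weight applied to an outcome depends only on the length of the delay, then delaying by s and then by t must discount exactly as much as delaying by s + t at once:

```latex
D(s + t) = D(s)\,D(t), \qquad D(0) = 1
```

The only positive, measurable solutions to this functional equation are of the form D(t) = e^{-\rho t}. So the postulate fixes the *shape* of the discounting as exponential, but leaves the rate \rho free; \rho = 0 (no discounting) is just one point in that family, not a consequence of the postulate.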
Neither EY nor you has provided any reason (beyond bare assertion) why the moral discount rate should be set to zero. Admittedly, I have yet to give any reason why it should be set elsewhere. This is not the place to do that. But I will point out that a finite discount rate permits us to avoid the mathematical absurdities arising from undiscounted utilities with an unbounded time horizon. EY says "So come up with better math!" - a response worth taking seriously. But until we have that better math in hand, I am pretty sure EY is wearing the crackpot hat here, not me.
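The "mathematical absurdity" being pointed to can be made concrete (a toy illustration with numbers of my own choosing): a constant utility flow summed over an unbounded horizon diverges undiscounted, but converges to a finite value under any positive discount rate.

```python
# Toy example: total utility of a constant flow of 1 utilon per period.
# Undiscounted (delta = 1) the partial sums grow without bound; with any
# delta < 1 they converge to the geometric-series limit 1 / (1 - delta).

def total_utility(delta, periods):
    """Sum a constant unit utility flow over `periods` periods, discounted by delta."""
    return sum(delta ** t for t in range(periods))

print(total_utility(1.0, 10_000))   # grows linearly with the horizon: diverges
print(total_utility(0.95, 10_000))  # converges toward 1 / (1 - 0.95) = 20
```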
You can specify a method of temporal discounting if you really want to. Just as you can specify a value for collecting gold atoms if you really want to. However, there are side effects and problems associated with introducing unnecessary constraints.
If we think that such creatures are common and if we are trying to faithfully mirror and perpetuate their limitations, you mean.
I don't really see this as a "should" question. However, there are consequences to wiring in instrumental values. You typically wind up with a handicapped superintelligence. I thought I already gave this as my reasoning, with comments such as "unless you think you know their practical value better than a future superintelligent agent, perhaps you are better off leaving such issues to it."
Not a practical issue - IMO. We are resource-limited creatures, who can barely see 10 years into the future. Instrumental temporal discounting protects us from infinite maths with great effectiveness.
This is the same as in biology. Organisms act as though they want to become ancestors - not just parents or grandparents. That is the optimisation target, anyway. However, instrumental temporal discounting protects them from far-future considerations with great effectiveness.
You did indeed. I noticed it, and meant to clarify that I am not advocating any kind of "wiring in". Unfortunately, I failed to do so.
My position would be that human beings often have discount factors "wired in" by evolution. It is true, of course, that like every other moral instinct analyzed by EvoPsych, the ultimate adaptationist evolutionary explanation of this moral instinct is somewhat instrumental, but this doesn't make it any less fundamental from the standpoint of the person born with this instinct.
As for moral values that we insert into AIs, these too are instrumental in terms of their final cause - we want the AIs to have particular values for our own instrumental reasons. But, for the AI, they are fundamental. Yet not necessarily 'wired in'. If, as I believe we should, we give the AI a fundamental meta-value telling it to construct its own fundamental values by empirically constructing some kind of CEV of mankind, then the AI will end up with a discount factor, because its human models have discount factors. But it won't be a wired-in or constant discount factor, because the discount factors of mankind may well change over time - as the expected lifespan of humans changes, as people upload and choose to run at various rates, as people are born or as they die.
I'm saying that we need to allow for an AI discount factor or factors which are not strictly instrumental, but which are not 'wired in' either. And especially not a wired-in discount factor of exactly zero!
I think we want a minimally myopic superintelligence - and fairly quickly. We should not aspire to program human limitations into machines in a foolish attempt to mirror their values. If the Met. Office computer is handling orders asking it to look three months out, and an ethics graduate says that this is too future-oriented for a typical human and that it should be made to look less far out so as to better reflect human values - he should be told what an idiot he is being.
We use machines to complement human capabilities, not just to copy them. When it comes to discounting the future, machines will be able to see and influence further - and we would be well-advised to let them.
Much harm is done today due to temporal discounting. Governments look no further than election day. Machines can help put a stop to such stupidity and negligence - but we have to know enough to let them.
As Eliezer says, he doesn't propose doing much temporal discounting - except instrumentally. That kind of thing can be expected to go up against the wall as part of the "smarter, faster, wiser, better" part of his CEV.
And so we are in disagreement. But I hope you now understand that the disagreement is because our values are different rather than because I don't understand the concept of values. Ironically our values differ in that I prefer to preserve my values and those of my conspecifics beyond the Singularity, whereas you distrust those values and the flawed cognition behind them, and you wish to have those imperfect human things replaced by something less messy.
I don't see myself as doing any non-instrumental temporal discounting in the first place. So, for me personally, losing my non-instrumental temporal discounting doesn't seem like much of a loss.
However, I do think that our temporal myopia is going to fall by the wayside. We will stop screwing over the immediate future because we don't care about it enough. Myopic temporal discounting represents a primitive form of value - which is destined to go the way of cannibalism and slavery.
Regarding utility, utilities are just measures of satisfaction. They can be associated with anything.
It is a matter of fact that utilities are associated with actions in most agents - since agents have evolved to calculate utilities in order to allow them to choose between their possible actions.
I am not claiming that utilities are not frequently associated with outcomes. Utilities are frequently linked to outcomes - since most evolved agents are made so in such a way that they like to derive satisfaction by manipulating the external world.
However, nowhere in the definition of utility does it say that utilities are necessarily associated with external-world outcomes. Indeed, in the well-known phenomena of "wireheading" and "drug-taking" utility is divorced from external-world outcomes - and deliberately manufactured.
True. But in most economic analysis, terminal utilities are associated with outcomes; the expected utilities that become associated with actions are usually instrumental utilities.
Nevertheless, I continue to agree with you that in some circumstances, it makes sense to attach terminal utilities to actions. This shows up, for example, in discussions of morality from a deontological viewpoint. For example, suppose you have a choice of lying or telling the truth. You assess the consequences of your actions, and are amused to discover that there is no difference in the consequences - you will not be believed in any case. A utilitarian would say that there is no moral difference in this case between lying and telling the truth. A Kant disciple would disagree. And the way he would explain this disagreement to the utilitarian would be to attach a negative moral utility to the action of speaking untruthfully.
Utilities are often associated with states of the world, yes. However, here you seemed to balk at utilities that were not so associated. I think such values can still be called "utilities" - and "utility functions" can be used to describe how they are generated - and the standard economic framework accommodates this just fine.
What this idea doesn't fit into is the von Neumann–Morgenstern system - since it typically violates the independence axiom. However, that is not the end of the world. That axiom can simply be binned - and fairly often it is.
Unless you supply some restrictions, it is considerably more destructive than that. All axioms based on consequentialism are blown away. You said yourself that we can assign utilities so as to rationalize any set of actions that an agent might choose. I.e. there are no irrational actions. I.e. decision theory and utility theory are roughly as useful as theology.
No, no! That is like saying that a universal computer is useless to scientists - because it can be made to predict anything!
Universal action is a useful and interesting concept partly because it allows a compact, utility-based description of arbitrary computable agents. Once you have a utility function for an agent, you can then combine and compare its utility function with that of other agents, and generally use the existing toolbox of economics to help model and analyse the agent's behaviour. This is all surely a Good Thing.
I've never seen the phrase universal action before. Googling didn't help me. It certainly sounds like it might be an interesting concept. Can you provide a link to an explanation more coherent than the one you have attempted to give here?
As to whether a "utility-based" description of an agent that does not adhere to the standard axioms of utility is a "good thing" - well I am doubtful. Surely it does not enable use of the standard toolbox of economics, because that toolbox takes for granted that the participants in the economy are (approximately) rational agents.
Universal action is named after universal computation and universal construction.
Universal construction and universal action have some caveats about being compatible with constraints imposed by things like physical law. "Doing anything" means something like: being able to feed arbitrary computable sequences in parallel to your motor outputs. Sequences that fail due to severing your own head don't violate the spirit of the idea, though. As with universal computation, universal action is subject to resource limitations in practice. My coinage - AFAIK. Attribution: unpublished manuscript ;-)
Well, I'll just ignore the fact that universal construction means to me something very different than it apparently means to you. Your claim seems to be that we can 'program' a machine (which is already known to maximize utility) so as to output any sequence of symbols we wish it to output; program it by the clever technique of assigning a numeric utility to each possible infinite output string, in such a way that we attach the largest numeric utility to the specific string that we want.
And you are claiming this in the same thread in which you disparage all forms of discounting the future.
What am I missing here?
You have an alternative model of arbitrary computable agents to propose?
You don't think the ability to model an arbitrary computable agent is useful?
What is the problem here? Surely a simple utility-based framework for modelling the computable agent of your choice is an obvious Good Thing.
I see no problem modeling computable agents without even mentioning "utility".
I don't yet see how modeling them as irrational utility maximizers is useful, since a non-utility-based approach will probably be simpler.