Pavitra comments on Morality as Parfitian-filtered Decision Theory? - Less Wrong

Post author: SilasBarta 30 August 2010 09:37PM


Comment author: Perplexed 30 August 2010 10:59:37PM 7 points

I dislike this. Here is why:

  • I dislike all examples involving omniscient beings.
  • I dislike the suggestion that natural selection fine-tuned (or filtered) our decision theory to the optimal degree of irrationality needed to do well in lost-in-desert situations involving omniscient beings.
  • I would prefer to assume that natural selection endowed us with a rational or near-rational decision theory and then invested its fine tuning into adjusting our utility functions.
  • I would also prefer to assume that natural selection endowed us with sub-conscious body language and other cues which make us very bad at lying.
  • I would prefer to assume that natural selection endowed us with a natural aversion to not keeping promises.
  • Therefore, my analysis of hitchhiker scenarios would involve 3 steps. (1) The hitchhiker rationally promises to pay. (2) The (non-omniscient) driver looks at the body language and estimates a low probability that the promise is a lie, so it is rational for the driver to take the hitchhiker into town. (3) The hitchhiker rationally pays because the disutility of paying is outweighed by the disutility of breaking a promise.
  • That is, instead of giving us an irrational decision theory, natural selection tuned the body language, the body language analysis capability, and the "honor" module (disutility for breaking promises) - tuned them so that the average human does well in interaction with other average humans in the kinds of realistic situations that humans face.
  • And it all works with standard game/decision theory from Econ 401. All of morality is there in the utility function as can be measured by standard revealed-preference experiments.
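The three-step analysis above can be sketched numerically. All utility values below are made up for illustration; only their ordering matters, and the claim is just that standard expected-utility reasoning suffices once the "honor" disutility is in the utility function:

```python
# Illustrative sketch of the hitchhiker analysis above.
# All numbers are hypothetical; only their relative ordering matters.

PAYMENT_COST = -100     # disutility of handing over the fare
BROKEN_PROMISE = -1000  # disutility from the "honor" module

# Step 3: once in town, the hitchhiker compares paying vs. reneging.
pay = PAYMENT_COST        # utility of keeping the promise: -100
renege = BROKEN_PROMISE   # utility of breaking it: -1000
assert pay > renege       # so paying is the rational choice

# Step 2: the driver reads body language and estimates a low
# probability that the promise is a lie, then computes expected payment.
p_lie = 0.05
FARE = 100
drive = (1 - p_lie) * FARE + p_lie * 0  # expected payment = 95.0
assert drive > 0  # rational for the driver to take the hitchhiker
```

Nothing beyond ordinary maximization is needed: the promise-keeping step falls out of the utility function, not of a modified decision theory.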

Parental care doesn't force us to modify standard decision theory either. Parents clearly include their children's welfare in their own utility functions.

If you and EY think that the PD players don't like to rat on their friends, all you are saying is that those standard PD payoffs aren't the ones that match the players' real utility functions, because the real functions would include a hefty penalty for being a rat.
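That point can be made concrete: subtracting a hypothetical "rat penalty" from the standard textbook PD payoffs yields a different matrix, and under that matrix the game is no longer a Prisoner's Dilemma at all. The numbers below are illustrative:

```python
# Standard textbook PD payoffs (higher is better), keyed by
# (my_move, their_move). C = stay silent, D = rat on your friend.
standard = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

RAT_PENALTY = 4  # hypothetical disutility for being a rat

# The players' *real* utilities subtract the penalty whenever I defect.
real = {(me, them): u - (RAT_PENALTY if me == "D" else 0)
        for (me, them), u in standard.items()}

# Under the standard matrix, D strictly dominates C...
assert standard[("D", "C")] > standard[("C", "C")]
assert standard[("D", "D")] > standard[("C", "D")]
# ...but under the real utilities, C dominates D instead.
assert real[("C", "C")] > real[("D", "C")]   # 3 > 1
assert real[("C", "D")] > real[("D", "D")]   # 0 > -3
```

So the apparent "irrational cooperation" is just rational play in the game defined by the players' actual utility functions.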

Maybe we need a new decision theory for AIs. I don't know; I have barely begun to consider the issues. But we definitely don't need a new one to handle human moral behavior. Not for these three examples, and not if we think that acting morally is rational.

Upvoted simply for bringing these issues into the open.

Comment author: Pavitra 31 August 2010 02:04:14AM 1 point

When you say you "prefer to assume", do you mean:

  1. you want to believe?

  2. your prior generally favors such? What evidence would persuade you to change your mind?

  3. you arrived at this belief through evidence? What evidence persuaded you?

  4. none of the above? Please elaborate.

  5. not even 4 is right -- my question is wrong? Please elaborate.

Comment author: Perplexed 31 August 2010 02:15:24AM 2 points

When you say you "prefer to assume", [what] do you mean?

4

I mean that making assumptions as I suggest leads to a much more satisfactory model of the issues being discussed here. I don't claim my viewpoint is closer to reality (though the lack of an omniscient Omega certainly ought to give me a few points for style in that contest!). I claim that my viewpoint leads to a more useful model - it makes better predictions, is more computationally tractable, is more suggestive of ways to improve human institutions, etc. All of the things you want a model to do for you.

Comment author: Pavitra 31 August 2010 02:36:36AM 0 points

But how did you come to locate this particular model in hypothesis-space? Surely some combination of 2 and 3?

Comment author: Perplexed 31 August 2010 02:58:30AM 2 points

I read it in a book. It is quite standard.

And I'm pretty sure that the people who first invented it were driven by modeling motivations, rather than experiment. Mathematical techniques already exist to solve maximization problems. The first field which really looked at the issues in a systematic way was microeconomics - and this kind of model is the kind of thing that would occur to an economist. It all fits together into a pretty picture; most of the unrealistic aspects don't matter all that much in practice; bottom line is that it is the kind of model that gets you tenure if you are an Anglo-American econ professor.

Really and truly, the motivation was almost certainly not "Is this the way it really works?". Rather it was, "What is a simple picture that captures the main features of the truth, where 'main' means the aspects that I can, in principle, quantify?"