SilasBarta comments on Morality as Parfitian-filtered Decision Theory? - Less Wrong

24 Post author: SilasBarta 30 August 2010 09:37PM


Comment author: Perplexed 30 August 2010 10:59:37PM 7 points [-]

I dislike this. Here is why:

  • I dislike all examples involving omniscient beings.
  • I dislike the suggestion that natural selection fine-tuned (or filtered) our decision theory to the optimal degree of irrationality needed to do well in lost-in-the-desert situations involving omniscient beings.
  • I would prefer to assume that natural selection endowed us with a rational or near-rational decision theory and then invested its fine tuning into adjusting our utility functions.
  • I would also prefer to assume that natural selection endowed us with sub-conscious body language and other cues which make us very bad at lying.
  • I would prefer to assume that natural selection endowed us with a natural aversion to not keeping promises.
  • Therefore, my analysis of hitchhiker scenarios would involve three steps: (1) The hitchhiker rationally promises to pay. (2) The (non-omniscient) driver looks at the body language and estimates a low probability that the promise is a lie, so it is rational for the driver to take the hitchhiker into town. (3) The hitchhiker rationally pays because the disutility of paying is outweighed by the disutility of breaking a promise.
  • That is, instead of giving us an irrational decision theory, natural selection tuned the body language, the body-language analysis capability, and the "honor" module (a disutility for breaking promises), tuning them so that the average human does well in interactions with other average humans in the kinds of realistic situations that humans face.
  • And it all works with standard game/decision theory from Econ 401. All of morality is there in the utility function as can be measured by standard revealed-preference experiments.
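The three-step analysis above really does go through on standard expected-utility reasoning alone, once the "honor" disutility is part of the utility function. Here is a toy sketch in Python; all the numbers (fare, trip cost, honor penalty, estimated probability of lying) are made up purely for illustration.

```python
# Toy sketch of the three-step hitchhiker analysis, with illustrative
# (made-up) utilities. No special decision theory is needed: each agent
# just maximizes expected utility given a utility function that includes
# an "honor" penalty for breaking promises.

COST_OF_PAYING = -100   # hitchhiker pays the driver $100 once in town
HONOR_PENALTY = -500    # disutility of breaking a promise

def hitchhiker_pays():
    """Step (3): once in town, compare paying vs. reneging."""
    u_pay = COST_OF_PAYING
    u_renege = HONOR_PENALTY
    return u_pay > u_renege  # paying beats breaking the promise

def driver_takes_rider(p_lie=0.1, fare=100, cost_of_trip=20):
    """Step (2): the driver reads body language, estimates a low
    probability that the promise is a lie, and computes expected utility."""
    expected_utility = (1 - p_lie) * fare - cost_of_trip
    return expected_utility > 0

print(hitchhiker_pays())      # True: -100 > -500
print(driver_takes_rider())   # True: 0.9 * 100 - 20 = 70 > 0
```

With these (hypothetical) numbers, both decisions come out rational under plain expected-utility maximization, which is the bullet-point argument in miniature.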

Parental care doesn't force us to modify standard decision theory either. Parents clearly include their children's welfare in their own utility functions.

If you and EY think that the PD players don't like to rat on their friends, all you are saying is that those standard PD payoffs aren't the ones that match the players' real utility functions, because the real functions would include a hefty penalty for being a rat.
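The point about re-specifying the payoffs can be made concrete. Below is a minimal sketch, with made-up numbers: under the canonical PD matrix, defection strictly dominates; fold a "hefty penalty for being a rat" into the utility function and cooperation dominates instead, with the decision theory itself left untouched.

```python
# Illustrative (made-up) PD payoffs: payoffs[(my_move, their_move)] = my
# utility. 'C' = cooperate (stay quiet), 'D' = defect (rat on your friend).
canonical = {('C', 'C'): 3, ('C', 'D'): 0,
             ('D', 'C'): 5, ('D', 'D'): 1}

RAT_PENALTY = 4  # hypothetical disutility of ratting on a friend

# The "real" utility function: canonical payoffs minus the rat penalty
# whenever my move is 'D'.
adjusted = {moves: u - (RAT_PENALTY if moves[0] == 'D' else 0)
            for moves, u in canonical.items()}

def dominant_move(payoffs):
    """Return the move that does at least as well against either opponent
    move, or None if neither move dominates."""
    if all(payoffs[('D', o)] >= payoffs[('C', o)] for o in 'CD'):
        return 'D'
    if all(payoffs[('C', o)] >= payoffs[('D', o)] for o in 'CD'):
        return 'C'
    return None

print(dominant_move(canonical))  # 'D': defection dominates
print(dominant_move(adjusted))   # 'C': cooperation dominates
```

Same standard game theory throughout; only the utility function changed, which is exactly the revealed-preference claim above.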

Maybe we need a new decision theory for AIs. I don't know; I have barely begun to consider the issues. But we definitely don't need a new one to handle human moral behavior. Not for these three examples, and not if we think that acting morally is rational.

Upvoted simply for bringing these issues into the open.

Comment author: SilasBarta 31 August 2010 01:49:01AM *  1 point [-]

Also, I should clarify another point:

If you and EY think that the PD players don't like to rat on their friends, all you are saying is that those standard PD payoffs aren't the ones that match the players' real utility functions, because the real functions would include a hefty penalty for being a rat.

My point was that I previously agreed with EY that the payoff matrix doesn't accurately represent how people would perceive the situation if they were in a LPDS, but that I now think people's reaction could just as well be explained by assuming they accept the canonical payoff matrix as accurate but pursue those utilities under a constrained decision theory, and that their intuitions stem from that decision theory rather than from valuing the outcomes differently.

Comment author: Perplexed 31 August 2010 02:25:10AM 1 point [-]

Ok, I think I see the distinction. I recognize that it is tempting to postulate a two-part decision theory because it seems that we have two different kinds of considerations to deal with. It seems we just can't compare ethical motivations like loyalty with selfish motivations like getting a light sentence. "It is like comparing apples and oranges!", screams our intuition.

However, my intuition has a piece screaming even louder: "It is one decision, you idiot! Of course you have to bring all of the various kinds of considerations together to make the decision. Shut up and calculate, then decide."
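The "one decision" view can be sketched in a few lines: score each option on every kind of consideration (the numbers here are invented purely for illustration), sum the scores into a single utility, and take the argmax. Apples and oranges end up on one scale.

```python
# Minimal sketch of "bring all considerations together, then decide":
# each option gets illustrative scores for a selfish consideration
# (sentence length) and an ethical one (loyalty), summed into one utility.

options = {
    'rat':        {'sentence': -1, 'loyalty': -6},  # light sentence, betrayal
    'stay_quiet': {'sentence': -3, 'loyalty':  0},  # heavier sentence, no betrayal
}

def decide(options):
    """Shut up and calculate: pick the option with the highest total utility."""
    return max(options, key=lambda o: sum(options[o].values()))

print(decide(options))  # 'stay_quiet': total -3 beats total -7
```

With these hypothetical weights the "honor"-laden option wins, but the calculation is one ordinary utility maximization, not a two-part theory.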