Oscar_Cunningham comments on Morality as Parfitian-filtered Decision Theory? - Less Wrong

Post author: SilasBarta 30 August 2010 09:37PM

Comment author: Perplexed 30 August 2010 10:59:37PM 7 points

I dislike this. Here is why:

  • I dislike all examples involving omniscient beings.
  • I dislike the suggestion that natural selection fine-tuned (or filtered) our decision theory to the optimal degree of irrationality needed to do well in lost-in-the-desert situations involving omniscient beings.
  • I would prefer to assume that natural selection endowed us with a rational or near-rational decision theory and then invested its fine tuning into adjusting our utility functions.
  • I would also prefer to assume that natural selection endowed us with sub-conscious body language and other cues which make us very bad at lying.
  • I would prefer to assume that natural selection endowed us with a natural aversion to not keeping promises.
  • Therefore, my analysis of hitchhiker scenarios would involve 3 steps. (1) The hitchhiker rationally promises to pay. (2) The (non-omniscient) driver looks at the body language and estimates a low probability that the promise is a lie; therefore it is rational for the driver to take the hitchhiker into town. (3) The hitchhiker rationally pays because the disutility of paying is outweighed by the disutility of breaking a promise. (A toy version of these steps is sketched in code just after this list.)
  • That is, instead of giving us an irrational decision theory, natural selection tuned our body language, our capability to analyze body language, and the "honor" module (a disutility for breaking promises) so that the average human does well in interactions with other average humans in the kinds of realistic situations that humans face.
  • And it all works with standard game/decision theory from Econ 401. All of morality is there in the utility function, as can be measured by standard revealed-preference experiments.
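Here is a minimal sketch of those three steps in Python, using nothing but standard expected-utility comparisons. All the numbers (the fare, the honor penalty, the driver's detour cost, the estimated lie probability) are invented for illustration:

    PAY_COST = 100        # disutility of handing over the fare in town
    HONOR_PENALTY = 1000  # disutility of breaking a promise (the "honor" module)

    def hitchhiker_pays_in_town() -> bool:
        """Step 3: in town, paying beats the honor penalty for reneging."""
        return -PAY_COST > -HONOR_PENALTY

    def driver_gives_ride(p_promise_is_lie: float) -> bool:
        """Step 2: the driver reads body language, estimates the probability
        the promise is a lie, and drives if the expected fare covers the cost."""
        gas_cost = 20  # driver's cost of the detour (assumed)
        return (1 - p_promise_is_lie) * PAY_COST > gas_cost

    # Step 1: because step 3 makes paying rational given the honor penalty,
    # the promise to pay is sincere rather than a lie.
    assert hitchhiker_pays_in_town()
    assert driver_gives_ride(p_promise_is_lie=0.1)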

Parental care doesn't force us to modify standard decision theory either. Parents clearly include their children's welfare in their own utility functions.

If you and EY think that the PD players don't like to rat on their friends, all you are saying is that those standard PD payoffs aren't the ones that match the players' real utility functions, because the real functions would include a hefty penalty for being a rat.
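To make that concrete, here are the usual textbook PD payoffs next to "real" utilities that include an assumed penalty for ratting (all numbers invented): under the textbook payoffs defection dominates, but under the real utility function cooperation does, with no change to the decision theory itself.

    # Textbook PD payoffs for the row player, indexed by (my_move, their_move).
    standard = {
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    RAT_PENALTY = 4  # assumed disutility of being a rat

    def real_utility(my_move, their_move):
        """Actual utility: textbook payoff minus the penalty for ratting."""
        penalty = RAT_PENALTY if my_move == "D" else 0
        return standard[(my_move, their_move)] - penalty

    for their_move in ("C", "D"):
        best = max(("C", "D"), key=lambda m: real_utility(m, their_move))
        print(f"Against {their_move}, the best reply is {best}")  # "C" both times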

Maybe we need a new decision theory for AIs. I don't know; I have barely begun to consider the issues. But we definitely don't need a new one to handle human moral behavior. Not for these three examples, and not if we think that acting morally is rational.

Upvoted simply for bringing these issues into the open.

Comment author: Oscar_Cunningham 31 August 2010 06:21:46PM 0 points

I dislike all examples involving omniscient beings.

I would also prefer to assume that natural selection endowed us with sub-conscious body language and other cues which make us very bad at lying.

The only thing Omega uses its omniscience for is to detect whether you're lying, so if humans are bad at lying convincingly you don't need omniscience.

Also, "prefer to assume" indicates extreme irrationality: you can't be rational if you choose what to believe based on anything other than the evidence; see Robin Hanson's post You Are Never Entitled to Your Opinion. Of course, you probably didn't mean that; you probably just meant:

Natural selection endowed us with sub-conscious body language and other cues which make us very bad at lying.

Say what you mean; otherwise you end up with Belief in Belief.

Comment author: Perplexed 31 August 2010 06:30:24PM 2 points

As I have answered repeatedly on this thread, when I said "prefer to assume", I actually meant "prefer to assume". If you are interpreting that as "prefer to believe", you are not reading carefully enough.

One makes (sometimes fictional) assumptions when constructing a model. One is only irrational when one imagines that a model represents reality.

If it makes you happy, insert a link to some profundity by Eliezer about maps and territories at this point in my reply.

Comment author: Oscar_Cunningham 31 August 2010 07:45:14PM 1 point

Heh, serves me right for not paying attention.

Comment author: Perplexed 31 August 2010 06:50:53PM 1 point

The only thing Omega uses its omniscience for is to detect whether you're lying...

If I understand the OP correctly, it is important to him that this example not include any chit-chat between the hitchhiker and Omega. So what Omega actually detects is propensity to pay, not lying.

Minor point.

Comment author: SilasBarta 31 August 2010 07:06:04PM 0 points

In the ideal situation, it's important that there be no direct communication. A realistic situation can match this ideal one if you remove the constraint of "no chit-chat" but add the difficulty of lying.

Yes, this allows you (in the realistic scenario) to use an "honor hack" to make up for deficiencies in your decision theory (or utility function), but my point is that you can avoid this complication by simply having a decision theory that gives weight to SAMELs.
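As a toy contrast (all numbers invented for illustration), compare a decision rule that counts only the causal consequences of the in-town choice with one that also gives weight to the SAMEL, i.e. to the fact that Omega's earlier prediction tracks your disposition to pay:

    RESCUE, COST = 10**6, 100  # illustrative utilities

    def causal_only(pays: bool) -> float:
        """Treats the rescue as a sunk event; only the fare is still at stake."""
        return -COST if pays else 0.0  # prefers not paying

    def weighs_samel(pays: bool) -> float:
        """Credits the acausal link: Omega rescues those disposed to pay."""
        p_rescued = 0.99 if pays else 0.01  # prediction tracks disposition (assumed)
        return p_rescued * RESCUE - (COST if pays else 0)

    for rule in (causal_only, weighs_samel):
        choice = max((True, False), key=rule)
        print(rule.__name__, "-> pay" if choice else "-> don't pay")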

Comment author: Perplexed 31 August 2010 08:29:53PM 2 points

Gives how much weight to SAMELs? Do we need to know our evolutionary (selective) history in order to perform the calculation?

My off-the-cuff objections to "constraints" were expressed on another branch of this discussion.

It is pretty clear that you and I have different "aesthetics" as to what counts as a "complication".

Comment author: SilasBarta 31 August 2010 08:53:24PM 0 points

Gives how much weight to SAMELs? Do we need to know our evolutionary (selective) history in order to perform the calculation?

The answers determine whether you're trying to make your own decision theory reflectively consistent, or looking at someone else's. But either way, finding the exact relative weight and exact relevance of the evolutionary history is beyond the scope of the article; what's important is that SAMELs' explanatory power be used at all.

My off-the-cuff objections to "constraints" were expressed on another branch of this discussion.

Like I said in my first reply to you, the revealed preferences don't uniquely determine a utility function: if someone pays Omega in PH (Parfit's Hitchhiker), then you can explain that either with a utility function that values just the survivor, or one that values the survivor and Omega. You have to look at desiderata other than UF consistency with revealed preferences.
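As a toy illustration of that underdetermination (payoff terms invented): both utility functions below prefer paying, so the single observed act of paying cannot distinguish between them.

    COST = 100  # the fare (illustrative)

    def u_honor(pays: bool) -> float:
        """Values only the survivor; paying avoids an assumed honor penalty."""
        return -COST if pays else -1000

    def u_altruist(pays: bool) -> float:
        """Also values Omega, weighting Omega's gain of the fare at 2x (assumed)."""
        return (-COST + 2 * COST) if pays else 0

    for u in (u_honor, u_altruist):
        pays = max((True, False), key=u)
        print(u.__name__, "chooses to pay" if pays else "chooses not to pay")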

It is pretty clear that you and I have different "aesthetics" as to what counts as a "complication".

Well, you're entitled to your own aesthetics, but not your own complexity. (Okay, you are, but I wanted it to sound catchy.) As I said in footnote 2, trying to account for someone's actions by positing more terminal values (i.e., positive terms in the utility function) requires strictly more assumptions than positing fewer values and then drawing on the implications of assumptions you'd have to make anyway.