Wei_Dai comments on Should I believe what the SIAI claims? - Less Wrong

Post author: XiXiDu 12 August 2010 02:33PM


Comment author: Wei_Dai 13 August 2010 04:11:12AM 6 points

Is morality actually:

  1. a specific algorithm/dynamic for judging values, or
  2. a complicated blob of values like happiness, love, creativity, novelty, self-determination, fairness, life (as in protecting thereof), etc.?

If it's 1, can we say something interesting and non-trivial about the algorithm, besides the fact that it's an algorithm? In other words, everything can be viewed as an algorithm, but what's the point of viewing morality as an algorithm?

If it's 2, why do we think that two people on opposite sides of the Earth are referring to the same complicated blob of values when they say "morality"? I know the argument about the psychological unity of humankind (not enough time for significant genetic divergence), but what about cultural/memetic evolution?

I'm guessing the answer to my first question is something like, morality is an algorithm whose current "state" is a complicated blob of values like happiness, love, ... so both of my other questions ought to apply.

Comment author: Vladimir_M 13 August 2010 05:51:16AM 1 point

Wei_Dai:

If it's 2, why do we think that two people on opposite sides of the Earth are referring to the same complicated blob of values when they say "morality"? I know the argument about the psychological unity of humankind (not enough time for significant genetic divergence), but what about cultural/memetic evolution?

You don't even have to do any cross-cultural comparisons to make such an argument. Considering the insights from modern behavioral genetics, individual differences within any single culture will suffice.

Comment author: [deleted] 13 August 2010 05:07:00AM 1 point

but what about cultural/memetic evolution?

There is no reason to be at all tentative about this. There's tons of cog sci data about what people mean when they talk about morality. It varies hugely (but predictably) across cultures.

Comment author: Sniffnoy 13 August 2010 05:18:11AM 0 points

Why are you using "algorithm/dynamic" here instead of "function" or "partial function"? (On what space? I'll ignore that issue, just as you have...) Is it supposed to be stateful? I'm not even clear what that would mean. Or is a function what you mean by #2? I'm not really clear on how these differ.

Comment author: Wei_Dai 13 August 2010 06:31:23AM 1 point

You might have gotten confused because I quoted Psy-Kosh's phrase "specific algorithm/dynamic for judging values," whereas Eliezer's original idea, I think, was more like an algorithm for changing one's values in response to moral arguments. Here are Eliezer's own words:

I would say, by the way, that the huge blob of a computation is not just my present terminal values (which I don't really have - I am not a consistent expected utility maximizer); the huge blob of a computation includes the specification of those moral arguments, those justifications, that would sway me if I heard them.

Comment author: Unknowns 13 August 2010 06:43:09AM 2 points

Others have pointed out that this definition is actually quite unlikely to be coherent: people would likely be persuaded, ultimately, by different moral arguments and justifications if they had different experiences, heard arguments in different orders, etc.

Comment author: Wei_Dai 13 August 2010 07:02:33AM 5 points

Others have pointed out that this definition is actually quite unlikely to be coherent

Yes, see here for an argument to that effect by Marcello and subsequent discussion about it between Eliezer and myself.

I think the metaethics sequence is probably the weakest of Eliezer's sequences on LW. I wonder if he agrees with that, and if so, what he plans to do about this subject for his rationality book.

Comment author: wedrifid 13 August 2010 07:24:05AM 3 points

I think the metaethics sequence is probably the weakest of Eliezer's sequences on LW. I wonder if he agrees with that, and if so, what he plans to do about this subject for his rationality book.

This is somewhat of a concern given Eliezer's interest in Friendliness!

Comment author: cousin_it 13 August 2010 08:23:16AM 2 points

As far as I can understand, Eliezer has promoted two separate ideas about ethics: defining personal morality as a computation in the person's brain rather than something mysterious and external, and extrapolating that computation into smarter creatures. The former idea is self-evident, but the latter (and, by extension, CEV) has received a number of very serious blows recently. IMO it's time to go back to the drawing board. We must find some attack on the problem of preference, latch onto some small corner that allows us to make precise statements, and then build from there.

Comment author: [deleted] 13 August 2010 07:07:36AM 1 point

The linked discussion is very nice.