cousin_it comments on Why Do We Engage in Moral Simplification?

Post author: Wei_Dai 14 February 2011 01:16AM (24 points)

Comment author: cousin_it 14 February 2011 03:39:20AM * 3 points

I agree with that, but I feel they don't shift by very much. And when they do shift, the causality might well run in the other direction: sometimes we change our professed morality to justify our preferred actions. And most of our actions are caused by reasons other than our current professed morality anyway, so it's not likely to play a large role in the preferences that CEV will infer from us.

Comment author: Wei_Dai 14 February 2011 04:17:46AM * 5 points

If we consider a human as a group of agents with different values, we could say that the conscious self's values shift greatly when it adopts a moral system, but its power is limited, because most of the human's actions are not under its direct control. For example, someone might eat too much and gain weight as a result, even though that is against their conscious desires. Depending on technological advances, that balance of power could change, say if someone came up with a pill that lets you control your appetite.

FAI essentially lets the conscious self have total dominance, if it chooses to. Why should CEV weigh its values according to the balance of power as of 2011?

Comment author: Vladimir_Nesov 14 February 2011 10:47:56AM 3 points

If we consider a human as a group of agents with different values

Things like this are why it looks like a good idea to me to taboo "values". A human includes many heuristics that together add up to what counts as an "agent". Separate aspects or parts of a human include fewer heuristics, which makes those parts less like agents, and "values" for those parts are even less well defined than for the whole.

So "human as group of agents with different values" translates as "human as a collection of parts with different structure", which sounds far less explanatory (as it should).

Comment author: Wei_Dai 14 February 2011 07:34:14PM * 0 points

I agree that it can sometimes be useful to taboo "values". But I'm not sure why it would be helpful to taboo it here. I could rephrase my comment as saying that the subset of heuristics that corresponds to the conscious self, after adopting a new moral system, would cause a large shift in actions if it could (i.e., if it were given tools to overpower other, conflicting heuristics), so it's not clear that adopting new moral systems should or would have little effect on CEV. Does tabooing "values" bring any new insights to this discussion?

Comment author: Vladimir_Nesov 14 February 2011 08:03:55PM * 0 points

Does tabooing "values" bring any new insights to this discussion?

Probably not, but it lifts the illusion of understanding, which is what tabooing is all about. It's good practice to avoid unnecessary imprecision and even harmless equivocation.

(Also, I'd include all the heuristics in the "conscious self", not just some of them. They all have a hand in forming conscious decisions, and the inability to know or precisely alter the workings of particular heuristics applies to all of them alike. At least, the same criteria that exclude some of the heuristics from your conscious self should allow including external tools in it.)