Lumifer comments on Welcome to Less Wrong! (7th thread, December 2014) - Less Wrong

16 Post author: Gondolinian 15 December 2014 02:57AM


Comments (635)


Comment author: Lumifer 27 March 2015 02:45:55PM 1 point [-]

There is a set of reasonably objective facts about what values people have, and how your actions would impact them. That leads to reasonably objective answers about what you should and shouldn't do in a specific situation.

Nope. It leads to reasonably objective descriptive answers about what the consequences of your actions will be. It does not lead to normative answers about what you should or should not do.

Comment author: [deleted] 27 March 2015 05:45:47PM 2 points [-]

Okay, I guess I'm still confused. So far I've loved everything I've read on this site and have been able to understand; I've appreciated/agreed with the first 110 pages of the Rationality ebook, felt a little skeptical for liking it so completely, and then reassured myself with the Aumann's agreement theorem it mentions. So I feel like if this utility theorem which bases morality on preferences is commonly accepted around here, I'll probably like it once I fully understand it. So bear with me as I ask more questions...

  1. Whose preferences am I valuing? Only my own? Everyone's equally? Those of an "average human"? What about future humans?

  2. Yeah, by subjective, I meant that different humans would care about different things. I'm not really worried about basic morality, like not beating people up and stuff, but...

I have a feeling the hardest part of morality will now be determining where to strike a balance between individual human freedom and concern for the future of humanity.

Like, to what extent is it permissible to harm the environment? If something, like eating sugar for example, makes people dumber, should it be limited? Is population control like China's a good thing?

Can you really say that most humans agree on where this line between individual freedom and concern for the future of humanity should be drawn? It seems unlikely...

Comment author: Lumifer 27 March 2015 07:20:56PM 1 point [-]

I'm the wrong person to ask about "this utility theorem which bases morality on preferences" since I don't really subscribe to this point of view.

I use the word "morality" as a synonym for "system of values" and I think that these values are multiple, somewhat hierarchical, and are NOT coherent. Moral decisions are generally taken on the basis of a weighted balance between several conflicting values.

Comment author: dxu 27 March 2015 06:19:23PM *  0 points [-]
  1. By definition, you can only care about your own preferences. That being said, it's certainly possible for you to have a preference for other people's preferences to be satisfied, in which case you would be (indirectly) caring about the preferences of others.

  2. The question of whether humans all value the same thing is a controversial one. Most Friendly AI theorists believe, however, that the answer is "yes", at least if you extrapolate their preferences far enough. For more details, take a look at Coherent Extrapolated Volition.

Comment author: seer 30 March 2015 12:33:41AM 6 points [-]

Most Friendly AI theorists believe, however, that the answer is "yes", at least if you extrapolate their preferences far enough.

Do they have any arguments for this besides wishful thinking?

Comment author: hairyfigment 30 March 2015 12:44:36AM 0 points [-]

I told him "they" assume no such thing - his own link is full of talk about how to deal with disagreements.

Comment author: seer 30 March 2015 12:59:34AM 4 points [-]

Yes, I've read most of the arguments; they strike me as highly speculative and hand-wavy.

Comment author: hairyfigment 30 March 2015 01:10:40AM *  -1 points [-]

This is an impressive failure to respond to what I said, which again was that you asked for an explanation of false data. "Most Friendly AI theorists" do not appear to think that extrapolation will bring all human values into agreement, so I don't know what "arguments" you refer to or even what you think they seek to establish. Certainly the link above has Eliezer assuming the opposite (at least for the purpose of safety-conscious engineering).

ETA: This is the link to the full sub-thread. Note my response to dxu.

Comment author: [deleted] 27 March 2015 08:53:50PM 0 points [-]
  1. Okay, that makes sense, but does this mean you can't say someone else did something wrong, unless he was acting inconsistently with his personal preferences?

  2. Ah, okay, I've been reading most hyperlinks here, but that one looks pretty long, so I will come back to it after I finish Rationality (or maybe my question will even be answered later on in the book...)

Comment author: hairyfigment 27 March 2015 06:45:36PM -1 points [-]

That is definitely not the idea behind CEV, though it may reflect the idea that a sizable majority will mostly share the same values under extrapolation.

Comment author: TheAncientGeek 27 March 2015 08:06:37PM 0 points [-]

Is that a fact? It's true that the theories often discussed here, utilitarianism and so on, don't solve the motivation problem, but that doesn't mean no theory does.