ata comments on Should I believe what the SIAI claims? - Less Wrong

Post author: XiXiDu 12 August 2010 02:33PM, 23 points




Comment author: ata 13 August 2010 05:17:03AM, 1 point

The controversies between human beings about which specific sets of values are moral, at every scale large and small, are legendary beyond cliche.

It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning. In other words, human brains have a common moral architecture, and disagreements are at the level of instrumental, rather than terminal, values and result from mistaken factual beliefs and reasoning errors.

You may or may not find that convincing (you'll get to the arguments regarding it if you're reading the sequences), but assuming it is true, then "morality is a specific set of values" is correct, though vague. More precisely, morality is a very complicated set of terminal values which, in this world, happens to be embedded solely in a species of minds that are not naturally very good at rationality, leading to massive disagreement about instrumental values (though most people do not notice that the disagreement is about instrumental values).

Comment author: wedrifid 13 August 2010 06:42:54AM, 4 points

It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning. In other words, human brains have a common moral architecture, and disagreements are at the level of instrumental, rather than terminal, values and result from mistaken factual beliefs and reasoning errors.

It is? That's a worry. Consider this a +1 for "That thesis is totally false and only serves signalling purposes!"

Comment author: ata 13 August 2010 11:42:32AM, 2 points

It is?

I... think it is. Maybe I've gotten something terribly wrong, but I got the impression that this is one of the points of the complexity of value and metaethics sequences, and I seem to recall that it's the basis for expecting humanity's extrapolated volition to actually cohere.

Comment author: wedrifid 13 August 2010 02:07:37PM, 4 points

I seem to recall that it's the basis for expecting humanity's extrapolated volition to actually cohere.

This whole area isn't covered all that well (as Wei noted). I assumed that CEV would rely on solving an implicit cooperation problem between conflicting moral systems. It doesn't seem at all unlikely to me that some people are intrinsically selfish to some degree, and that their extrapolated volitions would be quite different.

Note that I'm not denying that some people present (or usually just assume) the thesis you present. I'm just glad that there are usually others who argue against it!

Comment author: Jonathan_Graehl 14 August 2010 08:15:40AM, 0 points

solving an implicit cooperation problem

That's exactly what I took CEV to entail.

Comment author: [deleted] 13 August 2010 06:28:05AM, 1 point

Now this is a startling claim.

(you'll get to the arguments regarding that if you're reading the sequences)

Be more specific!

Comment author: Nisan 20 August 2010 04:05:21AM, 0 points

It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning.

Maybe it's true if you also specify "if they were fully capable of modifying their own moral intuitions." I have an intuition (an unexamined belief? a hope? a sci-fi trope?) that humanity as a whole will continue to evolve morally and roughly converge on a morality that resembles current first-world liberal values more than, say, Old Testament values. That is, it would converge in the limit of global prosperity, peace, and dialogue, assuming no singularity occurs and the average lifespan stays constant. You can call this naive if you want to; I don't know whether it's true. It's what I imagine Eliezer means when he talks about "humanity growing up together".

This growing-up process currently involves raising children, which can be viewed as a crude way of rewriting your personality from scratch, and excising vestiges of values you no longer endorse. It's been an integral part of every culture's moral evolution, and something like it needs to be part of CEV if it's going to actually converge.

Comment author: Vladimir_Nesov 13 August 2010 07:59:22PM, 0 points

It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning.

That's not plausible. That would amount to some sort of objective morality, and there is no such thing. Humans have brains, and brains are complicated. You can't expect all of them to imply exactly the same preferences.

Now, the non-crazy version of what you suggest is that preferences of most people are roughly similar, that they won't differ substantially in major aspects. But when you focus on detail, everyone is bound to want their own thing.