Vladimir_Nesov comments on Open Thread: April 2009 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I have a question for Eliezer. I went back and reread your sequence on metaethics, and the amount of confusion in the comments struck me, so now I want to make sure that I understood you correctly. After rereading, my interpretation didn't change, but I'm still unsure. So, does this summarize your position accurately:
A simple mind has a bunch of terminal values (or maybe just one) summarized in a utility function. Morality for this mind, or rather not morality but the thing it has that is analogous to morality in humans (depending on how you define "morality"), is summed up in this utility function. This is the only source of shouldness for that simple mind.
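The "simple mind" described above can be sketched as a minimal expected utility maximizer. This is an illustrative toy, not anything from the original post: the function and variable names, and the toy outcome distribution, are all assumptions made up for the example.

```python
# Sketch of a "simple mind": an agent whose only source of "shouldness"
# is a fixed utility function. It picks the action with the highest
# expected utility. All names and numbers here are hypothetical.

def expected_utility(action, outcomes, utility):
    """outcomes maps each action to a list of (probability, outcome) pairs."""
    return sum(p * utility(o) for p, o in outcomes[action])

def choose(actions, outcomes, utility):
    """Return the action maximizing expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Toy example: the utility of an outcome is just its numeric value.
outcomes = {
    "safe":   [(1.0, 1.0)],              # certain payoff of 1
    "gamble": [(0.5, 0.0), (0.5, 3.0)],  # coin flip between 0 and 3
}
best = choose(outcomes.keys(), outcomes, lambda o: o)
print(best)  # "gamble", since its expected utility is 1.5 > 1.0
```

For such an agent there is no further question of what it "should" do: the utility function settles it. The human case discussed next is harder precisely because nothing this explicit is available.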
For humans, the situation is more complex. We have preferences which are like a utility function, but aren't one, because we aren't expected utility maximizers. Moreover, these preferences change depending on a number of factors. But this isn't the source of shouldness we are looking for. Buried deep in the human mind is a legitimate utility function, or at least something like one, which summarizes that human's terminal values, thus providing that source of shouldness. This utility function is very hard to discover due to human psychology, but it exists. The preference set of any given human is an approximation of that human's utility function (though not necessarily a good one), subject, of course, to the many biases humans are prone to.
The final essential point is that, due to the psychological unity of mankind, the utility functions of each person are likely to be very similar, if not the same, so when we call something "right" or "moral" we are referring to (nearly) the same thing.
Does that sound right?
Sounds about right, except that I wouldn't call this anything close to a summary of the whole position. Also, compare the status of morality with that of probability (e.g. Probability is Subjectively Objective, Can Counterfactuals Be True?, Math is Subjunctively Objective).
I'm not sure what you see in the distinction between simple preference and complex preference. No matter how simple an imperfect agent is, you face the problem of going from imperfect decision-making to an ideal preference ordering.
I don't mean simple or complicated preferences. I mean a simple mind (perhaps "simple" was a bad choice of terminology). My "simple mind" is a mind that perfectly knows its utility function (and has a well-defined utility function to begin with). It's just an abstraction to better understand where shouldness comes from.