Here is our monthly place to discuss Less Wrong topics that have not appeared in recent posts.
(Carl's open thread for March was only a week ago or thereabouts, but if we're having these monthly then I think it's better for them to appear near -- ideally at -- the start of each month, to make it that little bit easier to find something when you can remember roughly when it was posted. The fact that that open thread has had 69 comments in that time seems like good evidence that "almost anyone can post articles" is not sufficient reason for not bothering with open threads.)
I have a question for Eliezer. I went back and reread your sequence on metaethics, and I was struck by the amount of confusion in the comments, so I want to make sure I understood you correctly. Rereading didn't change my interpretation, but I'm still unsure. So, does this summarize your position accurately:
A simple mind has a bunch of terminal values (or maybe just one) summarized in a utility function. For this mind, morality, or rather not morality but the thing it has which is analogous to morality in humans (depending on how you define "morality"), is summed up in this utility function. This is the only source of shouldness for that simple mind.
For humans, the situation is more complex. We have preferences which are like a utility function, but aren't one, because we aren't expected utility maximizers. Moreover, these preferences change depending on a number of factors. But this isn't the source of shouldness we are looking for. Buried deep in the human mind is a legitimate utility function, or at least something like one, which summarizes that human's terminal values and thus provides that source of shouldness. This utility function is very hard to discover because of human psychology, but it exists. The preference set of any given human is an approximation of that human's utility function (though not necessarily a good one), subject, of course, to the many biases humans are prone to.
The final essential point is that, due to the psychological unity of mankind, different people's utility functions are likely to be very similar, if not the same, so when we call something "right" or "moral" we are all referring to (nearly) the same thing.
Does that sound right?
No. It's more that if you extrapolate out the preferences we already have, asking what we would prefer if we had time for our chaotic preferences to resolve themselves, then you end up with a superior sort of shouldness to which our present preferences might well defer. Sort of like if you knew that your future self would be a vegetarian, you might regar...