Zack_M_Davis comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong
I can see why it would seem this way to you, but from our perspective, it just looks like people around here tend to have background knowledge that you don't. More specifically: most people here are moral anti-realists, and by rationality we only mean general methods for acquiring accurate world-models and achieving goals. When people with that kind of background are quick to reject claims like "Compassion is a universal moral value," it might superficially seem like they're being arbitrarily dismissive of unfamiliar claims, but we actually think we have strong reasons to rule out such claims. That is: the universe at its most basic level is described by physics, which makes no mention of morality, and it seems like our own moral sensibilities can be entirely explained by contingent evolutionary and cultural forces; therefore, claims about a universal morality are almost certainly false. There might be some sort of game-theoretic reason for agents to pursue the same strategy under some specific conditions---but that's really not the same thing as a universal moral value.
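To make the game-theoretic point concrete, here's a minimal iterated prisoner's dilemma sketch (the payoff matrix is the standard one; the particular strategies and round count are just illustrative assumptions). Reciprocal cooperation does well here for purely strategic reasons - no universal moral value required:

```python
# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy whatever the opponent did last.
    return opponent_moves[-1] if opponent_moves else 'C'

def always_defect(opponent_moves):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    seen_by_a, seen_by_b = [], []      # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(seen_by_a), strat_b(seen_by_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): exploitation is capped
```

Note that the "compassion" tit-for-tat exhibits is an artifact of the payoff structure; change the payoffs and the winning strategy changes with it.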
"Universal values" presumably refers to values the universe will converge on, once living systems have engulfed most of it.
If rerunning the clock produces radically different moralities each time, the relativists would be considered correct.
If rerunning the clock produces highly similar moralities, then the moral objectivists will be able to declare victory.
Gould would no doubt favour the first position - while Conway Morris would be on the side of the objectivists.
I expect that there's a lot of truth on the objectivist side - though perhaps contingency plays some non-trivial role.
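Here's a toy version of the rerun-the-clock experiment (the fitness landscape, the hill-climbing rule, and all the parameters are illustrative assumptions, nothing more): evolve a "value vector" from several random starting points and see whether the endpoints converge or scatter:

```python
import math
import random

def fitness(values):
    # A single-peaked landscape: one attractor at (1.0, -0.5).
    return -((values[0] - 1.0) ** 2 + (values[1] + 0.5) ** 2)

def rerun(seed, steps=2000, noise=0.1):
    rng = random.Random(seed)
    v = [rng.uniform(-5, 5), rng.uniform(-5, 5)]     # contingent starting point
    for _ in range(steps):
        candidate = [v[0] + rng.gauss(0, noise), v[1] + rng.gauss(0, noise)]
        if fitness(candidate) > fitness(v):          # keep beneficial mutations
            v = candidate
    return v

endpoints = [rerun(seed) for seed in range(5)]
spread = max(math.dist(a, b) for a in endpoints for b in endpoints)
print(endpoints)                 # every run lands near (1.0, -0.5)
print(round(spread, 3))          # small spread: convergence, not contingency
```

On a single-peaked landscape the runs converge (the Conway Morris picture); replace fitness() with something rugged or seed-dependent and they scatter instead (the Gould picture).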
The idea that physics makes no mention of morality seems totally and utterly irrelevant to me. Physics makes no mention of convection, diffusion-limited aggregation, or fractal drainage patterns either - yet those things are all universal.
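Diffusion-limited aggregation is a nice case because the whole phenomenon fits in a few lines - here's a minimal on-lattice sketch (the grid size and walker count are arbitrary choices). Nothing in the rules mentions physics, yet the same branching fractal grows in any substrate that runs them:

```python
import random

random.seed(0)
N = 41                                    # grid size (odd, so there's a centre)
cluster = {(N // 2, N // 2)}              # a single seed particle

def touches_cluster(x, y):
    return any(p in cluster
               for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)))

for _ in range(200):                      # release 200 random walkers
    x, y = random.randrange(N), random.randrange(N)
    while (x, y) not in cluster:
        if touches_cluster(x, y):
            cluster.add((x, y))           # the walker sticks and stops diffusing
            break
        dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = (x + dx) % N, (y + dy) % N # random walk on a wrapped grid

for row in range(N):
    print(''.join('#' if (col, row) in cluster else '.' for col in range(N)))
```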
Why should we care about this mere physical fact of which you speak? What has this mere "is" to do with whether "should" is "objective", whatever that last word means (and why should we care about that?)
Where did Tim say that we should?
If it's got nothing to do with shouldness, then how does it determine the truth-value of "moral objectivism"?
Hi, Eli! I'm not sure I can answer directly - here's my closest shot:
If there's a kind of universal moral attractor, then the chances seem pretty good that either our civilisation is en route to it - or else we will be obliterated or assimilated by aliens or other agents as they home in on it.
If it's us who are en route to it, then we (or at least our descendants) will probably be sympathetic to the ideas it represents - since they will be evolved from our own moral systems.
If we get obliterated at the hands of some other agents, then there may not necessarily be much of a link between our values and the ones represented by the universal moral attractor.
Our values might be seen as OK by the rest of the universe - and we fail for other reasons.
Or our morals might not be favoured by the universe - we could be a kind of early negative moral mutation - in which case we would fail because our moral values would prevent us from being successful.
Maybe it turns out that nearly all biological organisms except us prefer to be orgasmium - to bliss out on pure positive reinforcement, as much of it as possible, caretaken by external AIs, until the end. Let this be a fact in some inconvenient possible world. Why does this fact say anything about morality in that inconvenient possible world? Why is it a universal moral attractor? Why not just call it a sad but true attractor in the evolutionary psychology of most aliens?
It's a fact about morality in that world - if we are talking about morality as values - or the study of values - since that's what a whole bunch of creatures value.
Why is it a universal moral attractor? I don't know - this is your hypothetical world, and you haven't told me enough about it to answer questions like that.
Call it other names if you prefer.
What do you mean by "morality"? It obviously has nothing to do with the function I try to compute to figure out what I should be doing.
Definitions 1, 2 and 3 on http://en.wikipedia.org/wiki/Morality all seem OK to me.
I would classify the mapping you use between possible and actual actions as one type of moral system.
Yeah, but Stefan's post was about AI, not about minds that evolved in our universe.
Also, there is a difference between moral universalism and moral objectivism. What your last sentence describes is universalism, while Stefan is talking about objectivism:
"My claim is that compassion is a universal rational moral value. Meaning any sufficiently rational mind will recognize it as such."
Agreed.
Assuming that I'm right about this:
http://alife.co.uk/essays/engineered_future/
...it seems likely that most future agents will be engineered. So, I think we are pretty much talking about the same thing.
Re: universalism vs objectivism - note that he does use the "u" word.
"Universal values" is usually understood by way of an analogy to a universal law of nature. If there are universal values they are universal in the same way f=ma is universal. Importantly this does not mean that everyone at all times will have these values, only that the question of whether or not a person holds the right values can be answered by comparing their values to the "universal values".
There is a separate question about what beliefs about morality people (or more generally, agents) actually hold, and there is another question about what values they will hold if and when their beliefs converge as they engulf the universe. The question of whether or not there are universal values does not traditionally bear on what beliefs people actually hold or the necessity of their holding them. It could be the case that there are universal values and that, by physical necessity, no one ever holds them. Similarly, there could be universal values that are held in some possible worlds and not others. This is all the result of the simple observation that ought cannot be derived from is. In the above comment you conflate about a half dozen distinct theses.
But all those things are pure descriptions. Only moral facts have prescriptive properties, and while it is clear how convection supervenes on quarks, it isn't clear how anything that supervenes on quarks could also tell me what to do. At the very least, if quarks can tell you what to do, it would be weird and spooky. If you hold that morality is only the set of facts that describe people's moral opinions and emotions (as you seem to), then you are a kind of moral anti-realist, likely a subjectivist or non-cognitivist.
Excellent, excellent point Jack.
This is poetry! Hope you don't mind me pasting something here I wrote in another thread:
"With unobjectionable values I mean those that would not automatically and eventually lead to one's extinction. Or more precisely: a utility function becomes irrational when it is intrinsically self limiting in the sense that it will eventually lead to ones inability to generate further utility. Thus my suggested utility function of 'ensure continued co-existence'
This utility function seems to be the only one that does not end in the inevitable termination of the maximizer."
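A toy model of the self-limiting-utility-function claim (the regenerating-stock setup and both policies are my illustrative assumptions, not the original commenter's formalism):

```python
def simulate(policy, steps=50, stock=100.0, growth=0.10):
    total_utility = 0.0
    for _ in range(steps):
        harvest = policy(stock)
        stock -= harvest
        total_utility += harvest          # utility = resource consumed
        stock += stock * growth           # whatever remains regenerates
    return total_utility, stock

greedy = lambda stock: stock              # maximize immediate utility: take it all
sustainable = lambda stock: stock * 0.08  # harvest below the regeneration rate

print(simulate(greedy))       # (100.0, 0.0): one payoff, then nothing, forever
print(simulate(sustainable))  # utility keeps accruing and the stock still grows
```

The greedy policy collects its one payoff and can never generate utility again; the policy that preserves what it co-exists with keeps accruing utility indefinitely - which is the quoted point, in miniature.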