TimS comments on Less Wrong views on morality? - Less Wrong Discussion
Hank, why would Clippy believe that maximizing paperclips is based on something external to its own mind? Clippy could just as easily be programmed to desire staples, and Clippy is probably intelligent enough to know that.
That said, I think Jack's general point about the relationship between ethics and meta-ethics is probably right.
Presumably Clippy has a hard-coded utility function in his source code: a real set of 0s and 1s sitting on a disk somewhere, and we could open the source file and investigate the code.
Clippy's value system is a specific, objective, pre-programmed utility function that's inherent in his design and independent of (and prior to) any knowledge or other cognitive content Clippy eventually gains or invents.
And yes, it could have been easily changed (until such a time as Clippy is all grown up and would prevent such change) to make him desire staples. But then that's a different design and we'd probably better call him Stapley at that point.
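Roughly, something like this. It's only a toy sketch of the idea that the value system is a fixed function baked into the source, prior to anything the agent later learns; the names here (WorldState, paperclip_count, etc.) are made up for illustration, not Clippy's actual code:

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    paperclip_count: int
    staple_count: int

def utility(state: WorldState) -> float:
    """Clippy's hard-coded terminal values: more paperclips is better."""
    return float(state.paperclip_count)

# "Stapley" would be the same design with one line changed:
#     return float(state.staple_count)
```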
Ideally, one would like objective facts to be universally compelling. If Clippy shows its source code to me, or to another AGI, neither of us would update in favor of believing that paper-clip maximizing is an appropriate terminal value.
Ah, no, I don't mean "objective morality" in the sense of something like a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of Good or any such nonsense. I just mean what I said above, something in reality that's mind-independent and can be investigated scientifically - a definite "is" from which we can make true "ought" statements relative to that "is".
See drethelin's comment below.
Hank, I definitely don't think there's any activity that (1) can reasonably be labeled "scientific investigation" and (2) can solve the is-ought divide.
I didn't think you would :) I'm curious about the consensus on LW, though. But incidentally, what do you think of Thou Art Godshatter?
First, that essay is aimed primarily at those who think dualism is required in order to talk about morality at all - obviously that's not the discussion we are having.
Second, the issue is not whether there are (1) universal (2) morally relevant (3) human preferences that (4) have been created by evolution. The answer to that question is yes (e.g. hunger, sexual desire). But that alone does not show that there is a universal way for humans to resolve moral dilemmas.
If we study and quantify the godshatter to the point that we can precisely describe "human nature," we aren't guaranteed in advance that appeals to human nature will resolve every moral dilemma. And if reference to human nature doesn't, then evolutionary preference doesn't establish moral realism.
I'm not sure why "universality" is really that important here. Suppose we are just talking about one person: why can't they reduce their value judgments down to their own precisely described nature to resolve every moral dilemma they face? With a read-out of their actual terminal values defined by the godshatter, they can employ the usual consequentialist expected utility calculus to solve any question, in principle.
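By "the usual expected utility calculus" I mean nothing fancier than the following sketch, which assumes the hard part is already done, i.e. that the godshatter read-out gives you a utility function over outcomes. All the names and numbers are illustrative:

```python
from typing import Callable, Dict, Hashable

Outcome = Hashable

def expected_utility(outcome_probs: Dict[Outcome, float],
                     utility: Callable[[Outcome], float]) -> float:
    """Probability-weighted average of utility over possible outcomes."""
    return sum(p * utility(o) for o, p in outcome_probs.items())

def best_action(actions: Dict[str, Dict[Outcome, float]],
                utility: Callable[[Outcome], float]) -> str:
    """Pick the action whose outcome distribution maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# Example: terminal values read off as utilities over two outcomes.
values = {"keep_promise": 1.0, "break_promise": -2.0}
print(best_action(
    {"act_honestly":    {"keep_promise": 0.9, "break_promise": 0.1},
     "act_dishonestly": {"keep_promise": 0.2, "break_promise": 0.8}},
    lambda o: values.get(o, 0.0)))  # -> "act_honestly"
```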
This says it better than I could.
But that's just a confusion between two different meanings of "objective vs. subjective".
People apparently tend to interpret "objective" as something "universal" in the sense of some metaphysical Form of Good, as opposed to "subjective" meaning "relative to a person". That distinction is completely stupid and wouldn't even occur to me.
I'm using it in the sense of something relative to a person but still "a fact of reality able to be investigated by science, independent of and prior to any of the mind's later acquisition of knowledge/content", versus "something that is not an independent/prior fact of reality, but rather some later invention of the mind".
Clippy's code is his mind.
No. The physical instantiation of a utility function is not an argument for moral realism. On the complete contrary, defining moral actions as "whatever an agent's utility function says" is straightforward, definitional, no-bones-about-it moral subjectivism.
Put it this way: the paperclip maximizer is not going to approve of your behavior.