RobinZ comments on A Less Wrong singularity article? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Correct. I'm a moral cognitivist; "should" statements have truth-conditions. It's just that very few possible minds care whether should-statements are true or not; most possible minds care about whether alien statements (like "leads-to-maximum-paperclips") are true or not. They would agree with us on what should be done; they just wouldn't care, because they aren't built to do what they should. They would similarly agree with us that their morals are pointless, but would be concerned with whether their morals are justified-by-paperclip-production, not whether their morals are pointless. And under ordinary circumstances, of course, they would never formulate - let alone bother to compute - the function we name "should" (or the closely related functions "justifiable" or "arbitrary").
Am I right in interpreting the bulk of the thread following this comment (excepting perhaps the FAI derail) as a dispute on the definition of "should"?
Yes, we're disputing definitions, which tends to become pointless. However, we can't seem to get away from using the word "should", so we might as well get it pinned down to something we can agree upon.
I think you are right.
The dispute also serves as a signal of what parts of the disputants' personal morality probably include. This fits the practical purpose that the concept 'should' serves in general. Given the mission Eliezer has chosen, this kind of signalling is a matter of life and death in the same way it would have been in our environment of evolutionary adaptedness: if people objected strongly enough to Eliezer's values, they would kill him rather than let him complete an AI.