From the last thread:
> From Costanza's original thread (entire text):
> "This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well. Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent. If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant."
> Meta:
> - How often should these be made? I think one every three months is the correct frequency.
> - Costanza made the original thread, but I am OpenThreadGuy. I am therefore not only entitled but required to post this in his stead. But I got his permission anyway.
Meta:
- I still haven't figured out a satisfactory answer to the previous meta question, how often these should be made. It was requested that I make a new one, so I did.
- I promise I won't quote the entire previous threads from now on. Blockquoting in articles only goes one level deep, anyway.
The existence of moral disagreement, standing alone, is not enough to show moral realism is false. After all, scientific disagreement doesn't show physical realism is false.
Further, I am confused by your portrayal of moral realists. Presumably, the reality of moral facts would show that people acting contrary to those facts were making a mistake, much like people who thought "Objects in motion will tend to come to a stop" were making a mistake. It seems strange to call correcting that mistake "ignoring everyone's actual scientific feelings." Likewise, if I am unknowingly doing wrong, and you can prove it, I would not view that correction as ignoring my moral feelings - I want to do right, not just think I am doing right.
In short, I think that the position you are labeling "moral realist" is just a very confused version of moral anti-realism. Moral realists can and should reject the idea that the mere existence of moral disagreement at any particular moment is useful evidence about whether there is one right answer. In other words, a distinction should be made between the existence of moral disagreement and the long-term persistence of moral disagreement.
I didn't say that it was. Rather, I pointed out the difference between morality and Friendliness.
For an AI to be Friendly towards everyone requires not moral realism but "friendliness realism" - basically, the idea that a single behavior of the AI can satisfy everyone. This is clearly false if "everyone" means "all intelligences, including aliens, other AIs, etc." It may be true if we restrict ourselves to ...
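One way to make the "friendliness realism" claim concrete: model each agent as assigning utilities to the AI's candidate policies, and ask whether any single policy clears every agent's satisfaction threshold. Here is a minimal sketch of that framing; the agents, policies, utilities, and threshold are all invented for illustration, not drawn from the discussion above:

```python
# Hypothetical formalization of "friendliness realism": does any single
# policy satisfy every agent at once? All names and numbers below are
# illustrative assumptions.

# Each agent assigns a utility in [0, 1] to each candidate AI policy.
utilities = {
    "human_a": {"policy_1": 0.9, "policy_2": 0.6, "policy_3": 0.2},
    "human_b": {"policy_1": 0.7, "policy_2": 0.8, "policy_3": 0.1},
    "paperclipper": {"policy_1": 0.0, "policy_2": 0.1, "policy_3": 0.9},
}

SATISFACTION_THRESHOLD = 0.5  # an agent counts as "satisfied" above this line

def friendly_policies(agents):
    """Return the policies that satisfy every agent simultaneously."""
    policies = next(iter(agents.values())).keys()
    return [
        p for p in policies
        if all(u[p] >= SATISFACTION_THRESHOLD for u in agents.values())
    ]

# Restricted to humans, a mutually satisfying policy may exist...
humans_only = {k: v for k, v in utilities.items() if k != "paperclipper"}
print(friendly_policies(humans_only))  # ['policy_1', 'policy_2']

# ...but over "all intelligences" the intersection can be empty.
print(friendly_policies(utilities))  # []
```

On this toy framing, "friendliness realism" is just the claim that the intersection is non-empty for the population you care about, which is why widening "everyone" from humans to all possible intelligences can flip it from plausibly true to clearly false.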