From the last thread:
From Costanza's original thread (entire text):
"This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well. Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent. If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant."
Meta:
- How often should these be made? I think one every three months is the correct frequency.
- Costanza made the original thread, but I am OpenThreadGuy. I am therefore not only entitled but required to post this in his stead. But I got his permission anyway.
Meta:
- I still haven't figured out a satisfactory answer to the previous meta question of how often these threads should be made. It was requested that I make a new one, so I did.
- I promise I won't quote the entire previous threads from now on. Blockquoting in articles only goes one level deep, anyway.
People can hold different moral views. Sometimes these views are opposed, and any compromise would be called immoral by at least one of them. Any AI that enforced such a compromise would be called unFriendly by at least one of them.
Even for a moral realist (and I don't think well of that position), the above remains true, because people demonstrably have irreconcilably different moral views. If you're a moral realist, you have the choice of:

1. Building an AI that follows the moral facts, ignoring everyone's actual moral feelings wherever they disagree with those facts; or
2. Building an AI that follows some compromise between people's actual moral views, just as an anti-realist would.
If you're a moral anti-realist, you can only choose 2, because no moral truth exists. That's the only difference stemming from being a moral realist or anti-realist.
Does this mean that a Friendly-to-everyone AI is impossible under moral anti-realism? Certainly, because people have fundamental moral disagreements. But moral realism doesn't help! It just adds the option of following some "moral facts" that some or all humans disagree with, which is no better in terms of Friendliness than the existing options. (If all humans agreed with some set of purported moral facts, people wouldn't have needed to invent the concept of moral facts in the first place.)
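To make the claim above concrete, here is a minimal toy model (my own illustration, not from the thread): treat each moral view as a scoring function over candidate AI policies, where a negative score means that view calls the policy immoral. The names `view_a`, `view_b`, and the one-dimensional policy scale from 0.0 to 1.0 are all assumptions of the sketch. When two views are directly opposed, every policy, including every compromise strictly between the extremes, scores negative under at least one view:

```python
# Toy sketch: opposed moral views as scoring functions over a policy in [0, 1].
# A negative score means the view's holder calls the policy immoral.

def view_a(policy: float) -> float:
    # View A: the action should be done fully; policy = 1.0 is ideal.
    return policy - 1.0  # negative for any policy below 1.0

def view_b(policy: float) -> float:
    # View B: the action should not be done at all; policy = 0.0 is ideal.
    return -policy  # negative for any policy above 0.0

def called_immoral_by_someone(policy: float) -> bool:
    # True if at least one view scores this policy as immoral.
    return any(score(policy) < 0 for score in (view_a, view_b))

# Every candidate policy, extremes and compromises alike, is labeled
# immoral by at least one of the two views:
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p, called_immoral_by_someone(p))  # prints True for every p
```

The point of the sketch is just that no choice of policy escapes the complaint: an AI enforcing any of these policies would be called unFriendly by whichever view scored it negative, and adding a set of "moral facts" only adds one more scoring function to disagree with.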
The existence of moral disagreement, standing alone, is not enough to show moral realism is false. After all, scientific disagreement doesn't show physical realism is false.
Further, I am confused by your portrayal of moral realists. Presumably, the reality of moral facts would show that people acting contrary to those facts were making a mistake, much like people who thought "Objects in motion will tend to come to a stop" were making a mistake. It seems strange to call correcting that mistake "ignoring everyone's actual scientific feelings."