From the last thread:
From Costanza's original thread (entire text):
"This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well. Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent. If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant."
Meta:
- How often should these be made? I think one every three months is the correct frequency.
- Costanza made the original thread, but I am OpenThreadGuy. I am therefore not only entitled but required to post this in his stead. But I got his permission anyway.
Meta:
- I still haven't figured out a satisfactory answer to the previous meta question of how often these threads should be made. It was requested that I make a new one, so I did.
- I promise I won't quote the entire previous threads from now on. Blockquoting in articles only goes one level deep, anyway.
That seems likely. If moral realists think morality is a one-place word, and anti-realists think it's a two-place word, we would be better served by using two distinct words.
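For what it's worth, the arity distinction can be pictured as a difference in type signature. This is purely illustrative; the names (Action, Agent, fix_evaluator) are mine, not anything from the sequences:

```python
from typing import Callable

Action = str
Agent = str

# One-place reading (the realist picture): a single goodness predicate
# over actions, the same function no matter who is asking.
OnePlaceMorality = Callable[[Action], bool]

# Two-place reading (the anti-realist picture): goodness is always
# relative to some evaluator; the second argument carries real information.
TwoPlaceMorality = Callable[[Action, Agent], bool]

def fix_evaluator(morality: TwoPlaceMorality, agent: Agent) -> OnePlaceMorality:
    """Currying away the evaluator makes a two-place morality look one-place,
    which may be one reason the two usages are easy to conflate."""
    return lambda action: morality(action, agent)
```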
It is somewhat unclear to me what moral realists are thinking of, or claiming, about whatever it is they call morality. (Even after taking into account that different people identified as moral realists do not all agree on the subject.)
I defined 'Friendliness (to X)' as 'behaving towards X in the way that is best for X in some implied sense'. Obviously there is no Friendliness towards everyone, but there might be Friendliness towards humans: then "Friendliness realism" (my coining) is the belief that there is a single Friendly-towards-humans behavior that will in fact be Friendly towards all humans. Friendliness anti-realism, by contrast, is the belief that no one behavior would satisfy all humans: any behavior would inevitably be unFriendly towards some of them.
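Stated a bit more formally (my own rendering of the above, with Friendly(b, h) left as an undefined primitive over behaviors b and humans h):

$$\text{Friendliness realism:}\quad \exists\, b\ \forall\, h \in \text{Humans}:\ \mathrm{Friendly}(b, h)$$
$$\text{Friendliness anti-realism:}\quad \forall\, b\ \exists\, h \in \text{Humans}:\ \neg\,\mathrm{Friendly}(b, h)$$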
Clearly this discussion assumes many givens. Most importantly: 1) what exactly counts as being Friendly towards someone? (Are we utilitarian? What kind? Must we agree with the target human as to what is Friendly towards them? If we influence them to come to like us, when is that allowed?) 2) What is the set of 'all humans'? Do past, distant, future expected, or entirely hypothetical people count? What is the value of creating new people? Etc.
My position is that: 1) for most common assumed answers to these questions, I am a "Friendliness anti-realist"; I do not believe any one behavior by a superpowerful universe-optimizing AI would count as Friendliness towards all humans at once. And 2), insofar as I have seen moral realism explained, it seems to me to be incompatible with Friendliness realism. But it's possible some people mean something entirely different by "morals" and by "moral realism" than what I've read.
That's a tautology: yes, I would. But the assumption is not valid.
Even if you assume there exist objective moral facts (whatever you take that to mean), it does not follow that you would be able to convince other people that they are true moral facts! I believe it is extremely likely you would not be able to convince them: even today, most people in the world seem to be moral realists (mostly religious), yet they hold widely differing moral beliefs, and when they convert to another set of beliefs it is almost never because of rational argument.
It would be nice to live in a world where you could start from the premise that "people believe that there are objective moral facts and know the content of those facts". But in practice we, and any future FAI, will live in a world where most people will reject mere verbal arguments for new morals that contradict their current ones.