PK2

To clarify my question: what is the point of all this talk about "morality" if it all amounts to "just do what you think is right"? I mean, other than the futility of looking for The True Source of morality outside yourself. I guess I may have answered my own question, if that was the whole point. So now what? How do I know what is moral and what isn't? I can answer the easy questions, but how do I solve the hard ones? I was expecting to get easy answers to moral questions from your theory, Eliezer. I feel cheated now.
What is "morality" for?(not morality) The "morality" concept seems so slippery at this point that it might be better to use other words to more clearly communicate meaning.
@Tiiba I think you hit the nail on the head. That is pretty much my view, but you worded it better than I ever could. There is no The Meta-Morality. There are multiple possible memes (moralities and meta-moralities), and some work better than others at producing civilizations and keeping them from falling apart.
@Eliezer I am very interested in reading your meta-morality theory. Do you think it will be universally compelling to humans, or at least to non-brain-damaged humans? Assuming there are humans out there who would not accept the theory, I am curious how those who do accept it 'should' react to them.
As for myself, I have my own idea of a meta-morality, but it's...
Question for Obert: Suppose there are intelligent aliens in a galaxy far, far away. There is a pretty good chance they will discover math. They might use different symbols and they might represent their data differently, but they will discover math, because the universe pretty much runs on math. To them, 2 + 3 will equal 5. Would they discover morality? Would their 'morality' be the same thing as our 'morality' here? Does morality converge to one thing, no matter where you start from, the way math does?
Maybe it wouldn't be such a bad thing if humanity were overcome by the zombie virus. I mean, look at those skirts; they are very short, very!
It seems to me that the simplest way to solve friendliness is: "OK AI, I'm friendly, so do what I tell you to do and confirm with me before taking any action." It is much simpler to program a goal system that responds to direct commands than to somehow infuse 'friendliness' into the AI. Granted, marketing-wise a 'friendliness'-infused AI sounds better, because it makes those who seek to build such an AI seem altruistic, whereas anyone saying they intend to implement the former seems selfish and power-hungry.
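To make the contrast concrete, here is a minimal toy sketch (my own illustration, not anything from Eliezer's writing) of the direct-command control flow I mean: the system does nothing on its own initiative, takes explicit operator commands, and asks for confirmation before executing each one. The execute() helper is a hypothetical placeholder for whatever the AI would actually do.

```python
def execute(command: str) -> None:
    """Hypothetical stand-in for the AI actually carrying out a command."""
    print(f"Executing: {command}")


def command_loop() -> None:
    """Only act on explicit commands, and confirm each one before acting."""
    while True:
        command = input("Operator command (or 'quit'): ").strip()
        if command == "quit":
            break
        confirmation = input(f"Confirm '{command}'? [y/N]: ").strip().lower()
        if confirmation == "y":
            execute(command)
        else:
            print("Action cancelled; awaiting next command.")


if __name__ == "__main__":
    command_loop()
```

Obviously this toy loop dodges all the hard problems (interpreting commands, unintended consequences, the operator's own fallibility); it is only meant to show how simple the "ask first, then act" goal structure is compared with trying to specify 'friendliness' itself.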