
Comment author: PK2 06 August 2008 03:30:21AM -1 points [-]

It seems to me that the simplest way to solve friendliness is: "OK AI, I'm friendly, so do what I tell you to do and confirm with me before taking any action." It is much simpler to program a goal system that responds to direct commands than to somehow infuse 'friendliness' into the AI. Granted, marketing-wise a 'friendliness'-infused AI sounds better, because it makes those who seek to build such an AI seem altruistic; anyone saying they intend to implement the former seems selfish and power-hungry.

Comment author: PK2 29 July 2008 09:21:44PM 0 points [-]

To clarify my question: what is the point of all this talk about "morality" if it all amounts to "just do what you think is right"? I mean, other than showing the futility of looking for The True Source of morality outside yourself. I guess I may have answered my own question, if that was the whole point. So now what? How do I know what is moral and what isn't? I can answer the easy questions, but how do I solve the hard ones? I was expecting easy answers to moral questions from your theory, Eliezer. I feel cheated now.

Comment author: PK2 29 July 2008 09:05:46PM 3 points [-]

What is "morality" for? (The concept in quotes, not morality itself.) The "morality" concept seems so slippery at this point that it might be better to use other words to communicate meaning more clearly.

Comment author: PK2 28 July 2008 03:30:19PM 0 points [-]

@Tiiba I think you hit the nail on the head. That is pretty much my view, but you worded it better than I ever could. There is no The Meta-Morality. There are multiple possible memes (moralities and meta-moralities), and some work better than others at producing civilizations and keeping them from falling apart.

@Eliezer I am very interested in reading your meta-morality theory. Do you think it will be universally compelling to humans, or at least to non-brain-damaged humans? Assuming there are humans out there who would not accept the theory, I am curious how those who do accept it 'should' react to them.

As for myself, I have my own idea of a meta-morality, but it's kind of rough at the moment. The gist of it involves bubbles. The basic bubble is the individual; then individual bubbles come together to form a new bubble containing the previous ones: families, etc., on up to the country bubbles and the world bubble. Any bubble can run under its own rules as long as it doesn't interfere with other bubbles. If there is interference, the smaller bubbles usually have priority over their own contents. So, for example, no non-consensual violence, because individual bubbles have priority when it comes to their own bodies (the contents of individual bubbles), unless violence is the only way to prevent them from harming other individuals. Private gay stuff between two consenting adults is OK, because it's two individual bubbles coming together to make a third bubble, and they have more say about their rules than anyone on the outside. Countries can have their own laws and rules, but they may not hold or harm any smaller bubbles within them; at most they could expel them. Yeah, it's still kind of rough. I dreamed up this system with the idea that a centralized superintelligence would be enforcing the rules; it's probably not feasible without one. If this seems incomprehensible, just ignore this paragraph.
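The bubble scheme above is essentially a tree of nested scopes with an innermost-wins precedence rule. A minimal sketch, purely illustrative: the bubble names, topics, and rules below are hypothetical examples I made up, not anything from the comment itself.

```python
class Bubble:
    """A nested scope in the 'bubble' scheme: individual -> family ->
    country -> world. Each bubble may set rules on topics; the innermost
    bubble that has a rule on a topic wins, so smaller bubbles have
    priority over their own contents."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent      # the enclosing (larger) bubble, if any
        self.rules = {}           # topic -> rule chosen by this bubble

    def set_rule(self, topic, rule):
        self.rules[topic] = rule

    def rule_for(self, topic):
        # Walk outward from this bubble toward the world bubble,
        # returning the first (innermost) rule found.
        bubble = self
        while bubble is not None:
            if topic in bubble.rules:
                return bubble.name, bubble.rules[topic]
            bubble = bubble.parent
        return None, None


# Hypothetical example hierarchy and rules:
world = Bubble("world")
country = Bubble("country", parent=world)
alice = Bubble("alice", parent=country)

world.set_rule("violence", "forbidden without consent")
country.set_rule("tax rate", "20%")
alice.set_rule("own body", "alice decides")

# Alice's own rule overrides anything the country might say about her body;
# topics she has no rule on fall back to the enclosing bubbles.
print(alice.rule_for("own body"))   # ('alice', 'alice decides')
print(alice.rule_for("tax rate"))   # ('country', '20%')
print(alice.rule_for("violence"))   # ('world', 'forbidden without consent')
```

This only captures the precedence idea, not the harder part of the proposal (detecting "interference" between bubbles, which is the point where the centralized enforcer would be needed).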

Comment author: PK2 26 July 2008 02:22:03AM 1 point [-]

Question for Obert: Suppose there are intelligent aliens in a galaxy far, far away. There is a pretty good chance they will discover math. They might use different symbols and they might represent their data differently, but they will discover math, because the universe pretty much runs on math. To them, 2 + 3 will equal 5. Would they discover morality? Would their 'morality' be the same thing as our 'morality' here? Does morality converge into one thing, like math does, no matter where you start from?