
skeptical_lurker comments on Will AGI surprise the world? - Less Wrong Discussion

Post author: lukeprog, 21 June 2014 10:27PM


Comments (129)


Comment author: skeptical_lurker, 22 June 2014 02:36:58PM, 0 points

Also, cooperation seems to be at least a large component of morality, and some believe morality should be derived entirely from game theory.

Comment author: Squark, 22 June 2014 04:00:10PM, 2 points

I think this is a confusion. Game theory is only meaningful after you have specified the utility functions of the players. If those utility functions don't already include caring about other agents, the result is not what I'd call "morality"; it is just cooperation between selfish entities. Sure, the evolutionary reasons for morality have to do with cooperative game theory, but so what? The evolutionary reason for sex is reproduction, yet that doesn't mean we shouldn't have sex with condoms. Morality should not be derived from anything except human brains.
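(Editor's illustration, not part of the original thread.) The distinction Squark draws — cooperation between selfish agents versus caring about others — can be made concrete with the standard iterated Prisoner's Dilemma. In the sketch below, each agent's utility function is purely its own payoff; cooperation nonetheless emerges under a tit-for-tat strategy. The strategy names and payoff values are the conventional textbook ones, chosen here for illustration.

```python
# Iterated Prisoner's Dilemma: cooperation arising between purely
# selfish agents. Standard payoffs (T, R, P, S) = (5, 3, 1, 0).
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation (R, R)
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation (S, T)
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (P, P)
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; return each player's total (selfish) payoff."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each player sees only the opponent's history
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two tit-for-tat players sustain full cooperation: 3 points per round each.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# Against a constant defector, tit-for-tat is exploited only in round one.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

Nothing in either utility function refers to the other agent's welfare, which is Squark's point: the resulting cooperation is strategically rational, not moral in any richer sense.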

Comment author: skeptical_lurker, 22 June 2014 06:16:17PM, 0 points

I think this disagreement is purely a matter of semantics: 'morality' is an umbrella term which is often used to cover several distinct concepts, such as empathy, group allegiance and cooperation. In this case, the AI would be moral according to one dimension of morality, but not the others.