Kaj_Sotala comments on Thoughts on moral intuitions - LessWrong

39 Post author: Kaj_Sotala 30 June 2012 06:01AM


Comments (199)


Comment author: Kaj_Sotala 29 June 2012 10:07:42AM 0 points [-]

Thanks, I edited the sentence to be clearer on that: "...that while the upper classes in both Brazil and USA were likely to find violations of harmless taboos to be violations of social convention, lower classes in both countries were more likely to find them violations of absolute moral codes."

Comment author: mwengler 02 July 2012 09:33:47AM *  1 point [-]

That's a fun result.

Years ago, I had a "spiritual person" telling me about how god could help me if I prayed to him. Wishing to make a point by metaphor, I told him "it seems to me that god is just Santa Claus for grown-ups." "Yes," he responded, "Santa Claus gives kids what they want; god gives you what you need."

If only clever repartee established truth, then Stephen Colbert would be the last president we would ever need.

If the smarter you get, the more things you think are social convention and the fewer you think are absolute morality, then what is our self-improving AI going to eventually think about the CEV we coded in back when he was but an egg?

Comment author: wedrifid 02 July 2012 09:54:06AM *  6 points [-]

If the smarter you get, the more things you think are social convention and the fewer you think are absolute morality, then what is our self-improving AI going to eventually think about the CEV we coded in back when he was but an egg?

It isn't going to think the CEV is an absolute morality - it'll just keep doing what it is programmed to do, because that is what it does. If the programming is correct, it'll keep implementing CEV. If it was incorrect, then we'll probably all die.

The relevance to 'absolute morality' here is that if the programmers happened to believe there was an absolute morality and tried to program the AI to follow that, then they would fail, potentially catastrophically.