Gunnar_Zarncke comments on The Value Learning Problem - Less Wrong

Post author: So8res 29 January 2015 06:23PM


Comments (37)


Comment author: fubarobfusco 29 January 2015 10:51:25PM 1 point

Even a system smart enough to figure out what was intended is not compelled to act accordingly: human beings, upon learning that natural selection "intended" sex to be pleasurable only for purposes of reproduction, do not thereby conclude that contraceptives are abhorrent.

This seems like a distracting example that is likely to trigger a lot of people's politics-driven reactions.

For instance, it may be misread as saying that humans who don't draw that conclusion are somehow broken.

Comment author: Gunnar_Zarncke 30 January 2015 03:16:53PM 3 points

I was just about to post this quote as a well-chosen example: it uses an easily understood analogy to defuse, in one quick sweep, all those arguments that an AI should be smart enough to know what is 'intended' (one might say yudkowskyesk so).

Comment author: hairyfigment 30 January 2015 06:33:34PM -1 points

*yudkowskily

Comment author: Vulture 05 February 2015 11:51:44PM * 0 points

I think the word Gunnar was going for was "Yudkowskyesquely", unfortunately.