
TheOtherDave comments on The curse of identity - Less Wrong

125 Post author: Kaj_Sotala 17 November 2011 07:28PM




Comment author: TheOtherDave 17 November 2011 03:04:18PM 1 point

By "caring about truth" here do you mean wanting systems to make explicit utterances that accurately reflect their actual motives? E.g., if X is a chess-playing AI that doesn't talk about what it wants at all, just plays chess, would a person who "cares about truth" would also be motivated to give X the ability and inclination to talk about its goals (and do so accurately)?

Or wanting systems not to make explicit utterances that inaccurately reflect their actual motives? E.g., would a person who "cares about truth" also be motivated to remove muflax's AI's ability to report on its goals at all? (This would also prevent it from winning Diplomacy games, but we've already stipulated that isn't a showstopper.)

Comment author: Oscar_Cunningham 17 November 2011 06:38:35PM 0 points

I intended both (i.e., that they want accurate statements to be uttered and no inaccurate ones), but the distinction isn't important to my argument, which was just that they want what they want.