pwno comments on Lie to me? - Less Wrong

-1 Post author: pwno 24 June 2009 09:56PM


Comment author: pwno 24 June 2009 11:03:08PM 0 points

To a person who values the truth, knowledge is a benefit and will therefore be part of the AI's 'net benefit' calculation.

But knowledge of the truth has a finite value. What if the AI believed that the benefit of a lie would outweigh a truth-seeker's cost of being lied to?

So the question is, would any rational truth-seeker choose to be told only the truth by the AI?

Comment author: Furcas 25 June 2009 12:20:43AM 0 points

A person doesn't have to 'infinitely value' truth to always prefer the truth to a lie. The importance put on truth merely has to be greater than the importance put on anything else.

That said, if the question is whether any human, past or present, has valued truth more than anything else, the answer is almost certainly no. For example, I care about the truth a lot, but if I were given the choice between learning a single, randomly chosen fact about the universe and being given a million dollars, I'd pick the cash without much hesitation.

However, as Eliezer has said many times, human minds represent only a tiny fraction of all possible minds. A mind that puts truth above everything else is certainly possible, even if it doesn't exist yet.

Comment author: pwno 25 June 2009 01:08:02AM 0 points

Now that we know we've programmed an AI that may lie to us, we will rationally be skeptical of whatever it says, which is not ideal. It sounds like the AI's programmer would have to cover up the fact that the AI does not always tell the truth.