wedrifid comments on Open Thread June 2010, Part 3 - Less Wrong

6 Post author: Kevin 14 June 2010 06:14AM


Comment author: wedrifid 17 June 2010 03:10:21PM *  1 point

Lately I've been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents.

Yes.

For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it?

Yes. (When we say 'rational agent' or 'rational AI' we are usually referring to "instrumental rationality". To a rational agent, words are simply symbols used to manipulate the environment. Speaking the truth and even believing the truth are only loosely related concepts.)

Would a FAI?

Almost certainly, but this may depend somewhat on who exactly it is 'friendly' to and what that person's preferences happen to be.

Comment author: Lonnen 17 June 2010 04:32:29PM 2 points

That agrees with my intuitions. I had been developing a series of ideas around the notion that exploiting biases is sometimes necessary, and then I found:

Eliezer on Informers and Persuaders

I finally note, with regret, that in a world containing Persuaders, it may make sense for a second-order Informer to be deliberately eloquent if the issue has already been obscured by an eloquent Persuader - just exactly as elegant as the previous Persuader, no more, no less. It's a pity that this wonderful excuse exists, but in the real world, well...

It would seem that in trying to defend others against heuristic exploitation it may be more expedient to exploit heuristics yourself.

Comment author: wedrifid 17 June 2010 06:41:33PM 5 points

I'm not sure where Eliezer got the 'just exactly as elegant as the previous Persuader, no more, no less' part from. That seems completely arbitrary, as though the universe somehow decrees that optimal informing strategies must be 'fair'.