Nisan comments on Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model - Less Wrong

Post author: Yvain 04 August 2010 09:16AM




Comment author: xamdam 04 August 2010 03:11:28PM, 4 points

Interesting theory.

  • I tend to agree with Tim Tyler that the "common" interpretation of consciousness is simpler and that the signaling machinery is unnecessary. I realize you are trying to broaden the scope of the theory, but I am not yet convinced that the cure is better than the disease.

  • While I can see why an ape trying to break into "polite society" might want to gain the faculty you describe, the apes created "polite society" in the first place, so I do not see a plausible way out of the catch-22 (perhaps it's a lack of imagination on my part).

  • You raise the point that U cannot be a complete hypocrite and therefore "invents" C, who is good at lying to itself. But wouldn't others notice that C is lying to itself and remains largely a hypocrite? If the achievement of C is being more cooperative, why doesn't U just skip the BS and become more cooperative directly? (I actually think this last point might be answerable; the key observation is that C operates on a logical, verbal level. This allows it to be predictably consistent in certain specific situations, such as "if friend, do this", which is very important in solving the kinds of game-theoretic scenarios you describe. Handing cooperation over to C, rather than "making U more cooperative", creates consistency, which is essential. I think you might have hinted at this.)

ETA: the theory might be more palatable if the issues of consciousness and the "public relations function" were separated along byrnema's lines (but perhaps more clearly).

Comment author: Nisan 05 August 2010 10:37:41PM, 0 points

Regarding your latter two points: the idea of signalling games is that, as long as C has some influence on your behavior, others can deduce from your apparent trustworthiness, altruism, etc., that you are at least somewhat trustworthy, etc. If you did away with C and simply made your U more trustworthy, you would seem less trustworthy than someone with a C: other agents in the signalling game would assume that you have a C and that your U is unusually untrustworthy. So there's an incentive to be partially hypocritical.
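To make the inference concrete, here is a toy numerical sketch of my own (the weights, the `infer_u` helper, and the specific numbers are all my illustrative assumptions, not anything from the post): observers assume everyone's public behavior is a fixed mix of a conscious PR module C and an unconscious module U, and they back out an estimate of U from observed behavior.

```python
# Toy model of the signalling argument. All parameters below are
# illustrative assumptions for this sketch.

C_ASSUMED = 0.9   # trustworthiness observers attribute to everyone's C
W = 0.5           # influence observers assume C has on public behavior

def infer_u(observed_behavior):
    """Observer's estimate of U, assuming behavior = W*C + (1-W)*U."""
    return (observed_behavior - W * C_ASSUMED) / (1 - W)

# Agent A: partial hypocrite with a C (actual U = 0.3, C = 0.9).
behavior_a = W * 0.9 + (1 - W) * 0.3   # public behavior = 0.6

# Agent B: did away with C and honestly raised U to 0.6, so B's
# public behavior is also 0.6.
behavior_b = 0.6

# Both display identical public behavior, so observers infer the
# same underlying U for both:
print(infer_u(behavior_a))  # 0.3 -- correct for A
print(infer_u(behavior_b))  # 0.3 -- but B's actual U is 0.6
```

In this toy model B's genuinely more cooperative U buys no extra reputation: observers credit the good behavior to a C that B doesn't have, and conclude B's U is unusually untrustworthy. That underestimate of how B will act when C-style PR doesn't apply is the incentive Nisan describes for keeping a partially hypocritical C around.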