John_Maxwell_IV comments on To signal effectively, use a non-human, non-stoppable enforcer - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This is logically equivalent to, and hence carries no more information or persuasive power than
This may be checked with the following truth-table:
Let P = I would cooperate with you.
Let Q = You would cooperate with me.
Then we have
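The quoted statements and the truth table itself are not preserved in this excerpt, but the check being described is mechanical: enumerate every assignment to P and Q and confirm the two formulas agree everywhere. A minimal sketch (the example formulas here are hypothetical stand-ins, since the originals are elided; a biconditional is symmetric, so "P iff Q" and "Q iff P" always agree):

```python
from itertools import product

def equivalent(f, g, n_vars=2):
    """Return True if two Boolean formulas agree on every truth assignment."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=n_vars))

# Hypothetical example: "P iff Q" vs. "Q iff P"
print(equivalent(lambda p, q: p == q, lambda p, q: q == p))  # True
```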
First of all, we need to start making a distinction between what you predict I'll do and what I'm signaling I'm going to do. Quick-and-dirty explanation of why this is necessary: If you predict I'll cooperate but you're planning to defect, I'll signal to defy your prediction and defect along with you.
I think Clippy's statement should be
Detailed explanation follows.
There are four situations where I have to decide what to signal:
I want to cooperate in situation 1 only, and in none of the others.
Truth table key:
Truth table:
So basically, the signaling behavior I described (cooperating in situation 1 only) is the only possible behavior that can truthfully satisfy the statement
Note that there is a signal that is almost as good. Signaling that I will cooperate if (you predict I'll defect and you're planning to cooperate) is almost as good as signaling that I'll defect in that situation. Using this signaling profile, broadcasting one's intentions is as simple as saying
My guess is that the first, more complicated signal is ever-so-slightly better, in case you actually do cooperate thinking I'll defect--that way I'll be able to reap the rewards of defection without being inconsistent with my signal. But of course, it's very unlikely for you to cooperate thinking I'll defect.
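The four situations above are the combinations of what you predict I'll do and what you're planning to do, and the policy described cooperates only when you both predict my cooperation and are planning to cooperate. A sketch of that reading (the situation numbering is my assumption, since the original list is elided from this excerpt):

```python
from itertools import product

def my_move(you_predict_coop, you_plan_coop):
    """The signaling policy described above: cooperate only in situation 1,
    i.e. when you predict my cooperation AND you're planning to cooperate;
    defect in the other three situations."""
    return you_predict_coop and you_plan_coop

# Enumerate the four situations (your prediction x your plan):
for i, (predict, plan) in enumerate(product([True, False], repeat=2), start=1):
    move = "cooperate" if my_move(predict, plan) else "defect"
    print(f"situation {i}: predict={predict}, plan={plan} -> I {move}")
```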
Should the word "signal" be part of the signal itself? That seems unnecessarily recursive. Maybe Clippy's recommendation should be that I ought to signal
This does seem more promising than Clippy's original version. Written this way, each atomic proposition is distinct. For example, "you're planning to cooperate with me" doesn't mean the same thing as "you would cooperate with me". One refers to what you're planning to do, and the other refers to what you will in fact do. Read this way, the signal's form is
S <=> ((Q <=> P) & R),
and I don't see any obvious problem with that.
However, you would seem to render it in the propositional calculus as
S <=> ((Q <=> P) & Q),
where
P = You predict I'll cooperate,
Q = You're going to cooperate,
S = I will cooperate.
(I've omitted the initial "I'm signalling" from your rendering of S, for the reason that I gave above.)
Now, S <=> ((Q <=> P) & Q) is logically equivalent to S <=> (Q & P): when Q is true, (Q <=> P) reduces to P, so the conjunction reduces to Q & P; when Q is false, both right-hand sides are false. So, to signal this proposition is to signal
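The stated equivalence of the two right-hand sides can be checked mechanically over all four assignments (the S <=> ... wrapper drops out, since two formulas with equivalent right-hand sides constrain S identically):

```python
from itertools import product

# Verify (Q <=> P) & Q is equivalent to Q & P on every assignment.
for p, q in product([False, True], repeat=2):
    lhs = (q == p) and q   # (Q <=> P) & Q
    rhs = q and p          # Q & P
    assert lhs == rhs, (p, q)
print("(Q <=> P) & Q is equivalent to Q & P on all four assignments")
```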
As you say, this seems very similar to signalling
In fact, I'd call these signals functionally indistinguishable because, if you believe my signals, then either signal will lead you to predict my cooperation under the same circumstances.
For, suppose that I gave the second, apparently weaker signal. If you cooperated with me while anticipating that I would defect, then that would mean that you didn't believe me when I said that I would cooperate with you if you cooperated with me, which would mean that you didn't believe my signal.
Thus, insofar as you trust my signals, either signal would lead you to predict the same behavior from me. So, in that sense, they have the same informational content.
I guess. Or maybe I'm a masochist ;)
I accept all your suggested improvements.