Tyrrell_McAllister comments on To signal effectively, use a non-human, non-stoppable enforcer - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Actually, P <=> (Q <=> P) and Q are the same in this respect (being logically equivalent, and so the same in all functional respects).
If Party 1 believes that Q, then Party 1 believes that Party 2 would cooperate. And if Party 2 believes that Q, then, "from that party's standpoint", Party 2 believes that Party 1 would cooperate. Thus, in exactly the same sense that you meant, we again have that "the outcome wi[ll] be P & Q."
But "I" cannot set the value of P <=> (Q <=> P). As my truth-table showed, the value of P <=> (Q <=> P) depends only on the value of Q, and not on the value of P. Since, as you say, I cannot set the value of Q, it follows that I cannot set the value of P <=> (Q <=> P).
Indeed, it does so reduce because the first conjunct is equivalent to Q, while the second conjunct is equivalent to P.
It is logically equivalent, but it is not equivalent decision-theoretically. Setting your opponent's actions is not an option.
I can set P. I can set P conditional on Q. I can set P conditional on Q's conditionality on P. But I can't choose Q as my decision theory.
A promise to predicate my actions on your actions' predication on my actions is not the same as a promise for you to do an action (whatever that would mean).
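To make the distinction concrete, here is a toy sketch (my own construction, not a formalism from the thread): a player can choose its own policy P, including a policy that is a function of the opponent's policy Q, but it never gets to choose Q itself.

```python
# Policies take the conditional cooperator's action and return whether
# the opponent cooperates. The conditional cooperator controls only its
# own rule; the opponent's policy q is given, not chosen.
def conditional_cooperator(opponent_policy):
    """Cooperate iff the opponent would cooperate with a cooperator."""
    return opponent_policy("cooperate")

def always_defect(_my_action):
    return False

def always_cooperate(_my_action):
    return True

def mirror(my_action):
    # Cooperates exactly when I cooperate.
    return my_action == "cooperate"

for q in (always_defect, always_cooperate, mirror):
    print(q.__name__, conditional_cooperator(q))
```

The conditional cooperator achieves mutual cooperation against `mirror` without ever setting `mirror`'s output; against `always_defect` it simply defects, which is the sense in which Q remains outside its control.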
It is logically impossible for me to implement a course of action such that
P <=> (Q <=> P)
and
~Q
could both be accurate descriptions of what occurred. Therefore, if I do not know that Q will be true, then I cannot promise that P <=> (Q <=> P) will be true. You could force me to have failed to keep my promise simply by not cooperating with me.
This is just an issue of distinguishing between causal and logical equivalence.
If a paperclip truck overturned, there will be paperclips scattered on the ground.
If a Clippy just used up metal haphazardly, there will be paperclips scattered on the ground.
Paperclips being scattered on the ground suggest a paperclip truck may have overturned.
Paperclips being scattered on the ground suggest a Clippy may have just used metal haphazardly.
A Clippy just used up metal haphazardly.
Therefore, a paperclip truck probably overturned, right?
Good to know Clippy hasn't read Judea Pearl yet.
Yes, pretty much kills the "Clippy is Eliezer" theory.
Not necessarily, since the "Clippy is Eliezer" theory implied not "Clippy's views and knowledge correspond to Eliezer's" but "Clippy represents Eliezer testing us on a large scale".
(I don't actually think there's enough evidence for this hypothesis, but I also don't think an apparent lack of knowledge of Pearl is strong evidence against it.)
I don't think that Eliezer would test us with a character that was quite so sloppy with its formal logical and causal reasoning. For one thing, I think that he would worry about others' adopting the sloppy use of these tools from his example.
Also, one of Eliezer's weaker points as a fiction writer is his inability to simulate poor reasoners in a realistic way. His fictional poor-reasoners tend to lay out their poor arguments with exceptional clarity, almost to the point where you can spot the exact line where they add 2 to 2 and get 5. They don't have muddled worldviews, where it's a challenge even to grasp what they are thinking. (Such as, just what is Clippy thinking when it says that P <=> (Q <=> P) is a causal network?) Instead, they make discrete well-understood mistakes, fallacies that Eliezer has named and described in the sequences. Although these mistakes can accumulate to produce a bizarre worldview, each mistake can be knocked down, one after the other, in a linear fashion. You don't have the problem of getting the poor-reasoners just to state their position clearly.
Good observation. It would barely be less subtle if Dumbledore had just said "I'm privileging an arbitrary hypothesis!" in the scene regarding Harry's parents' large rock. And when Draco said something to the effect of "I'd rig the experiments to make them come out right" after Harry asked what he'd do if an experiment showed muggle-borns were not worse at magic than pure-blood wizards, etc.
Then again, these particular instances may be explained as 1) Dumbledore has some secret brilliant plan in which the rock actually is important, and his overtly-fallacious explanation was just part of his apparent pattern of explicitly trying to model certain tropes; and 2) Draco has been trained in sophistry and fed very strong unsupported beliefs his whole life, to the point where he may not even realize that there is any purpose of experiments beyond convincing people of what one already believes. Still, I see your point.
Edit: These don't count as spoilers, do they? They don't mean much out of context (and they didn't really seem like significant plot points in context anyway).
If one wants other examples, there's a pretty similar problem in Eliezer's The Sword of Good.
ROT 13ed for spoilers: Va snpg, gur ceboyrzf jrer fb oyngnag gung gur svefg gvzr V ernq vg V fhfcrpgrq gung vg jnf tbvat gb ghea bhg gung gur qnex fvqr jnf npghnyyl tbbq va fbzr jnl. Gur fgrc gung ernyyl znqr vg frrz yvxryl jnf jura gurl ner qvfphffvat gur yvsr rkgrafvba hfvat gur jbezf nf rivy. Ryvrmre znqr vg ernyyl pyrne gung gur cevznel ceboyrz gurl unq jnf guvf jnf tebff.
As an aside, to see poor reasoning done in a very compelling way, read Umberto Eco. In particular, The Island of the Day Before and Baudolino contain extended examples of people trying to reason absent any kind of scientific framework.
I agree that I'm not "Eliezer", but I don't see what was unclear about saying that "Setting someone else's actions" is not the same as "Predicating your actions on [reliable expectation of] someone else's actions' predication on [reliable expectation of] your actions".
I agree that it is not literally correct to say that P <=> (Q <=> P) is a causal network, and that was an error of imprecision on my part. My point (in the remark you refer to) was that the decision theory I stated in the article, which you have lossily represented as P <=> (Q <=> P), obeys the rules of causal equivalence, not logical equivalence. (Applying the rules of the latter to the former results in such errors as believing that a Clippy haphazardly making paperclips implies that a paperclip truck might have overturned, or that setting others' actions is the same as setting your actions to depend on others' actions.)
A more rigorous specification of the decision theory corresponding to "I would cooperate with you if and only if (you would cooperate with me if and only if I would cooperate with you)." would involve more than just P <=> (Q <=> P).
I haven't built up the full formalism of humans credibly signaling their decision theories in this discussion, involving the roles of expectations, because that wasn't the point of the article; the point was just to show that there are cooperation-favoring signals you can give that would favor a global move toward cooperation if you could make the signal significantly more reliable. If that point depended more heavily on stating the formalism, I would have gone into more detail on it in the discussion, if not the article.
This is clearer, and I now think that I understand what you meant. You're saying that humans should signal something like: "I will cooperate with you if and only if I expect that (you will cooperate with me if and only if you expect that I will cooperate with you)."
Here, the "if and only if"s can be treated as material biconditionals, but the "expect that" operators prevent the logical reduction to "you will cooperate with me" from going through.
Meaning my reasoning skills would be advanced by reading something? So I made an error? Yes, I did. That's the point.
The comment you are replying to is a reductio ad absurdum. I was not endorsing the claim that it follows that a paperclip truck probably overturned. I was showing that logical equivalence is not the same as causal ("counterfactual") equivalence.
FWIW, I understood that you were presenting an argument to criticize its conclusion. I still think that you haven't read Pearl (at least not carefully) because, among other things, your putative causal diagram has arrows pointing to exogenous variables.
I puted no such diagram; rather, you puted a logical statement that you claimed represented the decision theory I was referring to. See also my reply here.
I thought you had because you said
I took this to mean that you were treating P <=> (Q <=> P) and Q as causal networks, but distinct ones.
You also said
I took this to mean that P was an exogenous variable in a causal network.
I apologize for the misinterpretation.
More generally, are you interested in increasing your intelligence, or do you think that would be a distraction from directly increasing the number of paperclips?
I don't follow your point. Your inference follows neither (1) logically, (2) probabilistically, nor (3) according to any plausible method of causal inference, such as Pearl's. So I don't understand how it is supposed to illuminate a distinction between causal and logical equivalence.
Nope, it follows logically and probabilistically, but not causally -- hence the difference.
Let T be the truck overturning, C be the Clippy making paperclips haphazardly, P being paperclips scattered on ground.
Given: T -> P; C -> P; P -> probably(C); P -> probably(T); C
Therefore, P. Therefore, probably T.
But it's wrong, because what's actually going on is a causal network of the form:
T -> P <- C
P allows probabilistic inference to T and C, but their states become coupled.
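This "explaining away" behavior at the collider can be demonstrated with a small brute-force computation. A minimal sketch (the 0.01 priors and the deterministic-OR mechanism are my own illustrative assumptions):

```python
from itertools import product

p_T, p_C = 0.01, 0.01            # assumed priors for each cause

def p_P_given(t, c):
    """P (paperclips on the ground) occurs iff either cause occurred."""
    return 1.0 if (t or c) else 0.0

def joint(t, c, p):
    pt = p_T if t else 1 - p_T
    pc = p_C if c else 1 - p_C
    pp = p_P_given(t, c) if p else 1 - p_P_given(t, c)
    return pt * pc * pp

def prob(pred):
    return sum(joint(t, c, p)
               for t, c, p in product([True, False], repeat=3)
               if pred(t, c, p))

# Observing P raises the probability of T (probabilistic inference works)...
p_T_given_P = prob(lambda t, c, p: t and p) / prob(lambda t, c, p: p)
# ...but additionally learning C explains T away, back to its prior:
# the causes are coupled only by observing their common effect.
p_T_given_P_and_C = (prob(lambda t, c, p: t and p and c)
                     / prob(lambda t, c, p: p and c))
print(p_T_given_P, p_T_given_P_and_C, p_T)
```

Observing P alone pushes T from its 0.01 prior to roughly even odds, while also observing C drops T right back to 0.01: knowing a Clippy made the paperclips tells you nothing extra about trucks.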
In a similar way, P <=> (Q <=> P) is a lossy description of a decision theory that describes one party's decision's causal dependence on another's. If you treat P <=> (Q <=> P) as an acausal statement, you can show its equivalence to Q, but it is not the same causal network.
Intuitively, acting based on someone's disposition toward my disposition is different from deciding someone's actions. If the parties give each other strong evidence of their dispositions, that has predictable results in certain situations, but it is still different from determining another's output.
Well, not to nitpick, but you originally wrote something more like P -> maybe(C), P -> maybe(T). But your conclusion had a "probably" in it, which is why I said that it didn't follow.
Now, with your amended axioms, your conclusion does follow logically if you treat the arrow "->" as material implication. But it happens that your axioms are not in fact true of the circumstances that you're imagining. You aren't imagining that, in all cases, whenever there are paperclips on the ground, a paperclip truck probably overturned. However, if your axioms did apply, then it would be a valid, true, accurate, realistic inference to conclude that, if a Clippy just used up metal haphazardly, then a paperclip truck probably overturned.
But, in reality, and in the situation that you're imagining, those axioms just don't hold, at least not if "->" means material implication. However, they are a realistic setup if you treat "->" as an arrow in a causal diagram.
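One can verify the material-implication reading by brute force: every truth assignment satisfying the axioms also satisfies "probably(T)", so the inference is valid under that reading even though the axioms are false of the actual collider. A quick sketch (variable names are mine):

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# Axioms: T -> P; C -> P; P -> probably(T); P -> probably(C); C.
# Check that every model of the axioms satisfies probably(T).
entailed = all(
    prob_t
    for T, C, P, prob_t, prob_c in product([True, False], repeat=5)
    if implies(T, P) and implies(C, P)
       and implies(P, prob_t) and implies(P, prob_c) and C
)
print(entailed)
```

The entailment holds because C forces P, and P forces probably(T); the problem lies in the axioms, not the logic, since "->" in the imagined scenario is a causal arrow rather than material implication.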
But this raises other questions. In a statement such as P <=> (Q <=> P), how am I to treat the "<=>"s as the arrows of a causal diagram? Wouldn't that amount to having two-node causal loops? How do those work? Plus, P is exogenous, right? I'm using the decision theory to decide whether to make P true. In Pearl's formalism, causal arrows don't point to exogenous variables. Yet you have arrows pointing to P. How does that work?