RichardKennaway comments on You have a set amount of "weirdness points". Spend them wisely. - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This is called "concern trolling".
It isn't "borderline Dark Arts", it's straight-out lying.
This imagines the plan working, and uses that as argument for the plan working.
I was not aware that it had a name; thank you for telling me.
Agreed. The question, however, is whether or not this is sometimes justified.
Well, no. It assumes that the plan doesn't fall prey to an obvious failure mode, and suggests that if it does not, it has a high likelihood of success. (The idea being that if failure mode X is avoided, then the plan should work, so we should be careful to avoid failure mode X when/if enacting the plan.)
The failure mode (people detecting the lie) is precisely what it would mean for this plan to fail. It's like the empty sort of sports commentary that says "if our opponents don't get any more goals than us, we can't lose", or the marketing plan that amounts to "if we get just 0.001% of this huge market, we'll be rich."
See also. Lying is hard, and likely beyond the capability of anyone who has just discovered the idea "I know, why not just lie!"
That the plan would fail if the lie is detected is not in dispute, I think. However, it is, in my opinion, a relatively trivial failure mode, where "trivial" is meant in the sense that it is obvious, not that it is necessarily easy to avoid. For instance, equations of the form a^n + b^n = c^n have trivial solutions of the form (a,b,c) = (0,0,0), but those are not interesting.

My original statement was meant to be applied more as a disclaimer than anything else, i.e. "Well, obviously this is an easy way for the plan to fail, but getting past that..." The reason for this was that there might be more intricate or subtle failure modes that I've not yet thought of, and my statement was intended more as an invitation to think of some of these less trivial failure modes than as an argument for the plan's success.

This, incidentally, is why I think your analogies don't apply; the failure modes that you mention in those cases are so broad as to be blanket statements, which crowds out the existence of more interesting failure modes. A better statement in your sports analogy, for example, might be, "Well, if our star player isn't sick, we stand a decent chance of winning," with the unstated implication being that of course there might be other complications independent of the star player being sick. (Unless, of course, you think the possibility of the lie being detected is the only failure mode, in which case I'd say you're being unrealistically optimistic.)
Also, it tends to be my experience that lies of omission are much easier to cover up than explicit lies, and the sort suggested in the original scenario seem to be closer to the former than to the latter. Any comments here?
(I also think that the main problem with lying from a moral perspective is not just that it causes epistemic inaccuracy on the part of the person being lied to, but that it causes inaccuracies in a way that interferes with them instrumentally. Lying omissively about one's mental state, which is unlikely to be instrumentally important anyway, in an attempt to improve the other person's epistemic accuracy with regard to the world around them, a far more instrumentally useful task, seems like it might actually be morally justifiable.)
Lying also does heavy damage to one's credibility. The binary classification of other people into "honest folk" and "liars" is quite widespread in the real world. Once you get classified as a "liar", it's pretty hard to get out of there.
Well, you never actually say anything untrue; you're just acting uncertain in order to have a better chance of getting through to the other person. It seems intuitively plausible that the reputational effects from that might not be as bad as the reputational effects that would come from, say, straight-out lying; I accept that this may be untrue, but if it is, I'd want to know why. Moreover, all of this is contingent upon you being found out. In a scenario like this, is that really that likely? How is the other person going to confirm your mental state?
YMMV, of course, but I think what matters is the intent to deceive. Once it manifests itself, the specific forms the deception takes do not matter much (though their "level" or magnitude does).
This is not a court of law, no proof required -- "it looks like" is often sufficient, if only for direct questions which will put you on the spot.
Well, yes, but are they really going to jump right to "it looks like" without any prior evidence? That seems like a major case of privileging the hypothesis. I mean, if you weren't already primed by this conversation, would you automatically think "They might be lying about being unconvinced" if someone starts saying something skeptical about, say, cryonics? The only way I could see that happening is if the other person lets something slip, and when the topic in question is your own mental state, it doesn't sound too hard to keep the fact that you already believe something concealed. It's just like passing the Ideological Turing Test, in a way.
Humans, in particular neurotypical humans, are pretty good at picking up clues (e.g. nonverbal) that something in a social situation is not quite on the up-and-up. That doesn't necessarily rise to the conscious level of a verbalized thought "They might be lying...", but manifests itself as a discomfort and unease.
It's certainly possible and is easy for a certain type of people. I expect it to be not so easy for a different type of people, like ones who tend to hang out at LW... You need not just conceal your mental state, you need to actively pretend to have a different mental state.
Fair enough. How about online discourse, then? I doubt you'd be able to pick up much nonverbal content there.
Yes. It is.
That's not very helpful, though. Could you go into specifics?
In general, any argument for the success of a plan that sounds like "how likely is it that it could go wrong?" is a planning fallacy waiting to bite you.
Specifically, people can be quite good at detecting lies. On one theory, that's what we've evolved these huge brains for: an arms race of lying vs. detecting lies. If you lie as well as you possibly can, you're only keeping up with everyone else detecting lies as well as they can. On internet forums, I see concern trolls and fake friends being unmasked pretty quickly. Face to face, when person A tells me something about person B not present, I have sometimes had occasion to think, "ok, that's your story, but just how much do I actually believe it?", or "that was the most inept attempt to plant a rumour I've ever heard; I shall be sure to do exactly what you ask and not breathe a word of this to anyone, especially not to the people you're probably hoping I'll pass this on to." If it's a matter that does not much concern me, I won't even let person A know they've been rumbled.
In the present case, the result of being found out is not only that your relationship ends with the person whose religion you were trying to undermine, but they will think that an atheist tried to subvert their religion with lies, and they will be completely right. "As do all atheists", their co-religionists will be happy to tell them afterwards, in conversations you will not be present at.
In what manner do you think it is most likely for this to occur?
If possible, could you outline some contributing factors that led to you spotting the lie?
It sounds like you're implying that most lies are easily found out, and consequently, that most unchallenged statements are truths.
That's really, really stretching my capacity to believe. Either you're unique in this ability, or you're committing the typical mind fallacy, w.r.t. thinking all people are only as good at lying (at most) as you are at sniffing them out.
Emphasis added:
In a scenario like this, i.e. pretending to be undergoing a deep crisis of faith in order to undermine someone else's. My observation is that in practice, concern trolling is rapidly found out, and the bigger the audience, the shorter the time to being nailed.
On the whole, people are as good at lying as they are at detecting lies, because it's an arms race. Some will do better, some worse; anyone to whom the idea "why not just lie!" has only just occurred is unlikely to be in the former class.