AKA "The Art Of Controversy" AKA "The Art Of Always Being Right" AKA "Dialectic Eristic". Here's a pretty fun, illustrated version of the text, in actual Troll terms]. Here's an audiobook.

EDIT: In this article I adopt a bit of a Devil's Advocate attitude. I'm not entirely convinced of what I'm suggesting, but I'll try to give it my all to make it look at least worth considering. I might get carried away at some points and overtly relish the villainy like a mad Britannian prince, which is unsightly, and, more importantly, unwarranted, so please forgive that. I'll leave those elements in, so that this is a Self Demonstrating Article.

So, the rationale is as follows: sometimes you get in an argument with someone. You're not quite sure you're right. You're not quite sure they're right. Even if you play fair, there's no guarantee it's the truth that'll come out. A few hours later, you might think of an argument that would have saved your cause; you just failed to think of it during the discussion itself. And usually it's not just a matter of finding the truth.

First, it's a matter of "being right". If you want to clash intellects, there's no more violent, crude, intimate way than this. When you're proven wrong in a discussion, especially in public and in a way that makes you look like an idiot, your ego could get hit hard. Not to mention your status. Back when this book was written, people killed themselves, and each other, over this stuff.

Second, besides your own pride and life, there might be stuff bigger than yourself riding on this. You just can't afford to stick to the truth, or to give up just because the other side has better arguments. You gotta win, in the eyes of the public, no matter what.

This book does a fairly good job of singling out different tricks to bullshit your way into winning an argument. Or at least stall for time, take your opponent off-balance, and distract them while you think of something legitimate to say. Let's review a non-comprehensive list of the tricks he proposes (the cartoon site and the full text are much more adequate, having one or many examples per case and being very eloquently phrased by the writer himself).

Let's classify them by blocks:

  • Attacks on the opponent's morale:
    1. being an openly unfair and insolent prick just to piss them off. 
    2. claiming victory in an authoritative, assertive voice, despite the argument going against you
    3. interruptions and diversions, derailing
    4. if they're angry about some particular argument, rub it in their faces until they lose it: it's probably a weak point in their defense. Same thing if they are being evasive.
    5. invoking arguments that use obscure sources and are hard to check
    6. appeal to consequences: show them that defending their argument means going against their own interests in a way they didn't think of. They'll drop it like a hot potato.
    7. confuse the hell out of your opponent through nonsensical pompous speech that sounds authoritative
    8. personal attacks and insults
  • Strawmanning (making the opponent seem to say something they didn't actually say, making their position look worse than it actually is), then attacking the strawman. Also, making your own position look better than it is.
    1. By overgeneralization and slippery slopes
    2. Exploiting double meanings, homonyms, unclear definitions
    3. Using loaded words, buzzwords, and guilt by association.
    4. Using false dichotomies ("with us or against us") and other false syllogisms to extract outrageous things from your opponent's proposition that weren't even there in the first place
  • Checkmate: the cleanest sort of tactic, and the most humiliating; these rely on making the opponent sabotage themselves. The favored method of the Ace Attorney games, as well as the more heroic court dramas.
    1. Getting the opponent to admit to your premises (or even the premises of your premises) one after the other, without letting them know that they lead to your conclusion all along. Then draw it. It's safer to draw it yourself, but it's more fun to make *them* draw it and then watch their expressions as the absolute horror of what they've just done dawns on them. Mwa ha ha. One way of doing this is by asking questions, Socrates-style, possibly out of order so they stay off-balance. An especially fun variant is getting them to say no to propositions you pretend to need their agreement on, then submitting the antithesis of what they just negated, which they'll have no choice but to admit.
    2. Using one counterexample to blow up an entire generalization, which crumbles like a house of cards. Especially effective if the counterexample is a Black Swan your opponent isn't familiar with.
    3. Using their very arguments against their thesis, mostly by pointing out implications they missed. Especially fun if the argument is false in the first place, but is part of the core dogma of whatever cult, party, or group the opponent pledges allegiance to.
    4. Angering the opponent into strawmanning their own position by exaggerating it, in exasperated reaction to your incessant bugging. ("YOU CAN'T HANDLE THE TRUTH")
  • Jumping to conclusions:
    1. Making the opponent admit to the premises, then drawing the conclusion yourself, sometimes by generalizing their admissions of specific cases into admissions of a general truth
    2. Begging the question
    3. Using a faulty proof to reject the whole proposition
  • Just plain make do with bullshit:
    1. Your opponent uses a sophistic nonsense argument. Instead of taking the time to expose it for what it is, you just counter it with bullshit of your own
    2. Appeal to authority rather than reason.
    3. "It applies in theory, but not in practice". If the theory does not apply, it means it is wrong, period.
  • Escapes and getaways, sometimes of the cowardly sort:
    1. Petitio Principii: refusing to admit an argument that would immediately lead to the opponent's desired conclusion by claiming it begs the question, exploiting the fact that your opponent and the audience didn't notice that little step and confused the premise with the conclusion. One of the subtler of the bunch.
    2. Defense by subtle distinction: if your opponent has blasted part of your proposition, claim to have been misunderstood, and squeeze and narrow the original proposition down to something your opponent didn't get to disprove. Save face, salvage what you can.

I'm just surprised Schopenhauer isn't an Internet idol by this point. I'm also pleasantly surprised at how our discussions avert most of this stuff, most of the time. Then again, our motivations are different from the usual, are they not? But what about our relation to the general public? Suppose one of us accepted an invitation to O'Reilly's show? What about convincing people to donate when there just isn't time to convince them of how important our cause is, or of how we are the right people to carry it out (not to mention we're not quite in consensus, or certain, on either point ourselves)?

Spartans were famed for their laconic way of communicating. In fact, the term derives from their homeland, Laconia. It was an actual course in their education: teachers would mock them and provoke them, and the kids would be punished harshly unless they could respond quickly, forcefully, and wittily. I think we should train ourselves on this. There is a time and a place for careful deconstruction of the opponent's arguments, and careful weighing of what is right and wrong. There's another for trusting in the heuristics you're following and acting on them now. Sometimes you just have to win, and worry about the truth later. So we should learn to identify when exactly the gloves should come off, and learn how to take them off quickly, so that we are never taken off-guard. If the very existence of humanity is riding on this project, I think a little verbal swashbuckling is the *least* we can allow ourselves in terms of consequentially moral leeway.

Not that just sticking to the truth is entirely ineffective, but opponents aren't always as malleable as the one in that example, we're not all as smart and witty as Eliezer, and sometimes the inferential distances are just too huge not to resort to Dan Browned, Conviction By Counterfactual Clue, or Lies To Children for the sake of expediency (there's an entire rule in Schopenhauer's book dedicated to the case of debating technical matters before an untrained public, and he provides a really good example, to boot).

 

This article suggests that learning about, and perhaps embracing, the dark arts may be a useful, if not outright necessary, means to achieve our goals. The author, on the other hand, isn't so sure. However, at the very least, I think we should know about this stuff, if only in a Defense Against The Dark Arts way, and make and study a list of similar, more contemporary works that would give us a better results-to-time-investment ratio in learning these tricks and others, and, more importantly, their counters.

 

BTW, Robert Greene's books, despite being rather unscientific, are very promising in that regard. Their advice is fairly useless if you want to apply it, but once you've gone through all the contents (and there are a lot of stories there) you'll be on guard against practically anything: it's really hard to beat the Epic Fails he lists, which are all the more epic because they usually involve smart, perceptive, strong, powerful people, who all still fall for the exact same tricks, over and over again. Read them if only because they are fascinating narrative anthologies and a very fun intellectual read, and we are very much in favor of fun and intellectualism, right?

Also, for those that have followed this article from the start, notice how the successive rewrites make it a self-demonstration of the "defense by subtle distinction" rule. Whether its use here was legitimate or not is left to the reader.

 

EDIT: As usual, TVTropes never fails to pleasantly surprise. Here is their wittily written, fairly comprehensive list of fallacies: they called it You Fail Logic Forever. Remember that fallacies are just part of the Dark Arts of Winning Debates, and a very dangerous bluff if your opponent calls you out on them, second only to counterfactual arguments.


I have a fear that becoming skilled at bullshitting others will increase my ability to bullshit myself. This is based on my informal observation that the people who bullshit me tend to be a bit confused even when manipulating me isn't their immediate goal.

However, I do find that being able to authoritatively call someone out for using a well-known rhetorical technique is very useful, and therefore I have found reading "Art of Controversy" to be very useful. The obviously useful skill is to be able to recognize each rhetorical technique and find a suitable retort in real time; the default retort is to name the rhetorical technique.

Why shouldn't you want to bullshit yourself? You'll get to believe you are the most successful man on earth, even after getting evicted. Your children will all be geniuses who will change the world, even after flunking out of high school. Your arguments will be untouchable, even after everyone else agrees you lost. Obviously, I believe said fear is highly legitimate, if the premise is true.

People who are talking bullshit do generally seem to be confused in my experience as well, but it seems highly likely that the BS is caused, at least in part, by that confusion. Some things done in an external setting do affect similar internal processes, but not all.

A (quick and dirty) inductive argument follows:

Premise 1: It is far easier to BS than to logically analyze and respond.
Premise 2: It is far faster to BS than to logically analyze and respond.
Premise 3: People prefer to do things that are easier, ceteris paribus.
Premise 4: People prefer to do things that are faster, ceteris paribus.
Premise 5: People very strongly do not want to be wrong.
Premise 6: Losing the argument is a significant proxy for being wrong.
Premise 7: Winning the argument is a significant proxy for being right.

(Intermediate) Conclusion 1: If BS wins you the argument, you will prefer BS to logical analysis and response.
(Intermediate) Conclusion 2: If BS loses you the argument, you will regard BS far more poorly as an option.
(Intermediate) Conclusion 3: Being good enough at BS to consistently win (necessarily avoid losing) arguments drastically increases the chance you will not resort to logical analysis and response, at all.
Final Conclusion: If you BS to others, you will BS to yourself.
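
The deductive skeleton, in symbols of my own choosing (W = BS wins you arguments, P = you prefer BS, A = you avoid logical analysis, S = you end up BSing yourself; the last implication is the step the argument above leaves implicit):

$$(W \to P) \land (P \to A) \land (A \to S) \;\vdash\; W \to S$$

This is just a chain of hypothetical syllogisms, so the argument stands or falls on the premises, not the logic.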


On the idea that it is useful to know when another is using one of the devices of blowing smoke, you are obviously correct, but it can be very tempting to misuse such knowledge simply to browbeat your opponent when they haven't actually done it. In a similar vein (though not directly on topic), sometimes a fallacy isn't really a fallacy in the precise context it appears in (i.e., sometimes the appeal to authority is legitimate in an argument, especially to settle a minor point).

I must say one thing on the idea behind all this. While the ends occasionally justify the means, the idea that rational ends are best served via irrational means is extraordinarily likely to be incorrect. More likely, an inability to properly argue your point should have you questioning your point instead.

an inability to properly argue your point should have you questioning your point instead.

When dealing with trolls, whether on the Internet or in Real Life, no matter how absolutely damn sure you are of your point, you have no time to unravel their bullshit for what it is, and if you try it you will only bore your audience and exhaust their patience. Debates aren't battles of truth: there's publishing papers and articles for that. Debates are battles of status. If you manage to come off as the one with higher status, people will listen more to what you said during the debate, and, more importantly, to what you said afterwards.

A very interesting way of taking advantage of this and neutralizing the effects of the dirty fighting would be to immediately afterwards publish a play-by-play analysis of the discussion, using the opportunity as an occasion to teach those who were impressed by you and went to see your work how debate really works. You could even go so far as actually listing the arguments you and your opponents use, and openly admit it if your opponent's arguments are good enough that they have caused you to actually undertake a Bayesian update. That way, you show that:

• You're smart, witty, and charismatic enough to win the debate.

• You're rational, honest, and moral enough to admit to the truth afterwards.

When dealing with trolls, whether on the Internet or in Real Life, no matter how absolutely damn sure you are of your point, you have no time to unravel their bullshit for what it is, and if you try it you will only bore your audience and exhaust their patience. Debates aren't battles of truth: there's publishing papers and articles for that. Debates are battles of status.

I agree. There's also the scenario where you're talking to a reasonable person for the purpose of figuring out the truth better than either of you could do alone. That's useful, and it's important to be able to distinguish that from debating with trolls for the purpose of gaining status. Trolls can be recognized by how often they use rhetoric that obviously isn't truth-seeking, and Schopenhauer is very good for that.

Well, actually, on the Internet you never gain status by debating with trolls. Even if I win an argument, I lose status to the extent my behavior justifies the conclusion "Tim wastes time posting to (LessWrong|SlashDot|whatever) instead of doing anything useful."

My ability to identify and stonewall trolls varies. Sometimes I catch them saying something silly and refuse to continue unless they correct themselves, and that stops the time-waste pretty quickly. Sometimes I do three-strikes-and-you're-out, and the time-waste stops reasonably soon. Sometimes it takes me a long time to figure out if they're a troll, especially if they're hinting that they know something worthwhile. I wish I had a more stable rule of thumb for doing this right. Any suggestions?

That's okay for Internet trolls, but sometimes you'll have to confront people in Real Life. These people won't be aiming to make a point, they'll be aiming to discredit you, by whatever means necessary.

When I wrote this article, one of the scenarios I had in mind was: "What if I were forced to confront Bill O'Reilly (or some similarly hostile, dirty opponent) on the topic of Less Wrong, and how do I not only 'not lose points' but actually come out making us look even cooler than before?" Bonus points if he loses status, not among those who already despise him, but among his own fans. Ideally destroying his career, but that's a pretty big dream.

my informal observation that the people who bullshit me tend to be a bit confused even when manipulating me isn't their immediate goal.

True story. I know a girl who has completely lost the ability to distinguish between her lies and reality. For example, if for some reason she says she doesn't like an item of food that she is known to like, just to piss off her parents, she will henceforth always act as if she hates it. If you slip it into the food and she asks what's making the food so delicious, and you tell her what's in it, she will immediately stop liking it even though she was relishing it a minute ago.

That's just one of the examples I can summon. She believes in her bullshit very strongly on a conscious level, but subconsciously, what is true remains so, and this leads to some very amusing behavior (amusing because she insists she is fine the way she is and is generally a very obnoxious person).

I feel the text you wrote would have worked better with, say, 4 paragraphs arguing in favor of learning to BS an argument, and 5 paragraphs reviewing or summarizing the work; instead of all 9 paragraphs defending the necessity of learning these skills.

You're absolutely right. Upvote. However, note that the first three paragraphs were actually a summary of the parts of the work that were "about" the work itself. I thought the links I provided at first (especially the one with the cartoons) would serve fine, but I notice this is just plain lazy. I'll remedy this shortly.

Edit: remedied, and it was a great opportunity to order my thoughts here, especially since many of the rules in the book were fairly redundant and could easily be lumped together.

Some of these which are labeled as bad aren't necessarily. For example, getting someone to accept a bunch of premises and then pointing out the conclusion is one way you can actually get people to change their opinions. I know I've had my opinion changed that way. Similarly, the use of a Black Swan is questionable: one person might see something as a Black Swan where another will see it as a basic counterexample that needs to be dealt with. The dividing line is not at all clear.

I did say the "checkmate" methods were the most legitimate. This is "Hoy To Be Always Right", not "How To Always Be A Demagogical Sophist": the parts that involve manipulating the opponent without actually resorting to lies and fallacies are the most satisfying, since those are the parts where it's more clearly "intellect vs intellect", and the parts where you actually know yourself to be right. A clean fight is a good fight. As for Black Swans, they are obviously subjective in that they depend on how well-informed the receiving party is. Usually we talk about Black Swans when it's the enitre scientific community that's taken aback by a paradigm breaker.

I personally don't intend to use these techniques because I already have a hard enough time getting called on my mistakes. On multiple occasions I have convinced people of unintuitive correct contrarian ideas while forgetting to mention a crucial and nonobvious premise. If anything, I need to lose arguments more often.

In an ideal world, I'd agree with you, but sometimes, especially in live conversation, let alone public debate, you just don't have the time to go through all the premises and the syllogisms. Plus, it's unfair to expect everyone to be able to properly counter your arguing and point out its flaws. That's something you should reserve for your Worthy Opponent of choice. Wasn't there a figure like that in (Hassidic?) Judaism? A "comrade of studies" or something like that?

I don't think my advice necessarily applies to everyone, I just thought that this is an important con to consider. For me, the cons currently outweigh the pros.

That's something you should reserve for your Worthy Opponent of choice. Wasn't there a figure like that in (Hassidic?) Judaism? A "comrade of studies" or something like that?

I want one! This could be a useful rationalist institution too.

That's something you should reserve for your Worthy Opponent of choice. Wasn't there a figure like that in (Hassidic?) Judaism? A "comrade of studies" or something like that?

In many forms of Judaism one often studies with a chavruta, with whom one will debate and engage the same texts. Such individuals are generally chosen to be of about the same background level and intelligence, often for precisely the sort of reason you touch upon (as well as it helping encourage them to each try their hardest). In modern times, as the levels of interest have become much more divorced from the level of actual knowledge (due to the baal tshuvah movement as well as some other modern social effects), this last aspect has broken down somewhat.

Care to elaborate? A cursory reading of the article doesn't reveal any mention of the topic's effect on the chavruta institution. I'm not sure if you mean that highly intellectual Jews are more enthusiastic about "returning to their roots" or the inverse, that Jews with very little academic background have invaded the synagogues in a religious version of Eternal September.

Something closer to the Eternal September, but a little more complicated than that. (Disclaimer: I don't have any sources for what I'm about to say. I'm more generalizing based on my own experience when I was Orthodox and the general impression of the community.)

One has among those who have become Orthodox a large number of very different people. Some of them are very intelligent but have little to no background knowledge. Others are not so bright and have no background. Others are not so bright but have a little background, etc. Moreover, the general lack of background means that most of them can't form chavrutas on their own, since they didn't grow up with the large amount of basic experience about how the system works, what sorts of approaches work and which don't. Much of that knowledge is procedural and not stated explicitly. So, as a result, a lot of these people are pairing with people who have much more background knowledge than they do but might not be as bright. There are other complicating factors; for example, some Orthodox Jews form chavrutas with less religious, less educated Jews, deliberately trying to rope them in further.

The whole situation is really quite complicated, and there's an unfortunate lack of serious anthropological or sociological work on what is happening at a broad level, so I don't have anything to rely on other than my own impressions.

Would you care to repost this on the chavruta thread? I think this system could pique our interest, and if we're going to emulate it we might as well learn more about what works and what doesn't, what fits us and what doesn't, and how we can improve on it in our own special way and make it ours.

logicen.fr no longer has the image... it says 404 D:

Indeed, the ends sometimes justify the means - but I'm not sure how bashing irrationalists with cheap (or expensive) rhetoric is helping our cause. Where lies the value in convincing people who afterwards won't be useful for "our cause" anyway (we're talking fAGI I assume)?

The "intellectual high-rollers" should be our major target group, and while they surely aren't completely immune to rhetoric themselves, remember that bad arguments will reflect badly on us in the eyes of those people we should care the most about. I'm not sure which role the public perception of this issue will play in the future, but I'd just stick to simple but true arguments. You don't have to present your whole convoluted line of reasoning to make a simple, quick point like "dying sucks". Personally I'd just avoid black holes of retardation like Fox News - we'll probably be much better off by being invisible to Fox News' target-group. I don't see any value in engaging such people directly, but only a sizable potential for downfall. Keep a low profile towards the enemies of reason and don't engage unless necessary - that would be my preferred tactic. Pandering to the religious and the irrational will accomplish nothing, or at the very least I'm convinced the costs will outweigh the marginal benefits.

Where lies the value in convincing people who afterwards won't be useful for "our cause" anyway (we're talking fAGI I assume)?

It's called "getting vote and then passing the bills". We get people to vote for us for what they thought they heard us say. We'll pass the bills that are about what we actually said. Doing away with metaphors: we get them to actually listen to what we have to say by giving a great first impression, then once we have them captivated we start showing them the fine print, most notably the bits about how much they suck and they need to change and if they listen to us everything in their lives will be better and they'll be spiritually and emotionally more fulfilling and awesome. Parties do this. Religions do this. Universities do this. Parents do this. Lovers do this. Why should we be any different? Once we get to the point where we can persuade them that talking about rationality in clown suits is a perfectly reasonable idea, the rest is pretty much done.

Parties do this. Religions do this. Universities do this. Parents do this. Lovers do this. Why should we be any different?

Um. Because we want to be different from political parties and religions?

How about the other two? And we don't just want to be different, we want to change them. Rationalists should win, even if our winning conditions might not be what our adversaries expect.

I suspect we have a very different conception of how the future is going to pan out in terms of what role the public perception and acceptance of AGI will play.

I understand your point: lure 'em in with happytalk, then bash 'em with a rationality course. ("Excuse me Miss, how would you like a free rationality test?") However, I simply don't think that we can positively prepare a sizable proportion of the public (let alone the GLOBAL public) for the arrival of AGI by simply teaching rationality skills. I believe our idea of the future will just continue to compete with countless other worldviews in the public memesphere, without ever becoming truly mainstream until it is "too late" and we face something akin to a hard takeoff.

I don't really think that we can (or need to) reach a consensus within the public for the successful takeoff of fAGI. Quite to the contrary, I actually worry that carrying our view to the mainstream will have adverse effects, especially once they realize that we aren't some kind of technophile crackpot religion, but that the futuristic picture we try to paint is actually possible and not at all unlikely to happen. I prefer to face apathy over antagonism when push comes to shove - and since AGI could spring into existence very rapidly and take everyone apart from "those in the know" by surprise, I would hate to lose that element of surprise over our potentially numerous "enemies".

Now of course I don't know which path will yield the best result: confronting the public hard and fast or keeping a low profile? I suspect this may become one of the few hot-button topics where our community will have widely diverging opinions, because we simply lack a way to accurately model how people will behave upon encountering the potential threat of AGI (especially so far in advance). Just remember that the world doesn't consist entirely of the US and that fAGI will impact everyone. I think it is likely that we may face serious violence once our vision of the future becomes more known and gains additional credibility through exponential improvements in advanced technologies. There are players on this planet who will not be happy to see an AGI come out of America, or for that matter Eliezer's or whoever's garage. (Which is why I'd strongly advocate a semi-covert international effort when it comes to the development of friendly AGI.)

It is incredibly hard to predict the future behavior of people, but on a gut-level I absolutely favor an international semi-stealthy approach. It seems to be by far the safest course to take. Once the concept of the singularity and fAGI gains traction in the spheres of science and maybe even politics (perhaps in a decade or two), I would hope that minds in AI and AGI from all over the world join an international initiative to develop this sucker together. (Think CERN). To be honest, I can't think of any other approach to develop the later stages of AGI that doesn't look doomed from the start (not doomed in terms of being technically unfeasible, but doomed in terms of significant others thinking: "we're not letting this suspicious organization/country take over the world with their dubious AI". Remember that AGI is potentially much more destructive than any nuclear warhead and powers not involved in its development may blow a gasket upon realizing the potential danger.)

So from my point of view the public perception and acceptance of AGI is a comparatively negligible factor in the overall bigger picture. "People" don't get a say in weapons development, and I predict they won't get a say when it comes to AGI. (And we should be glad they don't if you ask me.)

PS: When you're just talking about teaching rationality to people however, the way to go is to lobby it into the school curriculum as a full-blown subject. Every other plan to educate the public on "all things rational" completely pales in terms of effectiveness. Teaming up with the skeptics and the "new" atheists may be very helpful for this purpose, but of course we should never let ourselves be associated with such "ungodly" worldviews while advertising our rationalistic concepts.

Very interesting post overall. Could you refer me to an article about this particular problem? I feel humans should be allowed to choose their collective destiny together, but I don't know whether it's such a bad idea to hide it from them if openness would result in the scenario you describe. Are we on the way to becoming the new Manhattan Project?

And yes, getting it into the curriculum is great, but first we need to train teachers, and the teachers' teachers, etc., and develop a pedagogy that works with kids, who are infamous for not being able to make the distinctions we make or assimilate the concepts we assimilate, at certain ages, so it'd have to be really fine-tuned to be optimal.

I've rewritten my comment and posted it as a standalone article. I've somewhat improved it, so you may want to read the other one.

I am not aware of any articles concerning the problem of how we should approach self-improving AGI, I was just hacking my personal thoughts on this matter into the keyboard. If you are talking about the potentially disastrous effects of public rage over the matter of AGI, then Hugo de Garis comes to mind - but personally I find his downright apocalyptic scenarios of societal upheaval and wars over AI a bit ridiculous and hyper-pessimistic, given that as far as I know he lacks any really substantial arguments to support such a disastrous scenario.

EDIT: I have revised my opinion of Hugo's views after watching all parts of his youtube interview on the following YTchannel: http://www.youtube.com/user/TheRationalFuture. He does make a lot of valid points and I would advise everyone to take a look in order to broaden one's perspective.

The author seems to be saying that shady means can be used to achieve noble ends. I agree. However, consider these three possibilities: (1) Being too honest is (on average) worse than being too dishonest. (2) Being too dishonest is worse than being too honest. (3) Each error is equally harmful. (We could say that the first two involve asymmetric loss functions.)
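
To make "asymmetric loss" concrete (my notation, not the author's): write $e > 0$ for some degree of excess honesty, $-e$ for the same degree of excess dishonesty, and $L$ for the resulting harm. The three possibilities are then

$$(1)\; L(e) > L(-e), \qquad (2)\; L(e) < L(-e), \qquad (3)\; L(e) = L(-e).$$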

I think the author wants us to consider the first possibility. If honesty hurts more than dishonesty, then let's aim for more dishonesty.

But even if this possibility is true in the near-term, there is a clear benefit to committing to honesty. People trust those who have a record of honesty. Thus there is long-term eristic value to be gained by sacrificing short-term eristic success.

People trust those who have a record of honesty.

Counterexample: this. Voting. And any and every Politics Is The Mindkiller phenomenon. Lord Byron and all the other vamps of both sexes who are reputedly "mad, bad, and dangerous to know" (they ain't just fictional; any PUA, nay, any historian will tell you that). Among many other things.

Actual honesty is worth little to the general public, especially when their hearts are captured and their minds seduced by the possible materialization of fantasies that might not even be their own, but planted by the seducer. Our program involves changing this. Using the Dark Arts to achieve a world of mental hygiene and clear thinking is the problem, because the means, in the long term, detract from the goal.

On the other hand, the closer we are to the goal, the less necessary the means will become, and the more they can potentially hurt the goal if people, having gained enlightenment thanks to our effort, look back on what we did and think "they manipulated us" rather than "they told us what we needed to hear", and develop a Romanticism-style backlash against our Enlightenment work.

Your counterexamples are valid; they show that dishonesty doesn't always breed distrust among everyone. Specifically, they fail to breed distrust among those who (on some level) want beliefs that oppose reality. I suppose we all fit into this group at times.

The possibility of a Romanticism-like backlash against rationalism is one disadvantage to using deceptive rhetoric, but that assumes the happy situation that rationalism will one day become widespread. I fear that deceptive rhetoric would help prevent that happy situation from obtaining. The use and endorsement of Dark Arts could pose a PR problem for LW even before the "enlightenment" got around.

LW might not be a cult, but deceptive rhetoric is a stereotype of cults. Why make it easier for others to peg LW as one?

Okay, I'm going to be blunt here. While I'm in a Devil's Advocate position, I'll have you notice that Devil's Advocacy is a social process, and an important, non-trivial one at that, as Yudkowsky pointed out at the end of his article on it.

Now, please, from a consequentialist POV, prove to me why an intelligent, precise, and carefully thought-out use of the Dark Arts wouldn't work for us, and how it would be counterproductive to our cause.

Rationality is something you do, not something you are.

If you use DA instead of rationality, you are not setting any kind of teaching example to the not particularly rational section of mankind; and the already-fairly-rational segment are going to detect what you are doing and be put off by it.

You mean the same way such works as Michael Moore's films or The Story of Stuff are off-putting to those who can detect all the bad faith and the manipulative style they employ (even though I don't know of any instances of actual lying)? Those are still a minority in an advanced stage of knowledge and savviness, and even then they can appreciate the message and agree with it; they just kind of look down on the creators of such works as a little crass and unsubtle. It's because of attitudes like this that progressive movements have such trouble advancing in places like the USA: there's a fairly large demographic of people who are almost instinctively repulsed by intellectual hipsters.

Additionally, I am fairly sure there are public debaters who are appreciated for their talent on the debate floor without having to resort to any lies, half-truths, or distortions of the truth, but who nevertheless ruthlessly beat their opponents to a pulp, usually because they do have truth on their side, but also often because they're just that good at debate. Others have this reputation but fail to live up to it, though you'll only notice if you are an extremely attentive observer: the subtle tricks they use completely fly over the heads of 99% of the audience.

Furthermore, there's no reason the halfway-rational/rational-in-progress should be put off by dark arts if they are used right, especially if the contrast between the rationalist debater and the other is very stark in terms of truthfulness and in terms of shade of grey. Confusing "dark" with "pitch black" is as bad as saying everything is the same tone of grey. This contrast, if stark enough, still allows you to set a much better example for the irrational than what they are used to. At the very least, it doesn't take a freaking saint to expose a crook or an idiot for what they are.

This is a hard one to judge.

Halfway through, I was about to reach for the upvote button for the Defense against the Dark Arts post with an excellent summary of dishonest rhetorical tricks everyone needs to know how to guard against.

Then I was about to reach for the downvote button once it started advocating embracing the Dark Arts and employing any dishonest rhetorical tricks that look in the short term like they might further the author's favorite cause.

But I find myself bookmarking the illustrated version of the list for use next time I want to refer someone to a clear introduction to this stuff. And it doesn't feel quite right to downvote a post that provided me with something worth bookmarking. So I'll abstain from voting on this one.

Every group needs a token "evil" teammate, if only so that Dark methods are at least given consideration rather than rejected out of hand. I think it's a role we should all endorse from time to time, our little inner Slytherin.