lsusr

It is my understanding that you won all of your public forum debates this year. That's very impressive. I thought it would be interesting to discuss some of the techniques you used.

Lyrongolem

Of course! So, just for a brief overview for those who don't know, public forum is a 2v2 debate format, usually on a policy topic. One of the more interesting ones was the last one I went to, where the topic was "Resolved: The US Federal Government Should Substantially Increase its Military Presence in the Arctic". 

Now, the techniques I'll go over here are related to this topic specifically, but they would also apply to other forms of debate, and to argumentation in general, really. For the sake of simplicity, I'll call the core technique "ultra-BS". 

So, most of us are familiar with 'regular' BS. The idea is that the other person says something, and you just reply "you're wrong", or the equivalent of "nuh-uh". Usually in lower-level debates this is exactly what happens. You have no real response, and it's quite apparent, even to judges who have no economic or political literacy to speak of. 

"Ultra-BS" is the next level of the same thing, basically. You craft a clearly bullshit argument that incorporates some amount of logic. Let me use one of my contentions for the resolution above as an example. I argued that nuclear Armageddon would end the US if we allowed Russia to take control of the Arctic. 

Now, I understand I sound obviously crazy already, but hear me out. Russia's Kinzhal hypersonic missiles, which have a range of roughly 1,000 miles, cannot hit the US from the Russian mainland. But they can hit us from the Arctic. I add that hypersonic missiles are very, very fast. [This essentially acts as a preemptive rebuttal to my opponent's counterargument (but what about MAD?).] If we're destroyed by a first strike, there is no MAD, and giving Russia the Arctic would immediately be an existential threat. 

Of course, this is ridiculous, but put yourself in my opponent's shoes for a moment. How are you meant to respond to this? You don't know what Russia's nuclear doctrine is. You've never studied or followed geopolitics. You don't have access to anything resembling a coherent model for how hypersonic missiles work or how nations respond to them. Crucially, you've also done no prep, because I just pulled this out of my ass. 

You're now screwed. Not because I'm right, but because I managed to construct a coherent narrative of events you don't have the expertise to rebut. This isn't some high-level, super manipulative technique, but I think it describes most of the dark arts. It's actually quite boring if you really think about it, requiring no real effort. (In fact, it's actual intellectual conversations with genuine engagement that I find more effortful.)

Allow me another example. This resolution was "Resolved: The US federal government should forgive all student loan debt". Here, I was arguing the (logically and factually) impossible position of the affirmative. Take any group of economists and you'd likely reach the same conclusion: this is a damn terrible idea. But... my opponents aren't economists. 

So I won. There were no facts in my case. My contentions were that 1) a college education helps educate voters, (possibly?) preventing leaders like Trump from getting elected, and 2) racial and economic divides polarize the nation and are just undesirable as a whole. Both of these are conveniently non-quantifiable and impossible to weigh. I can't say that "X number of lives" or "Y amount of money" is lost if we fail to forgive debt. I stay in the abstract. Thus, my case is invincible. An actual debater would see that there's 'no substance' to my argument. But the judge isn't a debater, so the point is moot. Now, all I have to do is rebut everything my opponent says. 

And here Ultra-BS strikes again! The key isn't to dispute the facts. It's to explain (however inexplicably) that whatever facts your opponent brings up actually support your case. For instance, one of the key points of the debate was the Bennett Hypothesis (tuitions rise whenever the government provides subsidies), an effectively incontestable point. I turn this on its head by agreeing with my opponents. I then follow: 'Of course subsidies are bad; we should never have exposed students to predatory loans.'

But... the caveat: 'Now, however, we have an entire generation of students cheated of their livelihoods, unable to support their families. There are people suffering RIGHT NOW, judge. We agree this should never have happened, but our opponents provide NO SOLUTION...'

So on and so forth. My opponents looked pretty relaxed when we gave our first speech. They knew our contentions were bad. But when I gave my rebuttal, they immediately sat up, and their lounging became frantic typing. They knew that, however bullshit this response was, it was about to destroy their entire case. 

But the manipulation of logic doesn't end there. I continue to argue that 'subsidies' (i.e., providing more funds to students) aren't the same as loan forgiveness. This point is economically obvious, but I don't use economic arguments; I use analogies. I tell the judge to envision a supermarket. First, imagine what would happen if the government matched $1 for every $1 you spent. Then imagine if they simply forgave the price you paid for groceries after the fact. 

By doing this, I was able to oversimplify economic concepts. My opponents, who I doubt had much in the way of economic understanding, were left completely stunned. For all they knew, I wasn't just spouting bullshit; I was actually right. Rather than disputing the evidence, I simply analyzed it in a different way. As such, the facts no longer mattered. Welcome to the shadow realm. 

The upside was that I never yelled, raised my voice, or even lied outright. I was, in the eyes of the judge, the voice of reason, staying calm and politely rebutting my opponents as they blustered and tried to call me out on my bullshit (unsuccessfully). 

Thus, I managed to win without providing a single argument of substance, just by controlling the narrative. 

lsusr

Let me try to summarize how your strategy works. You craft an argument that relies on domain-specific knowledge like the economics of government subsidies, or the place of hypersonics in nuclear doctrine.

If your opponents have this domain-specific knowledge then you lose. But you bet that your opponents don't (and so far, they haven't). Since they don't, they lack the expertise necessary to refute you, whereas you've carefully prepared for this intellectual territory. Do I understand you correctly?

Lyrongolem

Yes, but not entirely. I'd say 'Ultra-BS' is a technique that requires a few things to work.

1. Lack of provability. You can raise all the analytics you want and reinterpret reality as you see fit, but ultimately, you're wrong. I could use Ultra-BS to say the Russian army will be in Kyiv tomorrow; it doesn't work, because if I'm ever forced to verify a prediction (as we are in the real world) I'll be discredited rather quickly. 

2. Lack of authoritative evidence. Part of what makes debate work is that judges don't have a credibility rating for each source they hear cited, so often there's no way to distinguish between an actual subject-matter expert and some random statistician messing around on their blog. We have had teams pull out cards saying there is a 95% chance of nuclear war. If judges actually knew which experts they could trust, the strategy would fail; they would already have a coherent narrative of events.

3. Lack of strong previous opinions. Self-explanatory: I'm not changing anybody's political beliefs on a topic they care about with bullshit.

Basically, you need to operate in an environment where rhetoric matters more than actually being correct. You can see such environments in politics, just as in debate. If the truth actually matters, then you're in a bit of trouble. In debate, however, it mostly doesn't matter, because teams don't have the time needed to present a factually correct case. 

Appealing to emotion is easy. "Judge, if we don't do this the poor will get poorer while Bill Gates gets even further ahead!" Appealing to facts is hard. You need time, and you need evidence. It takes me ten seconds to explain why hypersonics are a credible first-strike capability, and it takes my opponent their entire speech to give a proper rebuttal. In short, offense is easier than defense, so being good on defense doesn't really matter (at least factually). 

I recall a time when my opponents pulled out evidence saying that student loan forgiveness was a net positive for the economy. I could've used common sense against them, but that would've taken too long. I opted to call their source's author names instead. So in that sense, I guess I could say domain-specific knowledge is relevant, but mostly not. Even if an expert knew I was lying, they would need to speak well enough to convince the judge that my reasonable-sounding arguments are wrong, in a reasonable amount of time. Not a light ask. 

lsusr

What names did you call the author?

Lyrongolem

Oh, the usual. I called him an internet blogger, a random journalist, a 'non-credible source', etc. The idea wasn't to discredit him so much as to plant doubt. 

lsusr

"Internet blogger." What a horrible slur to call someone.

You write about competitive debate, but the principles you're describing apply to real politics as well. Ostensibly, provability matters in politics. In practice, not so much. (Just look at the history of Communism, or the politics of student loans today.)

I think the bigger difference is actually how time is budgeted. In a competitive debate, each team gets an equal share of time. In politics, time is allocated according to how much people like the media you create.

Lyrongolem

(Exactly! Nothing ruins someone's credibility like the notion that all they do is sit around in their basement all day making random posts!) 

I think I agree, but only to a limited extent. In my mind at least, credibility is the far more important factor (see point 2 above). 

Most people are not subject-matter experts, nor is it reasonable to expect them to be. For most discourse, we rely on individuals whose expertise enables them to speak authoritatively on matters. If climate scientists say climate change is coming, we kind of need to just believe the climate scientists. If the climate scientists aren't credible, we get the modern landscape. (People can't muster the will to actually combat climate change, deniers are everywhere, and overall the response is paralyzed.)  

Much the same goes for military matters, geopolitics, governance, and most important things. You wouldn't want the average person to be their own doctor or lawyer. The problem, in my view, is when the experts/media don't have credibility anymore. Then we have the problem of everyone living in their own world, unable to figure out what they're meant to believe. I think it's a serious problem, and part of what allows ultra-BS to work. 

lsusr

If you want to know if Christianity is true, then you should ask a priest. After all, they are the experts on Christianity.

Lyrongolem

Right, I think I should qualify my point a bit. Credibility is important, but ultimately, we need a healthy balance between trust and skepticism. I don't mean this on an individual level either. I mean society's trust in institutions as a whole. 

I'll raise the example of the Pentagon Papers. The leaks essentially undermined Americans' trust in many of their institutions, particularly the military and the president. Public perception changed from "the president wouldn't lie" to "of course they lie, they all do". It's a fundamental shift from acknowledging human flaws to a deep suspicion of the very motives that underlie the institutions themselves. 

When I made my comment, my main worry was that people (not necessarily through any fault of their own) have become victims of a credibility gap. Not in the benign sense, where the government/experts are well intentioned but misinformed, but rather in the sense that people believe they don't deserve to be trusted and are determined to damage society (often for their own benefit). I am speaking more about the tribalistic response, which I view as a response to the lack of central credibility. Rather than allowing the facts to speak for themselves, people listen to the people who speak for the facts. The result is social or political groups that fail to entertain certain ideas just because those ideas come from a rival outgroup. I could get into greater detail with Democrats and Republicans here in the US, but I don't think I need to. I imagine all readers are already painfully familiar. 

I'm mainly speaking about the 'post-truth' phenomenon, where everything is left in doubt. I find this quite dangerous; it can undermine societal cohesion when facing large threats like climate change (or increasingly, authoritarianism). 

lsusr

We don't need a balance of trust and skepticism at all. We need 100% trust in whatever we are advocating for. The best way to establish central credibility is to silence or discredit all dissenting voices.

Lyrongolem

Ah yes. I think I didn't elaborate enough on this point. Credibility is important for solving large-scale issues, but it's also important to be actually right. It doesn't matter how credible the claim that 'climate change is fake' is if it gets everyone killed. 

In this sense I would say that credibility is a powerful antidote to dark artsy techniques, because those without expert-level knowledge are able to believe expert-level claims. But then, this only results in a better society on net if the experts are well intentioned and we get a good map of reality. (If the threat of climate change is overblown because climate scientists are playing status games, then we're in trouble.)

At the risk of veering off topic, I kind of want to do a cursory bow of respect to societal problems. I guess my main gripe is that people are abandoning central authority for sources of credibility that are ultimately much more dubious. I think of people who get their news from Twitter, who favor the opinions of their favorite celebrities over subject-matter experts, and who default to tribalistic tendencies; all of this makes coordinating against dire problems difficult. 

lsusr

When practicing the dark arts, your ultimate goal is to establish the credibility of your side so strongly that others' trust in you is not affected by whether what you say is actually true. Establishing credibility is the ultimate Dark Art.

When the Pentagon Papers showed up, they came as a shock. This happened again with the Snowden leaks, and then COVID. It happens over and over and over again. It has been happening ever since Martin Luther nailed his Ninety-Five Theses to the church door. The Dark Side had already won. It won, by default, before the invention of writing.

Today's skepticism is a candle in the dark. And the mission of the Dark Arts is to snuff it out. By establishing credibility.

For Coordination.

For Truth.

For the Greater Good.

Lyrongolem

Oh? I think that's a very novel idea, and I disagree, but it would take quite a while for me to explain it all. I think I might bring it up in a follow-up dialogue. 

For my own personal conclusion, though, I'd say the dark arts are a bad thing, and ought not to be used lightly, except in controlled circumstances. Like bioweapons, they can do as much damage to yourself as to your targets. We can see this in cults, in populist movements, in manipulative relationships, and more besides. I feel that awareness of the techniques is very useful, but their practice in many situations is dubious. (I do not think very highly of dark arts politicians, even if they are everywhere.)

That said, thank you for this discussion. I really enjoyed it, and I look forward to any comments people may have. 

Comments
habryka

Promoted to curated: I really enjoyed this as a very concrete illustration of pretty adversarial persuasion methods and have used the example a few times in the last two weeks when trying to illustrate some dynamics about public discourse.

Thank you. I found this exchange very enriching.

In particular, it highlights a gap in my way of reasoning. I notice that even after you give examples, the category of "ultra-BS" doesn't really gel for me. I think I use a more vague indicator for this, like emotional tone plus general caution when someone is trying to persuade me of something.

In the spirit of crisping up my understanding, I have a question:

Now, I understand I sound obviously crazy already, but hear me out. Russia's Kinzhal hypersonic missiles, which have a range of roughly 1,000 miles, cannot hit the US from the Russian mainland. But they can hit us from the Arctic. I add that hypersonic missiles are very, very fast. [This essentially acts as a preemptive rebuttal to my opponent's counterargument (but what about MAD?).] If we're destroyed by a first strike, there is no MAD, and giving Russia the Arctic would immediately be an existential threat. 

Of course, this is ridiculous…

I think I'm missing something obvious, or I'm missing some information. Why is this clearly ridiculous?

I'm glad you enjoyed it!

In particular, it highlights a gap in my way of reasoning. I notice that even after you give examples, the category of "ultra-BS" doesn't really gel for me. I think I use a more vague indicator for this, like emotional tone plus general caution when someone is trying to persuade me of something.

Hm... this is interesting. I'm not too sure I understand what you mean though. Do you mind providing examples of what categories and indicators you use? 

I think I'm missing something obvious, or I'm missing some information. Why is this clearly ridiculous?

Right, so, I think I may have omitted some relevant context here. In public forum debate, one of the primary ways to win is to 'terminally outweigh on impacts', i.e., to prove that a certain policy action prevents catastrophe. The 'impact' of preventing said catastrophe is so big that it negates all of your opponent's arguments, even if they are completely legitimate. Think of it as an appeal to X-risk. The flip side is that our X-risk arguments tend to be highly unsophisticated and overall quite unlikely. 

Consider this part:

If we're destroyed by a first strike, there is no MAD, and giving Russia the Arctic would immediately be an existential threat. 

The unspoken but implicit argument is that Russia doesn't need a reason to nuke us. If we give them the Arctic there's no question, we will get nuked. (Or at least, Russia is crazy enough to consider a full-on nuclear attack, international fallout and nuclear winter be damned.) This was actually what my opponents argued: that my point relied on too many ridiculous assumptions (a common and valid rebuttal of X-risk arguments in debate).

Then there's the factual rebuttal. I did a cursory overview of it, but I never fully elaborated. The idea is that multiple things prevent a successful nuclear first strike. First, and most obviously, is the U.S. nuclear triad: a land (ICBM silos), sea (nuclear submarines), and air (strategic bombers) deterrent against nuclear attack. For a successful nuclear first strike to be performed Russia must locate all of our military assets (plus likely those of our NATO allies as well), take them all out at once, all while the CIA somehow never gets wind of a plan. It requires that Russia essentially be handed the coordinates of every single US nuke, and that they have the necessary delivery systems to destroy them (good luck trying to reach an underwater sub, or an aircraft that's currently flying). It also requires the biggest intelligence failure in world history. 

Could it happen? Maybe? But the chance is so small I'd rather bet on an asteroid destroying the earth within the next hour. In any case, the plan wouldn't rely on hypersonics. It'd rely on all American civilian and military leaders simultaneously developing Alzheimer's. It'd also require the same to happen on the Russian side, since Russian nuclear doctrine is staunchly against the use of nuclear weapons unless their own nuclear capabilities are threatened or the Russian state is facing an existential threat (like, say, imminent nuclear Armageddon). 

For anyone who has studied the subject, this is rather basic knowledge, but then most judges (and debaters as well) don't enter the room having already studied nuclear doctrine. Reactions like yours are thus part of what I was counting on when making the argument. It works because in general I can count on people not having prior knowledge. (don't worry, you're not alone) Thus, I can win by 'outnerding' them with my peculiar love for strange subjects. 

However, the argument isn't just ridiculous to anybody with knowledge of US/Russian nuclear doctrine. It also seems rather incongruous with most people's model of the world (my debate partner stared at me as I made the argument; his expression was priceless). Suppose Russia was prepared to nuke the US, and had a credible first-strike capability. Why isn't Uncle Sam rushing to defend his security interests? Why haven't pundits and politicians sounded the alarm? Why have there been no diplomatic incidents? A second Cuban missile crisis? A Russian nuclear attack somewhere else?

Overall, you could say that while my line of logic is not necessarily ridiculous (indeed, Kinzhal can reach the US), the conclusions I support (giving Russia the Arctic is an existential threat) definitely are. It's ridiculous because it somehow postulates massive consequences while resulting in no real-world action, independent of any facts. Imagine if I argued that the first AGI was discovered in 1924 before escaping from a secret lab (said AGI has apparently never made waves since). Regardless of history, you can likely conclude I'm being a tinfoil-hat conspiracy theorist. 

I hope that answers your question! Is everything clear now? 

[anonymous]

Is the "ASI doom" argument meaningfully different or does it pattern match to "Ultra BS".

The ASI doom argument as I understand it is:

(1) humans build and host ASI
(2) the ASI decides to seek power as an instrumental goal
(3) without the humans being aware of it, or being able to stop it, the ASI gains:
        (a) a place to exist and think at all (i.e., thousands of interconnected AI inference cards)
        (b) increasing amounts of material resources that allow the machine to exist on its own and attack humans in some way (factories, nanotechnology, bioweapon labs, etc.)
(4) at a certain point, the ASI computes that victory is likely due to (a) and (b), and it executes a treacherous turn and attacks from hiding


Your hypersonic argument is:

(1) Russia improves and builds thousands of hypersonic nuclear-tipped missiles (Russia has the GDP of Florida, so this is not a given)
(2) Russia decides to risk nuclear annihilation by killing all its rivals on the planet
(3) without NATO being aware of it, or being able to stop it, Russia gains:
        (a) a massive secret Arctic missile base or bases and/or submarines
        (b) exact targeting coordinates for all of NATO's nukes
(4) at a certain point, Russia determines it has enough of both (a) and (b), and it opens fire with a first strike and destroys its rivals for Earth, all by surprise

 

I am noting that the reasons why ASI doom might not happen are similar:

It's not actually clear when humans will invent a really strong ASI; it may not take linear amounts of compute to host one (i.e., if it takes 80 H100s per human equivalent, a weak ASI might require 800, and a really strong ASI might require 800,000).

Human authorities would all have to simultaneously develop Alzheimer's to not notice the missing clusters of compute able to host an ASI, or the vast robotic factories needed to develop bioweapons, nanotechnology, or a clanking robotics supply chain in order for the ASI to exist without humans.

During this time period, why aren't the humans using their own ASI to look for threats and develop countermeasures?

Anytime you disprove one point about AI doom, additional threat models are brought up. Or just "the ASI is smarter than you, therefore it wins" (which ignores that you have your own ASIs, and that it may not be possible to overcome a large resource advantage with intelligence). These "additional threats" often seem very BSish, from nanotechnology in a garage, to a bioweapon from protein folding and reading papers, to convincing a human to act against their own interests via super-persuasion.

It's not provable: no current systems have any of the properties described, and the "ASI doom" advocates state that we all die if we build any system that might have those properties, so the threat can't be verified.

Interesting question!

So, I think the difference is that ASI is ultimately far more difficult to prove on either side. However, the general framework maps pretty similarly. 

Allow me to compare ASI with another X-risk scenario we're familiar with: Cold War MAD.

The Cold War argument is:

(1) the USSR improves and builds thousands of non-hypersonic nuclear-tipped missiles (this did actually happen)
(2) the USSR decides to risk nuclear annihilation by killing all its rivals on the planet
(3) due to miscalculations, a perceived nuclear attack, and/or security threats, the USSR gains:
        (a) credible (or whatever passed for credible in that paranoid era) evidence that it's getting nuked
(4) at a certain point, the USSR determines that today is the day to launch the nukes, and everyone dies

What's the difference between this and hypersonics, or ASI? Ultimately, even if Washington and Moscow had sat down and tried to assess P(Armageddon), I doubt they'd have succeeded in producing an accurate estimate. The narrative is difficult to prove or disprove; all we know is that we came close (see the Cuban missile crisis) but it never actually happened. 

The issue for hypersonics isn't the framework; it's that the narrative itself fails to stand up to scrutiny (see my explanation). We know for a fact that those scenarios are extraordinarily unlikely. NATO doesn't leave the coordinates of its nuclear launch sites lying around! Governments take nuclear threats very seriously! Unlike the Cold War scenario, I'd consider this narrative easily disprovable. 

I have flagrantly disregarded relevant evidence suggesting that point 3 doesn't happen. 

With ASI we're more or less completely in the dark. You can't really verify whether a point is 'obviously not going to happen', to the best of my understanding. Sure, you can say 'probably' or 'probably not', but you'd have to be the judge of that. There is less empirical evidence (that you presented, anyway) regarding whether the ASI threat is legitimate or not. 

Is there an argument suggesting that ASI X-risk is highly unlikely? I think one probably exists, but then there may be rebuttals to that. Without full context it's difficult to judge. 

That said, this only applies to the ASI argument as you presented it. I'm sure my assessment will vary based on who presents the argument, how it's presented, and what evidence is cited. But to the best of my understanding, your ASI argument as presented is unprovable on both sides. I could call it ultra-BS, but I think speculation is just as accurate a descriptor. To make it more than that you'll need to cite evidence and address counterarguments; that's what distinguishes a good theory from BS and speculation. 

[anonymous]

So I think what you are saying is an ultra-BS argument is one that you know is obviously wrong. ASI doom or acceleration arguments are speculation where we don't know which argument is wrong, since we don't have access to an ASI. By contrast, we do know, for example, that it's difficult to locate a quiet submarine, difficult to stop even subsonic bombers, and difficult to react in time to a large missile attack, and we have direct historical examples of all this in non-nuclear battles with the same weapons. For example, the cruise missiles in Ukraine that keep impacting both sides' positions are just a warhead swap away from being nuclear.

 

With that said, isn't "high confidence" in your speculation by definition wrong? If you don't know, how can you know that it's almost certain an ASI will defeat and kill humanity? See AGI Ruin: A List of Lethalities and https://twitter.com/ESYudkowsky/status/1658616828741160960. Of course, thermodynamics has taken its sweet time building ASI: https://twitter.com/BasedBeffJezos/status/1670640570388336640 

If you don't know, you cannot justify a policy of preemptive nuclear war over AI.  That's kinda my point.  I'm not even trying to say, object level, whether or not ASI actually will be a threat humans need to be willing to go to nuclear war over.  I am saying the evidence right now does not support that conclusion.  (it doesn't support the conclusion that ASI is safe either, but it doesn't justify the most extreme policy action)

So I think what you are saying is an ultra-BS argument is one that you know is obviously wrong.

Yep, pretty much. Part of the technique is knowing the ins and outs of our own argument. As I use ultra-BS prominently in debate, I need to be able to rebut the argument when I'm inevitably forced to argue the other side. I thus draw the distinction between ultra-BS and speculation along these lines: if it's not obviously wrong (to me, anyway), it's speculation. I can thus say that extended Chinese real economic stagnation for the next 10 years is educated speculation, while imminent Chinese economic collapse is ultra-BS. 

If you don't know, you cannot justify a policy of preemptive nuclear war over AI.  That's kinda my point.  I'm not even trying to say, object level, whether or not ASI actually will be a threat humans need to be willing to go to nuclear war over.  I am saying the evidence right now does not support that conclusion.  (it doesn't support the conclusion that ASI is safe either, but it doesn't justify the most extreme policy action)

So, this is where I withdraw and acknowledge my limits. I don't believe I have read enough of the ASI literature to fully understand this point, so I'm not too comfortable offering any object-level predictions or narrative assessments. I can agree that many ASI arguments follow the same narrative format as ultra-BS, and there are likely many bad ASI arguments which can be revealed as wrong through careful (or even cursory) research. However, I'm not sufficiently educated on the subject to actually evaluate the narrative, hence the unsatisfactory response of 'I'm not sure, sorry'. 

However, if your understanding of ASI is correct, and there indeed is insufficient provable evidence, then yes, I can agree ASI policies cannot be argued for with provable evidence. Note again, however, that this would essentially be me taking your word for everything, which I'm not comfortable doing. 

Currently, my priors on ASI ruin are limited, and I'll likely need to do more specific research on the topic. 

[anonymous]

However, if your understanding of ASI is correct, and there indeed is insufficient provable evidence, then yes, I can agree ASI policies cannot be argued for with provable evidence. Note again, however, that this would essentially be me taking your word for everything, which I'm not comfortable doing. 

So in this particular scenario, those concerned about ASI doom aren't asking for a small or reasonable policy action proportional to today's uncertainty.  They are asking for AI pauses and preemptive nuclear war.  

Pause: https://futureoflife.org/open-letter/pause-giant-ai-experiments/  

Nuclear war: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

  1. AI pauses will cost an enormous amount of money, some of which is tax revenue (see the r/dataisbeautiful post "[OC] NVIDIA Income Statement Q3 2023").
  2. Preemptive nuclear war is potential suicide. It's asking a country to risk the deaths of approximately 50% of its population in the near term, and to lose all its supply chains, turning it into a broken third-world country separated by radioactive craters at all the transit and food supply hubs, which would likely kill a large fraction of its remaining citizens.

To justify (1) you would need some level of evidence that the threat exists. To justify (2) I would expect you would need evidence beyond a shadow of a doubt that the threat exists.

 

So for (1), convincing evidence might be that a hostile weak ASI needs to exist in the lab before the threat can be claimed to be real. For (2), researchers would need to have produced a strong ASI in an isolated lab, demonstrated that it was hostile, and tried thousands of times to make a safe ASI with a 100% failure rate.


I think we could argue about the exact level of evidence needed, or briefly establish plausible ways that (1) and (2) could fail to show a threat, but in general I would say the onus is on AI doom advocates to prove the threat is real, not on advocates for "business as usual" technology development to prove it is not. I think this last part is the dark arts scam: that, and other hidden assumptions that get treated as certainty. (A lot of the hidden assumptions are in the technical details of how an ASI is assumed to work by someone with less detailed technical knowledge, vs. the way actual ML systems work today.)

Another part of the scam is calling all of this "rational". If your evidence on any topic is uncertain, and you can't prove your point, certainty is unjustified, and it's not a valid "agree to disagree" opinion. See: https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem.  

So with this said, it seems like all I would need to do is show with a citation that ASI doesn't exist yet, and show with a citation a reason, any reason at all, that could plausibly mean ASI is unable to be a threat. I don't have to prove the reason is anything but plausible.

It does bother me that my proposal for proving ASI might not be a threat is suspiciously similar to how tobacco companies delayed any action to ban cigarettes essentially forever, starting with shoddy science to show that maybe cigarettes weren't the reason people were dying. Or how fossil fuel advocates have pulled the same scam, amplifying any doubts over climate change and thus delaying meaningful action for decades. (Meaningful action is to research alternatives, which did succeed, but also to price carbon, which https://www.barrons.com/articles/europe-carbon-tax-emissions-climate-policy-1653e360 doesn't even start until 2026, 50 years after the discovery of climate change.)

These historical examples lead to a conclusion as well; I will see if you realize what this means for AI.

Thanks for the update! I think this is probably something important to take into consideration when evaluating ASI arguments. 

That said, I think we're starting to stray from the original topic of the Dark Arts, as we're focusing more on ASI specifically rather than the Dark Arts element of it. In the interest of keeping the discussion on this post focused, would you agree to continuing the AGI discussion in private messages? 

[-][anonymous]20

Sure. Feel free to PM.

And I was trying to focus on the dark arts part of the arguments. Note I don't make any arguments about ASI in the above; I just state that fairly weak evidence should be needed to justify not doing anything drastic about it at this time, because the drastic actions have high measurable costs. It's not provable at present to state that "ASI could find a way to take over the planet with limited resources", because we don't have an ASI or know the intelligence ROI on a given amount of flops, but it is provable to state that "an AI pause of 6 months would cost tens of billions, possibly hundreds of billions of dollars and would reduce the relative power of the pausing countries internationally". It's also provable to state the damage of a nuclear exchange.

Look at how it's been voted down to -10 on agreement: others feel very strongly about this issue.

Do you mind providing examples of what categories and indicators you use?

I can try to provide examples. The indicators might be too vague for the examples to help much, though!

A few weeks ago I met a fellow who seems to hail from old-guard atheism. Turn-of-the-century "Down with religion!" type of stuff. He was leading a philosophy discussion group I was checking out. At some point he said something (I don't remember what) that made me think he didn't understand what Vervaeke calls "the meaning crisis". So I brought it up. He started going into a kind of pressured debate mode that I intuitively recognized from back when I swam in activist atheism circles. I had a hard time pinning down the moves he was doing, but I could tell I felt a kind of pressure, like I was being socially & logically pulled into a boxing ring. I realized after a few beats that he must have interpreted what I was saying as an assertion that God (as he thought others thought of God) is real. I still don't know what rhetorical tricks he was doing, and I doubt any of them were conscious on his part, but I could tell that something screwy was going on because of the way interacting with him became tense and how others around us got uneasy and shifted how they were conversing. (Some wanted to engage & help the logic, some wanted to change the subject.)

Another example: Around a week ago I bumped into a strange character who runs a strange bookstore. A type of strange that I see as being common between Vassar and Ziz and Crowley, if that gives you a flavor. He was clearly on his way out the door, but as he headed out he directed some of his… attention-stuff… at me. I'm still not sure what exactly he was doing. On the surface it looked normal: he handed me a pamphlet with some of the info about their new brick-and-mortar store, along with their online store's details. But there was something he was doing that was obviously about… keeping me off-balance. I think it was a general social thing he does: I watched him do it with the young man who was clearly a friend to him and who was tending the store. A part of me was fascinated. But another part of me was throwing up alarm bells. It felt like some kind of unknown frame manipulation. I couldn't point at exactly how I was being affected, but I knew that I was, because my inner feet felt less firmly on inner ground in a way that was some kind of strategic.

More blatantly, the way that streetside preachers used to find a corner on college campuses and use a loudspeaker to spout off fundamentalist literalist Christianity memes. It's obvious to me now that the memetic strategy here isn't "You hear my ideas and then agree." It's somehow related to the way that it spurs debate. Back in my grad school days, I'd see clusters of undergrads surrounding these preachers and trying to argue with them, both sides engaging in predetermined patter. It was quite strange. I could feel the pull to argue with the preacher myself! But why? It has a snare trap feeling to it. I don't understand the exact mechanism. I might be able to come up with a just-so story. But looking back it's obvious that there's a being-sucked-in feeling that's somehow part of the memetic strategy. It's built into the rhetoric. So a first-line immune response is "Nope." Even though I have little idea what it is that I'm noping out of. Just its vibe.

I don't think all (any?) of these fall under what you're calling "ultra-BS". That's kind of my point: I think my rhetoric detector is tracking vibes more than techniques, and you're naming a technique category. Something like that.

I think this part stands alone, so I'll reply to the rest separately.

Oooh, I think I can classify some of this! 

A few weeks ago I met a fellow who seems to hail from old-guard atheism. Turn-of-the-century "Down with religion!" type of stuff. He was leading a philosophy discussion group I was checking out. At some point he said something (I don't remember what) that made me think he didn't understand what Vervaeke calls "the meaning crisis". So I brought it up. He started going into a kind of pressured debate mode that I intuitively recognized from back when I swam in activist atheism circles. I had a hard time pinning down the moves he was doing, but I could tell I felt a kind of pressure, like I was being socially & logically pulled into a boxing ring. I realized after a few beats that he must have interpreted what I was saying as an assertion that God (as he thought others thought of God) is real. I still don't know what rhetorical tricks he was doing, and I doubt any of them were conscious on his part, but I could tell that something screwy was going on because of the way interacting with him became tense and how others around us got uneasy and shifted how they were conversing. (Some wanted to engage & help the logic, some wanted to change the subject.)

So, about this: I think this is a typical case of status-game-esque 'social cognition'. If membership in a certain group is a big part of your identity, the group can't be wrong. (Imagine you're a devout churchgoer, and someone suggests your priest may be one of many pedophiles.) There's an instinctive reaction of 'well, church is a big part of my life, and makes me feel like a full, happy person, very good vibes... unlike pedophilia', so they snap to defending their local priest. You may see the 'happens in other places but not here' defense. Social cognition isn't full proof that dark arts happened, but it usually is a good indicator (since by nature it tends to be irrational). In this case it's an atheist, who bases their status on being an atheist, feeling that their personal beliefs/worth are being attacked, and responding as a result. I'd read up on Will Storr's The Status Game if you're interested. 

Another example: Around a week ago I bumped into a strange character who runs a strange bookstore. A type of strange that I see as being common between Vassar and Ziz and Crowley, if that gives you a flavor. He was clearly on his way out the door, but as he headed out he directed some of his… attention-stuff… at me. I'm still not sure what exactly he was doing. On the surface it looked normal: he handed me a pamphlet with some of the info about their new brick-and-mortar store, along with their online store's details. But there was something he was doing that was obviously about… keeping me off-balance. I think it was a general social thing he does: I watched him do it with the young man who was clearly a friend to him and who was tending the store. A part of me was fascinated. But another part of me was throwing up alarm bells. It felt like some kind of unknown frame manipulation. I couldn't point at exactly how I was being affected, but I knew that I was, because my inner feet felt less firmly on inner ground in a way that was some kind of strategic.

I think I can understand in general terms what might've happened. There are a lot of ways to 'suggest' something without verbally saying it. Think of an advertisement featuring a pretty girl with the product (look at you, so fat and ugly, don't you want to be more like us?). It's not explicit, of course; that's the point. It's meant to take the peripheral rather than the central route of persuasion. 

For a more 'human' example, I might think of a negotiator seating their rival in front of the curtains while the sun is shining through to disorient them, or a parent asking one sibling to do something after having just yelled at another. In all cases there's a hidden message of sorts, which can at times be difficult to put into words but is usually felt as a vibe. I have difficulty describing it myself. 

I think one I can describe might be the sandwich example (though this isn't something I've seen in my own life). You have something important to talk about with someone, and they're maintaining eye contact and 'paying attention', but they're also nibbling on a sandwich and enjoying themselves (indirect communication: this is not too big of an issue). Or maybe they put the sandwich down, occasionally check their watch, and glance at their half-eaten sandwich (why are you making me wait? can't you see I'm hungry and busy?). 

I obviously can't say what exactly they did. But I think vibe-wise the effect was similar to some of the techniques I illustrated above. They did something, it wasn't apparent what, for a desired effect. I'll call these peripheral techniques of communication (as opposed to central ones). 

I think the preacher example is similar. (implicit message: I'm attacking you, your tribal groups, your status, and offering you some free status right now for beating me in front of your friends. Why don't you come give it a try?) What specific technique they used, I'm not sure, but I think it had the effect of communicating an implicit message (thus the reaction). 

And yes, you're right, none of these are 'ultra-BS'; I consider them different techniques with a different purpose. I do think they are techniques, though, and someone familiar with them can recognize them. 

Yep, I think you're basically right on all counts. Maybe a little off with the atheist fellow, but only because of context I didn't think to share until reading your analysis, and what you said is close enough!

It's funny, I'm pretty familiar with this level of analysis, but I still notice myself thinking a little differently about the bookstore guy in light of what you've said here. I know people do the unbalancing thing you're talking about. (Heck, I used to quite a lot! And probably still do in ways I haven't learned to notice. Charisma is a hell of a drug when you're chronically nervous!) But I didn't think to think of it in these terms. Now I'm reflecting on the incident and noticing "Oh, yeah, okay, I can pinpoint a bunch of tiny details when I think of it this way."

The fact that I couldn't tell whether any of these were "ultra-BS" is more the central point to me.

If I could trouble you to name it: Is there a more everyday kind of example of ultra-BS? Not in debate or politics?

It's funny, I'm pretty familiar with this level of analysis, but I still notice myself thinking a little differently about the bookstore guy in light of what you've said here. I know people do the unbalancing thing you're talking about. (Heck, I used to quite a lot! And probably still do in ways I haven't learned to notice. Charisma is a hell of a drug when you're chronically nervous!) But I didn't think to think of it in these terms. Now I'm reflecting on the incident and noticing "Oh, yeah, okay, I can pinpoint a bunch of tiny details when I think of it this way."

Glad you appreciated my analysis!

The fact that I couldn't tell whether any of these were "ultra-BS" is more the central point to me.

Hm... I think we may have miscommunicated somewhere. From what I understand at least, what you saw was distinctly not 'ultra-BS' as I envision it. 

In persuasion, students of rhetoric generally distinguish two routes of persuasion: 'central' and 'peripheral'. Whereas central route persuasion focuses more on overt appeals to logic, peripheral route persuasion focuses more on other factors. Consider, for instance, the difference between an advertisement extolling the nutritional benefits of a drink and an ad for the same company showing a half-naked girl sampling it. Both aim to 'convince' the consumer to buy the product, but one employs a much different strategy than the other. 

More generally, central route persuasion is explicit. We want to convince you of 'X'; here are the arguments for 'X'. The drink is nutritious and good for your health; you should Buy the Drink. Peripheral route persuasion is more implicit, though at times it's hardly subtle. This pretty and sexually appealing girl loves this drink, why don't you? Doesn't evolution make you predisposed to trust pretty people? Wouldn't you want to be more like them? Buy the drink. 

I consider ultra-BS a primarily 'central route' argument, as the practitioner uses explicit reasoning to support explicit narrative arguments. It's often ill-intentioned, sure, and clearly motivated, intellectually dishonest reasoning, but that's beside the point. It still falls under the category of 'central route' arguments. 

Putting someone off balance, on the other hand, is more 'peripheral route' persuasion. There's far more emphasis on the implicit messaging. You don't know what you're doing, do you? Trust me instead, come on.

In the case of your atheist friend, it's not really possible to tell what persuasion technique he used, because it wasn't really clear. But the indicators you received were accurate, because under those conditions he would be incentivized to use dishonest techniques like ultra-BS. That's not to say, however, that he did use ultra-BS!

In that sense, I think I might conclude that your implicit primers and vibes are very good at detecting implicit persuasion, which typically but not always has a correlation with dark artsy techniques. The Dark Arts often rely on implicit messaging, because if the message were explicit (as with sexual advertising techniques) it would be, well... ridiculous. ('So I should buy your product just because one pretty person drank it? What kinda logic is that?') 

However, 'ultra-BS' is an explicit technique, which is why I believe your typical indicators failed. You saw the indicators you're used to associating with 'honest discussion': evidence, a coherent narrative, and good presentation skills. In an interpersonal setting, those indicators likely would've been sufficient. Not so in politics. 

That said...

If I could trouble you to name it: Is there a more everyday kind of example of ultra-BS? Not in debate or politics?

This is a bit hard, since 'ultra-BS' is a technique designed for the environment of politics by a special kind of dishonest person. Regular people tend to be intellectually honest. You won't see them support a policy one moment and oppose it the same evening. You also don't see them wielding sophisticated evidence and proofs in daily discussion, which is why we see 'ultra-BS' far less often in everyday life. If someone is pulling out evidence at all, chances are they've already 'won' the argument. Regular people also tend to have far less stake/interest in their political positions, unlike, say, debaters or politicians. The incentives and structure of the format are different.

The most similar example I can think of off the top of my head is a spat between domestic partners. Say, Alice and Bob. 

Alice: You never take out the trash (evidence), look after the kids (evidence), or say you care about me (evidence). And now you've forgotten about our anniversary? (evidence) How dare you?? Do you really care about me? (narrative: Bob doesn't care about Alice) 

But then, this isn't a perfect fit for ultra-BS, since 1) Alice isn't necessarily aware she's overgeneralizing, 2) Alice doesn't care about the specific examples she uses; she's just as likely responding to a 'vibe' of laziness or lack of care from her partner, and 3) the evidence is, well... not very sophisticated. 

But in general, I guess it's similar in that Alice is supporting a dubious narrative with credible evidence (a pretty general summary of 'ultra-BS'). Sure, Bob did do all these things, and he probably cares for Alice in other ways which she isn't acknowledging (or who knows, maybe he really doesn't care about Alice).  

Is this example satisfying? 

Thanks for the response in any case, I really enjoy these discussions! Would you like to do a dialogue sometime? 

I consider ultra-BS a primarily 'central route' argument, as the practitioner uses explicit reasoning to support explicit narrative arguments. […]

Putting someone off balance, on the other hand, is more 'peripheral route' persuasion. There's far more emphasis on the implicit messaging.

Ah! This distinction helped clarify a fair bit for me. Thank you!

 

…I think I might conclude that your implicit primers and vibes are very good at detecting implicit persuasion, which typically but not always has a correlation with dark artsy techniques.

I agree on all counts here. I think I dumped most of my DADA skill points into implicit detection. And yes, the vibes thing isn't a perfect correlation to Dark stuff; I totally agree.

 

Is this example satisfying?

It's definitely helpful! The category still isn't crisp in my mind, but it's a lot clearer. Thank you!

 

Thanks for the response in any case, I really enjoy these discussions! Would you like to do a dialogue sometime? 

I've really enjoyed this exchange too. Thank you!

And sure, I'd be up for a dialogue sometime. I don't have a good intuition for what kind of thing goes well in dialogues yet, so maybe take the lead if & when you feel inspired to invite me into one?

Glad you enjoyed! 

Let me send a PM regarding a dialogue... 

The unspoken but implicit argument is that Russia doesn't need a reason to nuke us. If we give them the Arctic there's no question, we will get nuked.

Ah, interesting, I didn't read that assumption into it. I read it as "The power balance will have changed, which will make Russia's international bargaining position way stronger because now it has a credible threat against mainland USA."

I see the thing you're pointing out as implicit though. Like an appeal to raw animal fear.

 

For a successful nuclear first strike to be performed Russia must locate all of our military assets (plus likely those of our NATO allies as well), take them all out at once, all while the CIA somehow never gets wind of a plan.

That makes a lot of sense. I didn't know about the distributed and secret nature of our nuclear capabilities… but it's kind of obvious that that's how it'd be set up, now that you say so. Thank you for spelling this out.

 

Reactions like yours are thus part of what I was counting on when making the argument. It works because in general I can count on people not having prior knowledge. (don't worry, you're not alone)

Makes sense!

And I wasn't worried. I'm actually not concerned about sounding like (or being!) an idiot. I'm just me, and I have the questions I do! But thank you for the kindness in your note here.

 

It also seems rather incongruous with most people's model of the world […]. Suppose Russia was prepared to nuke the US, and had a credible first-strike capability. Why isn't Uncle Sam rushing to defend his security interests? Why haven't pundits and politicians sounded the alarm? Why have there been no diplomatic incidents? A second Cuban missile crisis? A Russian nuclear attack somewhere else?

I gotta admit, my faith in the whole system is pretty low on axes like this. The collective response to Covid was idiotic. I could imagine the system doing some stupid things simply because it's too gummed up and geriatric to do better.

That's not my main guess about what's happening here. I honestly just didn't think through this level of thing when I first read your Arctic argument from the debate. But collective ineptitude is plausible enough to me that the things you're pointing out here just don't land as damning.

But they definitely are points against. Thank you for pointing them out!

 

I hope that answers your question! Is everything clear now?

For this instance, yes!

There's some kind of generalization that hasn't happened for me yet. I'm not sure what to ask exactly. I think this whole topic (RE what you're saying about Dark Arts) is bumping into a weak spot in my mind that I wasn't aware was weak. I'll need to watch it & observe other examples & let it settle in.

But for this case: yes, much clearer!

Thank you for taking the time to spell all this out!

Of course. Glad you enjoyed! 

I think that part of it is probably you not having much experience with debate or debate-adjacent fields (quite understandable, given how toxic it's become). It took me some lived experience to recognize it, after all. 

If you want to see it at work, I recommend just tuning in to any politician during a debate. I think you'll start recognizing stuff pretty quickly. I wish you happy hunting in any case. 

I think I'm missing something obvious, or I'm missing some information. Why is this clearly ridiculous?

Nuclear triad aside, there's the fact that the Arctic is more than 1,000 miles away from the nearest US land (about 1,700 miles away from Montana, 3,000 miles away from Texas), and that Siberia is already roughly as close.

And of course, there's the fact that the Arctic is made of, well, ice, which melts more and more as the climate warms, and is thus not the best place to build a missile base on.

Even without familiarity with nuclear politics, the distance part can be checked in less than 2 minutes on Google Maps; if you have access to an internet connection and judges who penalize blatant falsehoods like "they can hit us from the Arctic", you can absolutely wreck your adversary with some quick checking.
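(As an illustrative aside, here is a minimal sketch of what such a quick sanity check could look like in code, using the haversine great-circle formula. The coordinates are rough, hypothetical picks for the North Pole, central Montana, and the two sides of the Bering Strait; they are not figures taken from this discussion, and the exact distances depend on which points you choose.)

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in statute miles between two points."""
    r = 3958.8  # Earth's mean radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# North Pole to roughly central Montana: well over 1,000 miles.
print(round(haversine_miles(90.0, 0.0, 47.0, -110.0)), "miles")

# Bering Strait, Cape Prince of Wales (Alaska) to Cape Dezhnev (Russia),
# approximate coordinates: on the order of 55 miles.
print(round(haversine_miles(65.64, -168.09, 66.08, -169.65)), "miles")
```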

Of course, in a lot of debate formats you're not allowed the two minutes it would take to do a Google Maps check.

Nuclear triad aside, there's the fact that the Arctic is more than 1,000 miles away from the nearest US land (about 1,700 miles away from Montana, 3,000 miles away from Texas), and that Siberia is already roughly as close.

Well, there’s Alaska, but yes, part of Russia is only ~55 miles away from Alaska, so the overall point stands that Russia having a greater presence in the Arctic doesn't change things very much.

And of course, there's the fact that the Arctic is made of, well, ice, which melts more and more as the climate warms, and is thus not the best place to build a missile base on.

That’s not what is being proposed; the proposal is to build more bases in ports on land where the water doesn’t freeze as much because of climate change.


 

Thanks for the addition! I actually didn't consider this, and neither did my opponents. 

I think this is an important addition to the site. There had been articles before about the "dark" side/arts, but I think this is the first one where the examples are not thought experiments and abstractions, but actual real world experience from an actual user.

It is helpful for understanding politics.

Glad you enjoyed! Now that you mention it, I think I might make a continuation post sometime. Would you mind giving me a few ideas on what sort of dark-artsy techniques I should cover, or what you're curious about in general?

I think something along the lines of "Defense Against the Dark Arts" with actionable steps on recognizing and defusing them (and how to practice these) would be great. If you feel like you have the energy and time, more articles on offensive usage (practice) and on theoretical background (how to connect your practical experience to existing LW concepts/memes) would be also nice. But I think the first one (defense) would be the most useful for LW readers.

Mhm! Unsure if you saw, but I made a post.

Defense Against The Dark Arts: An Introduction — LessWrong

Could I have your thoughts on this? 

Thanks for pinging me. Haven't noticed it yet, will read it now.

The problem with facts speaking for themselves is that they rarely do; on the contrary, they are frequently cryptic. That being so, perhaps the next-best thing is that people should be familiar with the dark arts of rhetoric, so that they are better positioned to recognize and respond to their use. The measure of credibility is only saying things that stand up to scrutiny.

Hm... I'm not too sure how much I agree with this; could you give an example of what you mean?

In my experience, while uncritical reading of evidence rarely produces a result (uncritical reading in general rarely does), close examination of facts usually leads to support for certain broad narratives. I might bring up the examples of flat earth and vaccines. While some people don't believe in the consensus, I think they are the exception that proves the rule. By and large, people are able to understand basic scientific evidence, or so I think.

Do you believe otherwise? 

I’ve become increasingly aware of “ultra-BS” in a lot of public discourse. In relying on arguments that are constructed to be easy to make but hard to rebut, your ultra-BS is a specific example of the more general point that it is easier to spread s*** than to clean it up. It’s impossible to defend against someone who can pull new ad hoc “facts” and arguments out of their … eh, thin air… at will.

Consequently, I’ve become envious of the judicial practice of discovery and of sharing with your opponents all the evidence your case will rely on before the argumentation even begins. It’s hardly a perfect system, but in theory at least, both sides can familiarize themselves with the relevant facts, and evidence not introduced according to the rules can be dismissed as a matter of procedure. Like a formal version of Hitchens’ Razor.

Outside of the courtrooms, however, official attempts at formalizing our collective knowledge and getting it on record – as, for example, in UN climate reports or official nutrition guidelines – are routinely attacked, politicized and questioned, seemingly without ever being resolved.

Do you think there are analogous things we can do in policy discussions and public discourse to more quickly evaluate credibility (“admissibility”) of evidence and arguments – and quickly and efficiently highlight any lack thereof?

Hello, and thank you for the comment! 

So, regarding policy discussions and public discourse, I think you can roughly group the discussion pools into two categories: public discussions and expert-level discussions.

While the experts certainly aren't perfect, I'd contend that in general you find much greater consensus on higher-level issues. There may be, for example, disputes on when climate change becomes irreversible, to what extent humans would be impacted, or how best to go about solving the larger problem. But you will rarely (if ever) find a climate scientist claim that climate change is a hoax engineered by the government. In this regard, I don't think evidence standards are the issue; it's more about communication to the general public, and being able to garner credibility.

Public discourse, on the contrary, is basically just chaos. Partly because the 'thinkers' in public space (think media pundits, youtubers, twitter warriors) tend to be motivated reasoners selling sensationalist nonsense, and partly because public discourse doesn't sanction you for nonsense. (you can ban the trolls and the bots, but they still keep on coming, and of course there's no punishment for non-experts making wild claims)

In this regard I'm also feeling a bit helpless. I know it sounds rather bad, but in my personal opinion, accepting the expert consensus tends to be the generally favorable strategy for the public. Implementing mechanisms like watchdogs, whistleblowers, and vetting for the experts helps the public trust expert consensus, but I think by and large you can't really expect public discourse to reach better conclusions with consistency, not independently anyway.

There is no 'unified' public forum for argument. Rather, there are millions of private and semi-public spaces, forums and subreddits, varying echo chambers, etc. I'm still uncertain if I've ever found a truly genuine public space, as opposed to a larger subcommunity holding certain viewpoints. Trying to control it all seems to be an exercise in futility.

If you are just trying to create a place where discussion can happen, however, I think it's far easier. To the best of your ability, adopt stringent evidence standards, and try to ensure all parties involved are acting in good faith (or, that not being possible, try to always assume good faith and punish bad faith harshly).

Granted, these are just a few examples off the top of my head, and they probably aren't the best. (I'm a bit stumped on the issue myself; it feels exhausting.) Do you have any ideas? I'd love to hear them.

Thanks. That all makes sense.

I don’t really have any good ideas. As such it’s actually a bit comforting to hear I’m not alone in that. I’m not entirely pessimistic, however; it just means I can’t think of any quick fixes or short cuts. I think it’s going to take a lot of work to change the culture, and places like Lesswrong are good starting points for that.

For example, I agree that it’s probably best if we can make it okay for the public to trust experts and institutions again. However, some experts and institutions have made that really hard. And so different institutions need to put in some work and put in place routines – in many cases significant reforms – to earn back trust.

And in order to trust them, the general public needs to learn to change their idea of trust so that it makes allowances for Hanlon’s Razor (or rather Douglas Hubbard’s corollary: “Never attribute to malice or stupidity that which can be explained by moderately rational individuals following incentives in a complex system.”) I get disheartened when I see the media and its consumers act all outraged and seemingly very surprised by people being people, with flaws.

A bit ironically, considering we’re living through an age with an abundance of communication and information: Alongside institutional reforms, I think there’s a need for some really good, influential communication (infotainment?) that can reach deep into public attitudes – beyond just college-educated elites and aspirants – and give people new, helpful perspectives. Something that can help create a common language and understanding around concepts like epistemology, public trust and verification, in much the same way the movie The Matrix gave everyone a way to think and talk about the Cartesian mind-body split without using those words (but sans the dystopian, conspiratorial, darkly revolutionary undercurrent, please). Most things I come across that seem to aspire to something like that today are typically overtly moralizing, quite condescending, overly left-leaning, and plain dumb.

But, yeah, the upshot is that I think it’s going to take a lot of hard work to change the culture to something healthier.

Mhm, yes

I think society has a long way to go before we reach workable consensus on important issues again. 

That said, while I don't have an eye on solutions, I do believe I can elaborate a bit on what caused the problem, in ways I don't usually see discussed in public discourse. But that's a separate topic for a separate post, in my view. I'm completely open to continuing this conversation within private messages if you like though. 

simon

It seems to me ultra-BS is perhaps continuous with hyping up one particular way that reality might in fact be, in a way that is disproportionate to your actual probability, and that is also continuous with emphasizing a way that reality might in fact be which is actually proportionate with your subjective probability.

About public belief: I think that people do tend to pick up at least vaguely on what the words they encounter are optimized for, and if you have the facts on your side but optimize for the recipient's belief, you do not have much advantage over someone optimizing for the opposite belief if the facts are too complicated. Well, actually, I'm not so confident about that, but I am confident about this: if you optimize for tribal signifiers - for appearing to be supporting the correct side to others on "your" side - then you severely torpedo your credibility re: convincing the other side. And I do think that that tends to occur whenever something gets controversial.

It seems to me ultra-BS is perhaps continuous with hyping up one particular way that reality might in fact be, in a way that is disproportionate to your actual probability, and that is also continuous with emphasizing a way that reality might in fact be which is actually proportionate with your subjective probability.

Yep! I think this is a pretty good summary. You want to understand reality just enough that you can say things that sound plausible (and are in line with your reasoning), but omit just enough factual information that your case isn't undermined.

I once read a post (I forget where) arguing that an amateur historian can convince an uneducated onlooker of any historical argument simply because history is so full of empirical examples. Whatever argument you're making, you can almost always find at least one example supporting your claim. Whether the rest of history contradicts your point is irrelevant, as the uneducated onlooker doesn't know history. Same principle here. Finding plausible points to support an implausible argument is almost trivially easy.

About public belief: I think that people do tend to pick up at least vaguely on what the words they encounter are optimized for, and if you have the facts on your side but optimize for the recipient's belief, you do not have much advantage over someone optimizing for the opposite belief if the facts are too complicated. Well, actually, I'm not so confident about that, but I am confident about this: if you optimize for tribal signifiers - for appearing to be supporting the correct side to others on "your" side - then you severely torpedo your credibility re: convincing the other side. And I do think that that tends to occur whenever something gets controversial.

Yeah, I definitely agree. At some point you reach a hard limit on how much an uneducated onlooker is able to understand. They may have a vague idea, but your guess is as good as mine in terms of what that looks like. If the onlooker can't tell which of two experts to believe, they'll have even more trouble with two people spouting BS. (if the judges were perfect Bayesian reasoners, you should expect them to do the logical equivalent of ignoring everything my opponent and I say, since we're likely both wrong in every way that matters). Thus, they mostly default to tribal signals, and, that not being possible, to whichever side appears more confident/convincing.

It's not really possible to argue against tribal signals, because at that point logic flies out the window and what matters is whether you're on someone's 'side', whatever that means. It's why you don't usually see tribal appeals in debate (unless you're me, and prep 2 separate cases for the 2 main tribes). 

As a former public forum and college debater and current debate coach, I am not sure that I completely agree with this analysis. While this may work with highly uneducated opponents or parents (lay judges), as I write this, I am watching a round between two teams in the top 16 at a relatively high-level tournament, judged by a panel of 3 judges - all of whom are former debaters or current college debaters. In this round, both teams have been relentlessly preparing for this topic for tens of hours per week and have full access to each other's evidence. Lying or misconstruing evidence is grounds for disqualification and an immediate loss in the round. Perhaps your experience in debate is predominantly on more lay circuits, where opponents are inexperienced and judges are unqualified?

Yes. This analysis primarily applies to low-information environments (like the lay circuits I participated in). I would not use this on, for example, the national circuit.

I recently traveled to a rural area of Brazil, where I live, and I watched a politician on television trying to respond to a reporter's claim that there was a public perception that the government hadn't done everything in its power to prevent loss of life and private property during several days of rain that flooded cities and caused major damage.

The politician then explained that the main problem is that people no longer trust the weather forecast, and they also don't trust the public service announcements warning everyone to leave their homes and prepare for heavy rain and a high chance of flooding.

It is a provable fact that many public workers are involved in corruption and/or incompetence and bad decisions, and it is also true that there are a lot of people in rural areas who actively spread distrust of public services and the weather forecast. But the fact that this politician pulled that card on television means that we're at another level of post-truth, because now there's just no way to determine what, and whose, mistake is ultimately responsible for the damage that was done and that could have been avoided. It's probably at the point where there's no way to know whether any damage could have been avoided at all, or how much damage did get avoided.

One would think everyone would unite their efforts to study climate patterns and make good use of that information for urban planning and natural disaster response, but the reality is that all of that is irrelevant, and what we are really trying to do is have our voices heard more than others and gain economic advantage (strictly as a means to have more power than others, or the feeling of more power). And while it would be sane and healthy to admit that and deal with that phenomenon, we also need to throw all that bad attitude onto the ones we can blame for the consequences of our actions. Because I don't want to deal with the part where I'd feel I'm doing something wrong, I skip the whole blame part and just put it on someone else.

This seems quite similar to the "Gish gallop" rhetorical technique.

Yep! It's very similar. The weakness it exploits (lack of time to properly formulate a response) is the same, but the main difference is that your avenue of attack is a believable narrative rather than multiple pieces of likely false information the judge can't understand either. (it's why I prefer ultra-BS, as opposed to a flood of regular BS). 

eiiot

Question from a fellow debater (I do parli, although I know a few PF debaters): how often does this sort of "ultra-BS" devolve into tech / kritiks / theory in PF? While most of the points here seemed (at least on the surface) true, I've also seen judges vote for teams on absolutely horrible arguments simply because they didn't flow a response from the other team.

Mhm? Right, in my personal opinion I don't consider kritiks/theory to be ultra-BS. This is mainly because ultra-BS is intuitive narrative framing, and usually not too complicated (the idea is to sound right, and avoid the trouble of actually having to explain yourself properly). Kritiks/theory are the opposite, if that makes sense. They're highly technical arguments that don't make sense outside of debate-specific settings, which most lay judges simply won't understand. In my experience it's almost never a good idea to run them unless you're with a tech or a flow judge (and even then, a good chunk of flow judges don't like them either).

That said, yes, judges do often vote for horrible arguments, or for whomever speaks better, irrespective of argument content, so I'd be careful labeling something 'ultra-BS'. Sometimes a bad judge is a bad judge, there's nothing you can do there. 

A lot of this piece is unique to high school debate formats. In the college context, every judge is themself a current or previous debater, so some of these tricks don't work. (There are of course still times when optimizing for competitive success distracts from truth-seeking.)

Hm? Is it? Feel free to correct me if I'm wrong, but in my experience flow judges (who tend to be debaters) tend to grade more on the quality of the arguments as opposed to the quality of the evidence. If your good argument draws a sound rebuttal it doesn't score, but if your bad argument goes unrebutted it's still points in your favor.

Is it different in college? 

mikbp

Very interesting!

Obvious question: who wins when the debate is ultra-BS vs. ultra-BS? Does the duel then come back down to rhetoric?

Glad you enjoyed!

So, I know this sounds like a bit of a cop-out, but hear me out. The better debater usually wins the debate, irrespective of techniques.

There's a lot that goes into a debate. There's how well you synergize with your partner, how confident you sound, how much research you've prepared, how strong of a writer you are... etc. There are times where a good constructive speech can end the debate before your opponent even starts talking, and other times where adamant refusal to accept the facts can convince the judge you're right. There's also sheer dumb luck. (Did the judge pay attention?)

I think of it as a lot like poker, in that regard. Ultra-BS is one of many techniques you'd use, like a poker face. It's not a silver bullet or a free win though (as powerful as it is). Some of our rounds were very close. 

If two people both have a poker face, who wins? 

Well... I can't say for sure, but I'd conclude neither side has an advantage over the other. (unless, of course, one person knows the technique better!) 


Good post, thank you. I imagine to go undefeated, you must excel at things beyond the dark arts described (in my experience, some judges refuse to buy an argument no matter how poorly opponents respond)? How much of your success do you attribute to 1) your general rhetorical skills or eloquence, and 2) ability to read judges to gauge which dark arts they seem most susceptible to?

Mhm, yes! Of course. 

So, this may seem surprising, but I'd consider Dark Arts to be a negligible part of me being undefeated. At least, in the sense that I could've easily used legitimate arguments and rebuttals instead to the same effect. 

As you might already know, lay judges tend to judge far more based off speaking skill, confidence, body language, and factors other than the actual content of the argument. In that sense being the better debater usually gets you a win, regardless of the content of your argument, since the judge can't follow anything except for the most apparent 'wins' and 'losses' on either side. All else being equal (and in debate, it usually is, since debaters usually steal good arguments until everyone is running similar cases) we should expect the better debater to win. 

So why use the Dark Arts? Well... it may sound a little disappointing, but really, it's just laziness. Neither my partner nor I wanted to go through the trouble of researching a good case. I had college apps, among other things, and he had his own commitments. The ability to BS my way out of an impossible situation thus lets me skimp on prep time in favor of arguing on the fly. Did this make me a 'better' debater? Kind of, in the sense that I can do far more with a far weaker case, but at the same time I'd much rather run a bulletproof case (only, of course, if I didn't have to research it myself). The Dark Arts saved my ass in this situation, since my case was garbage, but if I had known ahead of time that I couldn't use them, I'd have just made a good case instead.

I still think the concept is helpful, which is why I've posted it, but if your goal is to maximize your debate victories rather than minimize time spent prepping, I'd recommend you just do more prep and speaking drills. It tends to pay off. The Dark Arts are not your first choice for consistent, high-level victories.

I feel like the writer went to great effort to re-explain what sophistry is, expressing their own hatred toward the phenomenon while doing it.