All of Lyrongolem's Comments + Replies

Excellent post! I found the StarCraft analogy fairly amusing. Though, I am curious: doesn't your StarCraft analogy solve the issue of trapped priors? 

Like you said, players who played all three factions mostly agree that all factions tend to be roughly similar in difficulty. However, to play all three factions you must arbitrarily start off playing one faction. If such people had their priors completely trapped, then they wouldn't be able to change their mind after the first game, which clearly isn't true. 

I feel like even if two people disagree in theor...

Glad you enjoyed! 

Let me send a PM regarding a dialogue... 

But point 3 was already a counterfactual by your own formulation of it. 

Well, no, it's not, because I am speaking about future events (ie: should we give aid or not), not past events. 

I suppose that if you are convinced that Ukraine is going to win, then a marginal increase in aid is expected to shorten the war, but there is no reason to suspect that proponents of point 3 are referring to marginal adjustments in the amount of help

I'm not. Current battlefield conditions suggest that the war will be a protracted stalemate favoring Russia absen...

I understand how you use the terms, but my point is that Vivek does not in fact demonstrate the information gap you impute to him. I am confident he would be easily able to address your objections.

Ok. Let me address this then. 

The fact that the war has persisted for so long seems sufficient proof that, in the absence of the aid, Ukraine would have quickly surrendered or at worst suffered a quick defeat. In either case, the war would have been shorter. Point 3 is unambiguously correct, and even most people on your side of the issue would agree with that

...
Cornelius Dybdahl
But point 3 was already a counterfactual by your own formulation of it. The claim that giving aid is prolonging the war is implicitly a comparison to the counterfactual in which aid isn't given. I suppose that if you are convinced that Ukraine is going to win, then a marginal increase in aid is expected to shorten the war, but there is no reason to suspect that proponents of point 3 are referring to marginal adjustments in the amount of help, and I think there are limits to how uncharitably you can impute their views before you are the one engaging in dark arts.

From the standpoint of someone like Vivek — or for that matter from the standpoint of someone who understands how present resources can be converted into revenue streams and vice versa — additional donations to the war effort do constitute an intensification of aid, even if the rate of resource transfers remains the same. Supposing for the sake of argument that his analysis is conventionally unqualified, it does not imply that he has insufficient evidence to hold the position he does. A lot of evidence can be gleaned from which geopolitics experts said what, from which ones changed their mind, and the timing of when they did so, etc.

In addition, this being a war of attrition as you pointed out, the key determination to make is who is better situated to win that war of attrition. How many able-bodied, working-age men does Ukraine have left, again?

But by the epistemic standards you have implied, he would need to be a domain expert to hold an opinion, which would leave him strikingly vulnerable to ultra-BS, and more importantly, would cede the whole playing field to technocracy from the get-go. Vivek is part of what could be called the "anti-expert faction".

Yes. This analysis primarily applies to low-information environments (like the lay circuits I participated in). I would not use this on, for example, the national circuit. 

Sort of, but you're missing my main point, which is simply that what Vivek did is not actually dark arts, and that what you are doing is. His arguments, as you summarised them into bullet points, are topical and in good faith. They are at worst erroneous and not an example of bullshitting.

Ah, ok. Allow me a clarification then. 

In typical terms, ultra-BS is lying (as in, you know you are wrong and speak as if you're right anyways). In my view, however, there's also an extension to that. If you are aware that you don't have knowledge on a topic and mak...

Cornelius Dybdahl
I understand how you use the terms, but my point is that Vivek does not in fact demonstrate the information gap you impute to him. I am confident he would be easily able to address your objections.

The fact that the war has persisted for so long seems sufficient proof that, in the absence of the aid, Ukraine would have quickly surrendered or at worst suffered a quick defeat. In either case, the war would have been shorter. Point 3 is unambiguously correct, and even most people on your side of the issue would agree with that (ie. they believe that a large part of the reason Ukraine has been able to fight so long has been the aid).

There are lots of people of the realist school of geopolitics who know a lot about the specific situation in Ukraine and who nevertheless at least claim to believe 2. Are they all liars? I don't think so. I guess you could argue that they are all unreasonable and thus capable of believing it despite contrary evidence, but such a stance is again merely arguing that point 2 is erroneous, not that it is dark arts.

No. Your position was already quite clear from the original post. It's just incorrect, not unclear.

Have you given even a moment's thought to what Vivek might say in response to your objections? I get the impression that you haven't, and that you know essentially nothing about the views of the opposing side on this issue.

Well... yes. It's essentially covered by what I went over. In my view at least, Vivek and I have a narrative disagreement, as opposed to a dispute over a single set or series of facts. In any case, I imagine the points of contest would be

  1. The benefit of Ukraine aid for US foreign policy
  2. The costs imposed on the US 
  3. Moral concerns with
...
Cornelius Dybdahl
Sort of, but you're missing my main point, which is simply that what Vivek did is not actually dark arts, and that what you are doing is. His arguments, as you summarised them into bullet points, are topical and in good faith. They are at worst erroneous and not an example of bullshitting.

You have convinced yourself that if he were to contend with your objections, he'd resort to surface level arguments about battlefield outcomes, pressing domestic concerns, etc., which actually would fall under your category of ultra-bullshit. Ie. you did in fact assume that he does not have substantive arguments in favour of, say, paleoconservative geopolitical principles, and you accuse him of practising dark arts simply on account of the response you assume he would come up with.

Mhm, yes! Of course. 

So, this may seem surprising, but I'd consider Dark Arts to be a negligible part of me being undefeated. At least, in the sense that I could've easily used legitimate arguments and rebuttals instead to the same effect. 

As you might already know, lay judges tend to judge far more based off speaking skill, confidence, body language, and factors other than the actual content of the argument. In that sense being the better debater usually gets you a win, regardless of the content of your argument, since the judge can't follow any...

Hm? Is it? Feel free to correct me if I'm wrong, but in my experience flow judges (who tend to be debaters) tend to grade more on the quality of the arguments as opposed to the quality of the evidence. If you raise a sound rebuttal to a good argument it doesn't score, but if you fail to rebut a bad argument it's still points in your favor. 

Is it different in college? 

Mhm, yes

I think society has a long way to go before we reach workable consensus on important issues again. 

That said, while I don't have an eye on solutions, I do believe I can elaborate a bit on what caused the problem, in ways I don't usually see discussed in public discourse. But that's a separate topic for a separate post, in my view. I'm completely open to continuing this conversation within private messages if you like though. 

Thanks for reading!

After reading this and your dialogue with lsusr, it seems that Dark Arts arguments are logically consistent and that the most effective way to rebut them is not to challenge them directly on the issue.

Not quite. As I point out with my example of 'ultra-BS', much of the Dark Arts as we see in politics is easily rebuttable by specific evidence. It's simply not time efficient in most formats. 

jimmy and madasario in the comments asked for a way to detect stupid arguments. My current answer to that is “take the argument to its logic

...

Thanks for the update! I think this is probably something important to take into consideration when evaluating ASI arguments. 

That said, I think we're starting to stray from the original topic of the Dark Arts, as we're focusing more on ASI specifically rather than the Dark Arts element of it. In the interest of maintaining discussion focus on this post, would you agree to continuing AGI discussion in private messages? 

[anonymous]
Sure. Feel free to PM. And I was trying to focus on the dark arts part of the arguments. Note I don't make any arguments about ASI in the above, just state that fairly weak evidence should be needed to justify not doing anything drastic about it at this time, because the drastic actions have high measurable costs.

It's not provable at present to state that "ASI could find a way to take over the planet with limited resources, because we don't have an ASI or know the intelligence ROI on a given amount of flops", but it is provable to state that "an AI pause of 6 months would cost tens of billions, possibly hundreds of billions of dollars and would reduce the relative power of the pausing countries internationally". It's also provable to state the damage of a nuclear exchange.

Look how it's voted down to -10 on agreement: others feel very strongly about this issue.

It's funny, I'm pretty familiar with this level of analysis, but I still notice myself thinking a little differently about the bookstore guy in light of what you've said here. I know people do the unbalancing thing you're talking about. (Heck, I used to quite a lot! And probably still do in ways I haven't learned to notice. Charisma is a hell of a drug when you're chronically nervous!) But I didn't think to think of it in these terms. Now I'm reflecting on the incident and noticing "Oh, yeah, okay, I can pinpoint a bunch of tiny details when I think of it

...
Valentine
Ah! This distinction helped clarify a fair bit for me. Thank you!

I agree on all accounts here. I think I dumped most of my DADA skill points into implicit detection. And yes, the vibes thing isn't a perfect correlation to Dark stuff, I totally agree.

It's definitely helpful! The category still isn't crisp in my mind, but it's a lot clearer. Thank you!

I've really enjoyed this exchange too. Thank you! And sure, I'd be up for a dialogue sometime. I don't have a good intuition for what kind of thing goes well in dialogues yet, so maybe take the lead if & when you feel inspired to invite me into one?

So I think what you are saying is an ultra-BS argument is one that you know is obviously wrong.

Yep, pretty much. Part of the technique is knowing the ins and outs of our own argument. As I use ultra-BS prominently in debate, I need to be able to rebut the argument when I'm inevitably forced to argue the other side. I thus draw the distinction between ultra-BS and speculation along these lines: if it's not obviously wrong (to me, anyways) it's speculation. I can thus say that extended Chinese real economic stagnation for the next 10 years is educated speculation, while imm...

[anonymous]
So in this particular scenario, those concerned about ASI doom aren't asking for a small or reasonable policy action proportional to today's uncertainty. They are asking for AI pauses and preemptive nuclear war.

Pause: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Nuclear war: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

1. AI pauses will cost an enormous amount of money, some of which is tax revenue.
2. Preemptive nuclear war is potential suicide. It's asking for a country to risk the deaths of approximately 50% of its population in the near term, and to lose all its supply chains, turning it into a broken third world country separated by radioactive craters on all the transit and food supply hubs, which would likely kill a large fraction of its remaining citizens.

To justify (1) you would need to have some level of evidence that the threat exists. To justify (2) I would expect you would need beyond-a-shadow-of-a-doubt evidence that the threat exists. So for (1) convincing evidence might be: a weak ASI that is hostile needs to exist in the lab before the threat can be claimed to be real. For (2) researchers would need to have produced in an isolated lab strong ASI, demonstrated that they were hostile, and tried thousands of times to make a safe ASI with a 100% failure rate.

I think we could argue about the exact level of evidence needed, or briefly establish plausible ways that (1) and (2) could fail to show a threat, but in general I would say the onus is on AI doom advocates to prove the threat is real, not on advocates for "business as usual" technology development to prove it is not. I think this last part is the dark arts scam, that and other hidden assumptions that get treated as certainty. (a lot of the hidden assumptions are in the technical details of how an ASI is assumed to work by someone with less detailed technical knowledge, vs the way actual ML systems work today)

An...

Finding reliable sources is 99% of the battle, and I have yet to find one which would for sure pass the "too good to check" situation: https://www.astralcodexten.com/p/too-good-to-check-a-play-in-three

Completely fair. Maybe I should share a few then? 

I find Money & Macro (an economics YouTuber with a Ph.D. in the field) to be a highly reliable source capable of informed and nuanced reporting. Here is, for instance, his take on the Argentine dollarization plan, which I found much more comprehensive than most media sources. 

Argentina's Radical Plan t...

Right, about this. So the overall point of the Ramaswamy example was to illustrate how subject-specific knowledge is helpful in formulating a rebuttal and distinguishing between bullshit and non-bullshit claims. 

See, for example, this comment:

This sure sounds like something a bullshit debater would say. Hundreds of thousands of people dying doesn't really mean a country isn't about to give up. Maybe it's the reason they are about to give up; there's always a line, and who's to say it isn't in the hundreds of thousands? Zelensky having popular support doe

...
jimmy
I think "subject specific knowledge is helpful in distinguishing between bullshit and non-bullshit claims." is pretty clear on its own, and if you want to add an example it'd be sufficient to do something simple and vague like "If someone cites scientific studies you haven't had time to read, it can sound like they've actually done their research. Except sometimes when you do this you'll find that the study doesn't actually support their claim". "How to formulate a rebuttal" sounds like a very different thing, depending on what your social goals are with the rebuttal. Yeah, you're kinda stuck between "That's too obvious of a problem for me to fall into!" and "I don't see a problem here! I don't believe you!". I'd personally err on the side of the obvious, while highlighting why the examples I'm picking are so obvious. Yeah, I think that'd require a pretty big conversation and I already agree with the point you're trying to use it to make.

Very nice! Now... here's the catch. Some of my arguments relied on dark arts techniques. Others very much didn't. I can support a generally valid claim with an invalid or weak argument. I can do the same with an obviously invalid claim. Can you tell me what specifically I did? No status points for partially correct answers!

Now, regarding learned helplessness. Yes, it's similar, though I'd put in an important caveat. I consider discerning reliable sources and trusting them to be a rational decision, so I wouldn't go as far as calling the whole ordeal of find...

DusanDNesic
Finding reliable sources is 99% of the battle, and I have yet to find one which would for sure pass the "too good to check" situation: https://www.astralcodexten.com/p/too-good-to-check-a-play-in-three

Some people on this website get that for some topics, the acoup blog does that for history, etc., but it's really rare, and mostly you end up with "listen to Radio Liberty and Pravda and figure out the truth if you can."

On the style side, I agree with other commenters that you have selected something where even after all the reading I am severely not convinced your criticism is correct under every possible frame. Picking something like a politician talking about the good they have done despite actually being corrupt, or something much more narrow in focus and black-and-white, would leave you much less surface to defend. Here, it took a lot of text, and I am unsure what techniques I have learned, since your criticisms require more effort to again check for validity.

You explained that sunk cost fallacy pushed you for this example, but it's still not too late to add a different example, put this one into a Google doc, make it optional reading, and note your edit. People may read this in the future, and there's no reason not to ease the concept for them!

Thanks for reading!

Understood. I think this is a consensus among many comments, so probably something I should work on. I've broadened things to be a bit too general, and the result was that I couldn't bring out much in the way of specific insights, as on a bigger, more general level much of this is obvious. 

I should probably make follow-up posts addressing nerd sniping and other aspects; it would likely be more helpful. Staying within the realm of learned experiences is probably also a good call. 

In any case, thanks for the feedback! I'll do my best to act on it in subsequent posts. 

Thanks for your comment! 

Hm... right. Yes, I focused a lot on combating the Dark Arts, but not as much on identification. Probably worthy of its own post. But my schedule is packed. We'll see if I get to it. 

Regarding defense tools, I'm a little mixed. I think traditional defenses like (relatively) trustworthy institutions, basic fact checks, and common sense are still quite viable, but at the end of the day even something as powerful as current-day GPT is hardly a substitute for genuine research. A first line of defense and heuristics are good, but imo there has to be some focus on understanding the subject matter if we do want to send the Dark Artisan packing. 

Hm? I'm unsure if I presented my point correctly, but my intent was to show that aid in general tends to not resolve the problems causing poverty, irrespective of cost/benefit. I think I brought this up in another comment, comparing it to painkillers. If your leg is broken a painkiller will probably help, cost effective or not. But your leg is still broken, at the end of the day, and the painkiller doesn't actually 'solve' the problem in the same way a surgery and a splint would. 

Do you take issue with this? 

On that note I do believe many EA char...

Bohaska
I do believe your main point is correct, just that most people here already know that.

Oooh, I think I can classify some of this! 

A few weeks ago I met a fellow who seems to hail from old-guard atheism. Turn-of-the-century "Down with religion!" type of stuff. He was leading a philosophy discussion group I was checking out. At some point he said something (I don't remember what) that made me think he didn't understand what Vervaeke calls "the meaning crisis". So I brought it up. He started going into a kind of pressured debate mode that I intuitively recognized from back when I swam in activist atheism circles. I had a hard time pinning

...
Valentine
Yep, I think you're basically right on all accounts. Maybe a little off with the atheist fellow, but because of context I didn't think to share until reading your analysis, and what you said is close enough!

It's funny, I'm pretty familiar with this level of analysis, but I still notice myself thinking a little differently about the bookstore guy in light of what you've said here. I know people do the unbalancing thing you're talking about. (Heck, I used to quite a lot! And probably still do in ways I haven't learned to notice. Charisma is a hell of a drug when you're chronically nervous!) But I didn't think to think of it in these terms. Now I'm reflecting on the incident and noticing "Oh, yeah, okay, I can pinpoint a bunch of tiny details when I think of it this way."

The fact that I couldn't tell whether any of these were "ultra-BS" is more the central point to me. If I could trouble you to name it: Is there a more everyday kind of example of ultra-BS? Not in debate or politics?

Of course. Glad you enjoyed! 

I think that part of it is probably you not having much experience with debate or debate adjacent fields. (quite understandable, given how toxic it's become). It took me some lived experience to recognize it, after all. 

If you want to see it at work, I recommend just tuning into any politician during a debate. I think you'll start recognizing stuff pretty quick. Wish you happy hunting in any case. 

Interesting question!

So, I think the difference is that ASI is ultimately far more difficult to prove on either side. However, the general framework maps pretty similarly. 

Allow me to compare ASI with another X-risk scenario we're familiar with: Cold War MAD. The Cold War argument goes:

(1) USSR improves and builds thousands of non-hypersonic nuclear-tipped missiles. (did actually happen)

(2) USSR decides to risk nuclear annihilation by killing all its rivals on...
[anonymous]
So I think what you are saying is an ultra-BS argument is one that you know is obviously wrong. ASI doom or acceleration arguments are speculation where we don't know which argument is wrong, since we don't have access to an ASI. While for example we do know it's difficult to locate a quiet submarine, it's difficult to stop even subsonic bombers, it's difficult to react in time to a large missile attack, and we have direct historical examples of all this in non-nuclear battles with the same weapons. For example the cruise missiles in Ukraine that keep impacting both sides' positions are just a warhead swap from being nuclear.

With that said, isn't "high confidence" in your speculation by definition wrong? If you don't know, how can you know that it's almost certain an ASI will defeat and kill humanity? AGI Ruin: A List of Lethalities and https://twitter.com/ESYudkowsky/status/1658616828741160960. Of course thermodynamics has taken its sweet time building ASI: https://twitter.com/BasedBeffJezos/status/1670640570388336640

If you don't know, you cannot justify a policy of preemptive nuclear war over AI. That's kinda my point. I'm not even trying to say, object level, whether or not ASI actually will be a threat humans need to be willing to go to nuclear war over. I am saying the evidence right now does not support that conclusion. (it doesn't support the conclusion that ASI is safe either, but it doesn't justify the most extreme policy action)

Hello, and thank you for the comment! 

So, regarding policy discussions and public discourse, I think you can roughly group the discussion pools into two categories: public and expert-level discussions.

While the experts certainly aren't perfect, I'd contend in general you find much greater consensus on higher level issues. There may be, for example, disputes on when climate change becomes irreversible, to what extent humans would be impacted, or how to best go about solving the larger problem. But you will rarely (if ever) find a climate scientist claim...

Christian Nordtømme
Thanks. That all makes sense. I don't really have any good ideas. As such it's actually a bit comforting to hear I'm not alone in that. I'm not entirely pessimistic, however; it just means I can't think of any quick fixes or short cuts. I think it's going to take a lot of work to change the culture, and places like Lesswrong are good starting points for that.

For example, I agree that it's probably best if we can make it okay for the public to trust experts and institutions again. However, some experts and institutions have made that really hard. And so different institutions need to put in some work and put in place routines – in many cases significant reforms – to earn back trust. And in order to trust them, the general public needs to learn to change their idea of trust so that it makes allowances for Hanlon's Razor (or rather Douglas Hubbard's corollary: "Never attribute to malice or stupidity that which can be explained by moderately rational individuals following incentives in a complex system.") I get disheartened when I see the media and its consumers act all outraged and seemingly very surprised by people being people, with flaws.

A bit ironically, considering we're living through an age with an abundance of communication and information: Alongside institutional reforms, I think there's a need for some really good, influential communication (infotainment?) that can reach deep into the public attitudes – beyond just college-educated elites and aspirants – and give people new, helpful perspectives. Something that can help create a common language and understanding around concepts like epistemology, public trust and verification, in much the same way the movie The Matrix gave everyone a way to think and talk about Cartesian mind-body split, without using those words (but sans the dystopian, conspiratorial, darkly revolutionary undercurrent, please). Most things I come across that seem to aspire to something like that today are typically overtly moralizing,...

Hm... I'm not too sure how much I agree with this. Can you raise an example of what you mean? 

In my experience, while uncritical reading of evidence rarely produces a result (uncritical reading in general rarely ever does), close examination of facts usually leads to support for certain broad narratives. I might bring up examples of flat earth and vaccines. While some people don't believe in the consensus, I think they are the exception that proves the rule. By and large people are able to understand basic scientific evidence, or so I think. 

Do you believe otherwise? 

Thanks for the addition! I actually didn't consider this, and neither did my opponents. 

Glad you enjoyed!

So, I know this sounds like a bit of a cop out, but hear me out. The better debater usually wins the debate, irrespective of techniques. 

There's a lot that goes into a debate. There's how well you synergize with your partner, how confident you sound, how much research you've prepared, how strong of a writer you are... etc. There are times where a good constructive speech can end the debate before your opponent even starts talking, and other times where adamant refusal to accept the facts can convince the judge you're right. There's al...

Yep! It's very similar. The weakness it exploits (lack of time to properly formulate a response) is the same, but the main difference is that your avenue of attack is a believable narrative rather than multiple pieces of likely false information the judge can't understand either (it's why I prefer ultra-BS, as opposed to a flood of regular BS). 

Mhm? Right, in my personal opinion I don't consider kritiks/theory as ultra-BS. This is mainly because ultra-BS is intuitive narrative framing, and usually not too complicated (the idea is to sound right, and avoid the trouble of actually having to explain yourself properly). Kritiks/theory are the opposite, if that makes sense. They're highly technical arguments that don't make sense outside of debate-specific settings, which most lay judges simply won't understand. In my experience it's almost never a good idea to run them unless you're with a tech or a ...

utilistrutil
A lot of this piece is unique to high school debate formats. In the college context, every judge is themself a current or previous debater, so some of these tricks don't work. (There are of course still times when optimizing for competitive success distracts from truth-seeking.)

Mhm! Unsure if you saw, but I made a post.

Defense Against The Dark Arts: An Introduction — LessWrong

Could I have your thoughts on this? 

Richard Horvath
Thanks for pinging me. Haven't noticed it yet, will read it now.

Hm... right. I think your critiques are pretty on point in that regard. I may have diluted focus too much and sacrificed insight for a broad overview. Focus on a more specific technique is probably better. 

I have a few ideas in mind, but I thought I'd get your opinion first. Do you think there's any part of this post that warrants more detailed explanation/exploration with greater focus? 

The glib answer to how to avoid falling victim to the Dark Arts is to just be right, and not let counterarguments change your mind. Occlumency, if you like.

Well, yes, but I'm unsure if this is too helpful. Part of the intention behind my post was to distill what I viewed as potentially useful advice. Do you have any? If not, that's fine, but I'm unsure how valuable this is for the readership. 

One problem is the bullshit asymmetry principle, which you describe but don't call by name: rebutting narratives through analyses of individual claims is infeasi

...

Thanks so much for your feedback!

Hm... right. I did get feedback warning that the Ramaswamy example was quite distracting (my beta reader recommended flat-eartherism or anti-vaxxing instead). In hindsight it may have been a better choice, but I'm not too familiar with geology or medicine, so I didn't think I could do the proper rebuttal justice. The example was meant to show how proper understanding of a subject could act as a very strong rebuttal against intuitive bullshit, but then I think I may not have succeeded in making that point. I think this was a...

jimmy
My response to your Ramaswamy example was to skip ahead without reading it to see if you would conclude with "My counterarguments were bullshit, did you catch it?". After going back and skimming a bit, it's still not clear to me that they're not. The thing is, this applies to you as well. Looking at this bit, for example:

This sure sounds like something a bullshit debater would say. Hundreds of thousands of people dying doesn't really mean a country isn't about to give up. Maybe it's the reason they are about to give up; there's always a line, and who's to say it isn't in the hundreds of thousands? Zelensky having popular support does seem to support your point, and I could go check primary sources on that, but even if I did your point about "selecting the right facts and omitting others" still stands, and there's no easy way to find out if you're full of shit here or not. So it's kinda weird to see it presented as if we're supposed to take your arguments at face value... in a piece purportedly teaching us to defend against the dark art of bullshit.

It's not clear to me how this section even helps even if we do take it at face value. Okay, so Ramaswamy said something you disagree with, and you might even be right and maybe his thoughts don't hold up to scrutiny? But even if so, that doesn't mean he's "using dark arts" any more than he just doesn't think things through well enough to get to the right answer, and I don't see what that teaches us about how to avoid BS besides "Don't trust Ramaswamy".

To be clear, this isn't at all "your post sucks, feel bad". It's partly genuine curiosity about where you were trying to go with that part, and mostly that you seem to genuinely appreciate feedback.

My own answer to "how to defend against bullshit" is to notice when I don't know enough on the object level to be able to know for sure when arguments are misleading, and in those cases refrain from pretending that I know more than I do. In order to determine who to take...
Stuart Johnson
I think most of the best posts on this website about the dark arts are deep analyses of one particular rhetorical trick and the effect it has on a discussion. For example, Setting the Zero Point or The noncentral fallacy - the worst argument in the world? are both discussions about hypothesis privilege that rely on unstated premises. I think reading these made me earnestly better at recognising and responding to Dark Arts in the real world. Frame Control and its response, Tabooing "Frame Control" are also excellent reads in my opinion.

Hello, and thanks for the comment!

Hm... yes, the central narrative is always hard to rebut. But since no argument exists independently of the facts, I thought I would focus on verification of factual information. I found the methods I used helpful in that regard. I'm sorry it didn't work for you, but then, I'm not claiming that it would work for everyone in all situations. These are the methods I personally found helpful. The algorithmic solution (ie: actually learning about the topic yourself) has been what I consider the only reliable defense. Even if yo...

Shankar Sivarajan
The glib answer to how to avoid falling victim to the Dark Arts is to just be right, and not let counterarguments change your mind. Occlumency, if you like.

One problem is the bullshit asymmetry principle, which you describe but don't call by name: rebutting narratives through analyses of individual claims is infeasibly expensive. But far worse is answering the wrong question, letting the enemy choose the battlefield. Sticking with the war in Ukraine for an example, it'd be like answering the question of why Russia would blow up its own pipeline (Is Putin stupid? Is it like Cortés burning his ships? Is it the Wagner Group trying to undermine Putin?) instead of saying, "Wtf? No, it's obviously the US." As I said, I don't know how one can consistently recognize traps like this. It seems exceedingly difficult to me, but that's what an actual defense would look like.

To clarify my point about the Snake Island massacre: yeah, I think the audio was legit too. No, I believe the Ukraine government knew they were alive (or at least had good reason to think so), and pretended otherwise for propaganda reasons. Can I prove this? No, I don't in fact have access to high-level military intelligence. This is the trap I'm warning against! Getting bogged down trying to ascertain exactly what the Ukrainian military knew and when they knew it is missing the point, which is whether or not they're incentivized to deceive you, and so whether you should trust anything they say, one way or the other.

The same goes for your ad hoc determinations of which states are "legitimate," based on considerations of "international law," your personal moral views regarding "democracy," and expedients of maintaining US hegemony. You're answering the wrong question. Happily, in this case, I've figured out the correct answer: there is no such thing as a morally legitimate state.

I'm glad you enjoyed it!

In particular, it highlights a gap in my way of reasoning. I notice that even after you give examples, the category of "ultra-BS" doesn't really gel for me. I think I use a more vague indicator for this, like emotional tone plus general caution when someone is trying to persuade me of something.

Hm... this is interesting. I'm not too sure I understand what you mean though. Do you mind providing examples of what categories and indicators you use? 

I think I'm missing something obvious, or I'm missing some information. Why is this

...
[anonymous]

Is the "ASI doom" argument meaningfully different or does it pattern match to "Ultra BS".

The ASI doom argument as I understand it is:

(1) humans build and host ASI
(2) the ASI decides to seek power as an instrumental goal
(3) without the humans being aware of it, or being able to stop it, the ASI gains
    (a) a place to exist and think at all (aka thousands of AI inference cards interconnected)
    ...

Valentine
Ah, interesting, I didn't read that assumption into it. I read it as "The power balance will have changed, which will make Russia's international bargaining position way stronger because now it has a credible threat against mainland USA." I see the thing you're pointing out as implicit though. Like an appeal to raw animal fear.

That makes a lot of sense. I didn't know about the distributed and secret nature of our nuclear capabilities… but it's kind of obvious that that's how it'd be set up, now that you say so. Thank you for spelling this out.

Makes sense! And I wasn't worried. I'm actually not concerned about sounding like (or being!) an idiot. I'm just me, and I have the questions I do! But thank you for the kindness in your note here.

I gotta admit, my faith in the whole system is pretty low on axes like this. The collective response to Covid was idiotic. I could imagine the system doing some stupid things simply because it's too gummed up and geriatric to do better. That's not my main guess about what's happening here. I honestly just didn't think through this level of thing when I first read your arctic argument from your debate. But collective ineptitude is plausible enough to me that the things you're pointing out here just don't land as damning. But they definitely are points against. Thank you for pointing them out!

For this instance, yes! There's some kind of generalization that hasn't happened for me yet. I'm not sure what to ask exactly. I think this whole topic (RE what you're saying about Dark Arts) is bumping into a weak spot in my mind that I wasn't aware was weak. I'll need to watch it & observe other examples & let it settle in. But for this case: yes, much clearer! Thank you for taking the time to spell all this out!
Valentine
I can try to provide examples. The indicators might be too vague for the examples to help much with though!

A few weeks ago I met a fellow who seems to hail from old-guard atheism. Turn-of-the-century "Down with religion!" type of stuff. He was leading a philosophy discussion group I was checking out. At some point he said something (I don't remember what) that made me think he didn't understand what Vervaeke calls "the meaning crisis". So I brought it up. He started going into a kind of pressured debate mode that I intuitively recognized from back when I swam in activist atheism circles. I had a hard time pinning down the moves he was doing, but I could tell I felt a kind of pressure, like I was being socially & logically pulled into a boxing ring. I realized after a few beats that he must have interpreted what I was saying as an assertion that God (as he thought others thought of God) is real. I still don't know what rhetorical tricks he was doing, and I doubt any of them were conscious on his part, but I could tell that something screwy was going on because of the way interacting with him became tense and how others around us got uneasy and shifted how they were conversing. (Some wanted to engage & help the logic, some wanted to change the subject.)

Another example: Around a week ago I bumped into a strange character who runs a strange bookstore. A type of strange that I see as being common between Vassar and Ziz and Crowley, if that gives you a flavor. He was clearly on his way out the door, but as he headed out he directed some of his… attention-stuff… at me. I'm still not sure what exactly he was doing. On the surface it looked normal: he handed me a pamphlet with some of the info about their new brick-and-mortar store, along with their online store's details. But there was something he was doing that was obviously about… keeping me off-balance. I think it was a general social thing he does: I watched him do it with the young man who was clearly a friend to...

Right, probably a good idea. Let me edit and add this to the top... 

Thanks so much! 

The format wasn't intentional by the way, I copy-and-pasted from Google Docs. No wonder it looked weird. 

Glad you enjoyed! Now you mention it, I think I might make a continuation post sometime. Would you mind giving me a few ideas on what sort of dark-artsy techniques I should cover, or what you're curious about in general? 

Richard Horvath
I think something along the lines of "Defense Against the Dark Arts" with actionable steps on recognizing and defusing them (and how to practice these) would be great. If you feel like you have the energy and time, more articles on offensive usage (practice) and on theoretical background (how to connect your practical experience to existing LW concepts/memes) would also be nice. But I think the first one (defense) would be the most useful for LW readers.

It seems to me ultra-BS is perhaps continuous with hyping up one particular way that reality might in fact be, in a way that is disproportionate to your actual probability, and that is also continuous with emphasizing a way that reality might in fact be which is actually proportionate with your subjective probability.

Yep! I think this is a pretty good summary. You want to understand reality just enough to where you can say things that sound plausible (and are in line with your reasoning) but omit just enough factual information to where your case isn...

I've sent you his Discord information via PM. (After obtaining permission, of course.)

Thank you very much! I think I'll enjoy the chat. Just sent him the friend request. Oh, and my Discord is the same as my LessWrong, btw.

Yep. In a debate competition, you can win with arguments that are obviously untrue to anyone who knows what you're talking about

YES! Hahhahahaa... it's quite dumb. The amount of information you can reasonably convey in 4 minutes is so small that even when your case is common sense it's hard to actually prove your point. I can bring up a variety of c...

This was super fun to read, thanks for sharing! Hm... your new student seems like an interesting person to talk to. Mind asking if he'd be interested in a chat with someone else his age? I'm also a public forum (debate format) debater in high school, and I'm doing prep work for this particular topic on student loans as well. I'd love to get a chance to talk with him a bit, and I feel like he may enjoy it as well. 

On that note, I think I can elaborate a bit on the format in ways others might find helpful. 

Public forum is one of many debate fo...

lsusr
I've sent you his Discord information via PM. (After obtaining permission, of course.) XD

Yep. In a debate competition, you can win with arguments that are obviously untrue to anyone who knows what you're talking about, which is why I'm much less interested in traditional debate these days. (Not to discourage you, of course. The dark arts are useful.)

When teaching Socratic dialogues, the first thing I have to teach is "Don't give arguments you don't actually believe in." There's lots of tricks I use to get around this in real life (mostly betting face, since betting money only works for facts), but they're not allowed in a debate tournament.

Hm... pretty similar here. I also don't have much of a media presence. I haven't tried EA forums yet, mainly because I consider myself intellectually more aligned with LW, but in any case I'm open to looking. This is looking to be a more personal conversation now. Would you like to continue in direct messages? Open to hearing your suggestions, I'm just as clueless right now. 

Likely a good suggestion. I'm in a few communities myself. But then, I'm unsure if you're familiar with how discord works. Discord is primarily a messaging app with public server features tacked on. Not the sort of community for posts like this. Are you aware of any particular communities within discord I could join? The general platform has many communities, much like reddit, but I'm not aware of any similar to lesswrong. 

Ilio
Nope, my social media presence is very very low. But I’m open to suggestion since I realized there’s a lot of toxic characters with high status here. Did you try EA forums? Is it better?

Many thanks for the kind words, I appreciate it. 

You're probably right. I mainly started on lesswrong because this is a community I'm familiar with, and a place I can expect to understand basic norms. (I've read the sequences and have some understanding of rationalist discourse). I'm unsure how I'd fare in other communities, but then, I haven't looked either. Are you familiar with any? I don't know myself. 

Ilio
Nope, but one of my sons suggests Discord.

Thanks for your reply! 

Yes, you're right, I realize I was rather thin on evidence for the link between institutional weakness and corruption. I believe this was like-mind fallacy on my end; I assumed the link was obvious. But since clearly it was not, allow me to go back and contextualize it.

Disclaimer: It's late and I'm tired, prose quality will be lower than usual, and I'll be prone to some rather dry political jokes. 

To understand the link between institutions and corruption, I think it's helpful just to use simple mental models. Consider this ...

Ilio
I waited until Friday so that you won't sleep at school because of me, but yes, I enjoyed both the style and the freshness of ideas! Look, I think you're a young & promising opinion writer, but if you stay on LW I would expect you'll get beaten by the cool kids (for lack of systematic engagement with both the spirit and the logical details of the answers you get). What about finding some place more about social visions and less about pure logic? Send me where and I'll join for more about the strengths and some pitfalls maybe.

Yes, they are. In the main post my only quote blocks are direct copy/pastes from the web version of the book. 

In my head I rephrased that thesis as poor institutions and practices can impair efficiency totally, which I found as unsurprising as a charity ad and, as it turns out, not entirely accurate. So if you target readers who find this controversial I may just not be the right reader for the feedback you seek.


Right, that makes sense, and it was part of the angle I was taking. When I said controversial I was mainly referring to the more general claim that aid tends to be ineffective in reducing long-term poverty, with few exceptions (the implication being that aid fails to...

Ilio
I update for stronger internal coherency and ability to articulate clear and well written stories. That was fun to read!

Now I don't have the same internal frame of reference when it comes to evaluating what counts as evidence. I can accept a good story as evidence, but only if I can evaluate its internal coherency against other good stories one might believe in. Let's cook one to see what I mean: « In a distant planet far away from here, there was a rich country and a poor country. Then the rich country elected religious cranks who decided to start a « war on drugs », whatever that means. What that means turned out to be: a large flow of money into criminal hands, then collapse of the poor country under corruption and political violence. Then the rich country looks at the poor country and says: don't you think you'll be richer with better institutions? »

Back to what counts as evidence: I can update on one's perception that this or that good story looks like the real world, especially given you seem to know a lot on this topic. But as with your multiplicative model (insightful!), the amount of update will be proportional to demonstration of knowledge times how hard I feel you explored good contrarian-to-your-own-preferred-view candidate theses.

Here again, we don't have the same frame for causality. To me, your illustration is a concomitance, and a concomitance can be explained by:

* a causal link from institutions to war and corruption, which means if you could randomize acting on institutions, you could statistically impact war and corruption
* a causal link from a weighted average of war and corruption, which means if you could randomize acting on war and corruption, you could statistically impact institutions
* an unknown unknown acting on them all
* the random generator is funny

So, if I wanted to conclude on option A specifically, I would need to explain why I can ignore the alternatives. Then I could say I have evidence for this or that causal link, not b...

(Do you want to prove EA is doomed to fail? I don't think so but that's one way to read the title.)

Hm... right. That would make sense. I think I can see how people might misread that. No, I had no intention of doing anything like that. I was trying to address the shortcomings of charity, particularly in the realm of structural and institutional rot (and the other myriad causes of poverty). EA charity faces many of the same issues in this regard, but 'doomed to fail' is hardly the point I would like to make. (If anything I try my best to advocate the opposite...

Ilio
In my head I rephrased that thesis as poor institutions and practices can impair efficiency totally, which I found as unsurprising as a charity ad and, as it turns out, not entirely accurate. So if you target readers who find this controversial I may just not be the right reader for the feedback you seek.

Still, I spent some time thinking: what could you do to make me update?

One way is to beware more about unfairness. Instead of mere illustration of failures when your thesis was ignored, can you also present cases where following this very thesis did make a success? What are the alternative hypotheses that could explain the effect as well? What prediction would make you reject the thesis? The more I see you ask yourself these questions, the more I'll trust your opinion.

Another way is to go one step more precise. What are the minimal institutions before charity gets efficient? How much efficiency do we gain for what progress in institutions? Could you find if institutions explain more variance than, say, war and corruption? The more I know about this kind of thing, the more I'll believe in the parent thesis.

Just some thoughts. Good luck with your next text, I expect I'll like it. ;)

Thanks so much for your comment! 

Hm... yes, upon further reflection your summarization seems accurate, or at least highly plausible. I am not too sure what the mindset of the average LWer or EA looks like myself. (Although I've frequented the site for some time, I'm mainly reading random frontpage posts that pique my interest; I don't attend meetups, participate in group activities, or do much else of that nature.) It's not merely reading like I haven't engaged much in their world. The truth is I simply haven't, and I have no intention of hiding it. I...
