- A very important fact which just came to my attention is that people do not tend to sum the reasonableness of arguments for P, or take the max, to form a judgement about P; rather, they tend to take the average.
- This is a somewhat reasonable heuristic in some situations. For instance, if someone gives you a really unreasonable argument for P this is evidence that their judgement of arguments isn’t very good, and so their best argument is more likely to be secretly bad.
- Similarly, it is evidence that they are motivated to convince you even using faulty arguments, which is generally speaking a bad sign.
- It has important implications. Sometimes people think “oh, I will make 50 OK arguments for P instead of one really good one”, but most folks are not very impressed by this, even though they should be.
- Relatedly, if you try to turn a complicated thesis T into a social movement, the average reasonableness of an argument in favor of T will plummet, and so you may very quickly find that everyone perceives the anti-T-ers as being much more reasonable.
- This will probably still be true even if the best pro-T arguments are very good, and especially true if the best pro-T arguments are subtle or hard to follow.
- Yes, this is about AI risk. I don’t think this is a slam dunk argument against trying to make AI-risk-pilled-ness into a popular social movement, but it is a real cost, and it nearly captures the shape of my real worries.
- Well, the best version of my real worries; I also have real worries that are not nearly as defensible or cool.
- Oh actually probably they take the min rather than the average unless they like you, in which case they take the max.
First of all, is this important fact actually true? I'd love to know. Reviewing my life experience... it sure seems true? At least true in many circumstances? I can think of lots of examples where this fact, if true, would be a good explanation of what happened. If people have counterarguments or sources of skepticism, I'd be very interested to hear them in the comments.
Secondly, I concluded a while back that One Strong Argument Beats Many Weak Arguments, and in Ye Olden Days of Original Less Wrong, when rationalists spent more time talking about rationality, there was a whole series of posts arguing for the opposite claim (1, 2, 3). Seems possibly related. I'd love to see this debate revived, and tied in to the more general questions of:
(A) Does rationality in practice recommend aggregating the quality of a group of arguments for a claim by taking the sum, the max, the min, the mean, or what? (To be clear, obviously the ideal is more complicated & looks more like Bayesian conditionalization on a huge set of fleshed-out hypotheses. But in practice, when you don't have time for that, what do you do? See the sketch below, after (B), for the simplest Bayesian baseline.)
(B) What do people typically do, and on what factors does that depend--e.g. do they take the min if they don't like you or the claim, and take the max if they do?
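Regarding (A): here is a minimal sketch of the simplest Bayesian baseline, under the (usually false) assumption that the arguments are independent pieces of evidence. In that case each argument contributes its log likelihood ratio to your log-odds, so the right aggregation is a sum of signed evidence strengths, not an average, a max, or a min. The numbers below are made up for illustration and aren't from any of the linked posts.

```python
import math

def posterior_prob(prior_prob, evidence_log_odds):
    """Combine independent pieces of evidence by adding their
    log likelihood ratios (in nats) to the prior log-odds."""
    prior_log_odds = math.log(prior_prob / (1 - prior_prob))
    post_log_odds = prior_log_odds + sum(evidence_log_odds)
    return 1 / (1 + math.exp(-post_log_odds))

# Fifty weak arguments, each worth a tiny 0.1 nats of evidence for P...
weak = [0.1] * 50
# ...versus one strong argument worth 3 nats.
strong = [3.0]

print(posterior_prob(0.5, weak))    # ~0.99: the weak arguments add up
print(posterior_prob(0.5, strong))  # ~0.95
# "Averaging" treats the fifty weak arguments like a single weak one:
print(posterior_prob(0.5, [sum(weak) / len(weak)]))  # ~0.52
```

The catch, of course, is the independence assumption: fifty arguments that all lean on the same underlying consideration share most of their evidence, so summing their face-value strengths double-counts, and something closer to taking the strongest one can be the better heuristic.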
Finally: Steven Adler pointed me to this paper that maybe provides some empirical evidence for Ronny's claim.
I think it really depends on the situation. Ideally, you'd take the best argument on offer for both positions, but this assumes arguments for both positions are equally easy for you to find (with help from third parties, who are not necessarily optimizing [well] for you making good decisions). I think in practice I try to infer what the blind spots and spin-incentives of the arguments I hear are, and try to think about what world we'd have to live in in order for these lines of argument to be the ones I end up hearing about via these sources.
Never do I do any kind of averaging or maximizing thing, and although what I said above sounds more complicated than saying "average!" or "maximize!", it mostly just runs in the background, on autopilot, at this point, so it doesn't take all that much extra time to implement. So I think it's a false dichotomy.
In some sense, one strong argument seems like it should defeat a bunch of weak arguments, but this assumes you're in a situation you never actually find yourself in in real life[1]. In reality, once you have one strong argument and a bunch of weak arguments, now begins the process of seeing how far you can take the weak arguments and turn them into strong arguments (either by thinking them through yourself or by seeking out people who seem to be convinced by the weak-to-you versions of the arguments). And if you can't do this, you should evaluate how likely you think it is that you can make one of those weaker arguments stronger (either by some learned heuristics about what sorts of weak arguments are shadows of stronger ones, or by looking at their advocates, or at those who've been convinced, or at the incentives involved, etc.).
Although the original text talks about policies very specifically, I think this is also the case when trying to reason about progressively more accurate abstractions of the world. What you're really deciding on is which line of research inquiry to devote more thought to, with little expectation that either hypothesis on offer will itself be a truly general theory of the true hypothesis (even if---especially if---it can be developed into a truly general theory with a bit (or a lot) of work). ↩︎