Will_Sawin comments on Many Weak Arguments vs. One Relatively Strong Argument - Less Wrong

Post author: JonahSinick 04 June 2013 03:32AM




Comment author: JonahSinick 04 June 2013 06:44:54PM 7 points

Responses below. As a meta-remark, your comment doesn't steelman my argument, and I think that steelmanning arguments helps keep the conversation on track, so I'd appreciate it if you were to do so in the future.

Penrose is a worrisome case to bring as an example, since he is in fact wrong, and therefore you're giving an example where your reasoning leads to the wrong conclusion.

The point of the example is that one shouldn't decisively conclude that Penrose is wrong — one should instead hedge.

Perhaps a relevant analogy is that of using seat belts to guard against car accidents — one shouldn't say "The claim that I'm going to get into a potentially fatal car accident is in fact wrong, so I'm not going to wear a seat belt." You may argue that the relevant probabilities are sufficiently different that the analogy isn't a good one. If so, I disagree.

If you can't easily find examples where your reasoning led you to a new correct conclusion instead of new sympathy toward a wrong conclusion, this is worrisome.

There are many such examples. My post extended to a length of eight pages without my going into them, and I wanted to keep the post to a reasonable length. I'm open to the possibility of writing another post with other examples. The reason that I chose the Penrose example is to vividly illustrate the shift in my epistemology.

In general, I tend to flag accounts of epistemological innovations that lead to new sympathy toward a wrong conclusion, as if one were displaying compassion for a previously hated enemy — for in epistemology this is not a virtue.

One would expect this sort of thing to sometimes happen by chance in the course of updating based on incoming evidence. So I don't share your concern.

The Penrose example worries me for other reasons as well: it seems like it would be possible to generate hordes and hordes of weak arguments against Penrose. It's as if, because the argument against Penrose is strong, you aren't bothering to try to generate weak arguments. Reading this, it feels as though you now prefer weak arguments to strong arguments and don't look for the many weak arguments once you see a strong one, which is not good Bayesianism.

I can see how the example might seem disconsonant with my post, and will consider revising the post to clarify. [Edit: I did this.] The point that I intended to make is that I was previously unknowingly ignoring certain nontrivial weak lines of evidence, on the grounds that they weren't strong enough, and that I've recognized this, and have been working on modifying my epistemological framework accordingly.

I don't think that the hordes and hordes of weak arguments that you refer to are collectively strong enough to nullify the argument that one should trust Penrose because he's one of the greatest physicists of the second half of the 20th century.

You also claim there's a strong argument for Penrose, namely his authority (wasn't this the kind of reasoning you were arguing against trusting?), but either we have very different domain models here, or you're not using the Bayesian definition of strong evidence as "an argument you would be very unlikely to observe in a world where the theory is false."

  • I don't remember arguing against trusting authority above – elaborate if you'd like.
  • I wasn't saying that one should give nontrivial credence to Penrose's views based on his authority. I was saying that one should give nontrivial credence to Penrose's views based on the fact that he's a deeper thinker than everyone I know (in the sense that his accomplishments are deeper than anything that anyone I know has ever accomplished).
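The Bayesian definition of strong evidence quoted above can be made concrete as a likelihood ratio: how much more likely the observation is if the theory is true than if it is false. A minimal sketch in odds form (the numbers here are purely illustrative, not drawn from the thread):

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# A "strong" argument on this definition: an observation 20x more likely
# in a world where the theory is true than in one where it is false.
strong_lr = 20.0
prior_odds = 1.0  # even odds before seeing the argument

odds = posterior_odds(prior_odds, strong_lr)
probability = odds / (1 + odds)
print(f"posterior probability: {probability:.3f}")  # 20/21 ≈ 0.952
```

On this definition, an appeal to a thinker's track record is weak or strong depending on how unlikely that track record would be if his claim were false, which is the point at issue between the two commenters.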
Comment author: Will_Sawin 13 June 2013 06:27:20AM 2 points

Personally, I found the quantitative majors example a very vivid introduction to this style of argument, and much more vivid than the Penrose example. I think the quantitative majors example does a very good job of illustrating the kind of reasoning you are supporting, and why it is helpful. I don't understand the relevance of many weak arguments to the Penrose debate — it seems like a case of some strong and some weak arguments vs. one weak argument, or something. If others are like me, a different example might be more helpful.

Comment author: JonahSinick 13 June 2013 04:11:08PM 3 points

In hindsight, my presentation in this article was suboptimal. I clarify in a number of comments on this thread.

The common thread that ties together the quantitative majors example and the Penrose example is "rather than dismissing arguments that appear to break down upon examination, one should recognize that such arguments often have a nontrivial chance of succeeding owing to model uncertainty, and one should count such arguments as evidence."

In the case of the quantitative majors example, the point is that you can amass a large number of such arguments to reach a confident conclusion. In the Penrose example, the point is that one should hedge rather than concluding that Penrose is virtually certain to be wrong.
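The contrast between the two examples can be put in odds form: independent weak arguments multiply their likelihood ratios, so many of them together can be decisive, while a single weak argument only justifies hedging. A toy illustration (the ratios are invented for the sketch, not taken from either example, and real arguments are rarely fully independent):

```python
def combine(prior_odds, likelihood_ratios):
    """Fold independent likelihood ratios into the prior odds (odds-form Bayes)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Ten independent weak arguments, each only 2:1 evidence for the conclusion.
weak_arguments = [2.0] * 10
odds = combine(1.0, weak_arguments)  # 2**10 = 1024
print(f"many weak arguments: {odds / (1 + odds):.4f}")  # ≈ 0.9990, collectively decisive

# A single weak argument alone: mild evidence, so one should merely hedge.
odds_one = combine(1.0, [2.0])
print(f"one weak argument:   {odds_one / (1 + odds_one):.3f}")  # ≈ 0.667
```

The independence assumption is doing real work here; correlated weak arguments double-count evidence and combine far less favorably.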

I can give more examples of the use of MWAs to reach a confident conclusion. They're not sufficiently polished to post, so if you're interested in hearing them, shoot me an email at jsinick@gmail.com.

Comment author: RogerS 19 June 2013 06:14:08PM 0 points

Perhaps "hedging" is another term that also needs expanding here. One can reasonably assume that Penrose's analysis has some definite flaws in it, given the number of probable flaws identified, while still suspecting (for the reasons you've explained) that it contains insights that may one day contribute to a sounder analysis. Perhaps the main implication of your argument is that we need to keep arguments in our minds in more categories than just a spectrum from "strong" to "weak". Some apparently weak arguments may be worth periodic re-examination, whereas many probably aren't.