komponisto comments on What topics would you like to see more of on LessWrong? - Less Wrong

Post author: Emile 13 December 2010 04:20PM


Comment author: komponisto 13 December 2010 11:07:56PM 7 points

Applied epistemic rationality:

Using the techniques of rationality and the language of Bayesian probability theory to help ourselves and each other sort out truth from falsehood in the world out there.

I.e., more stuff like this. (I've done mine, and am eager to participate in someone else's!)

Comment author: DanielVarga 15 December 2010 11:51:07AM 1 point

Let me mention that Nate Silver at 538 did something quite similar to your Knox post, just today:

http://fivethirtyeight.blogs.nytimes.com/2010/12/15/a-bayesian-take-on-julian-assange/

By the way, I downvoted the parent. It is nothing personal, but debates like those induced by the Knox post are not what I'd like to see more of here.

Comment author: Louie 14 December 2010 01:46:20AM 1 point

I want to do an applied Bayesian analysis of what credence I should give to the Sierpinski conjecture being true.

I've been thinking that perhaps the small covering-set sizes for known Sierpinski numbers, together with the projections of where we expect to find primes (see A61), are enough to be "effectively certain" of the conjecture's truth even without actually having the prime counter-examples in hand. For instance, I feel like I should be able to quantify the value of the Bayesian evidence that each primality test contributes to the overall project goal of proving the Sierpinski conjecture. And if I can show that I expect to update in favour of the conjecture's truth after hearing the results of X more primality tests, then I should also be able to update on that now, right?
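That last step is exactly conservation of expected evidence: averaged over the possible test outcomes, your expected posterior already equals your prior, so any update you can anticipate on net is one you should make now. A minimal sketch with made-up numbers (none of these probabilities are estimates about the actual project):

```python
# Illustrative numbers only -- all three probabilities below are assumptions
# for the sake of the example, not statistics from the Sierpinski search.
prior = 0.9                 # assumed prior credence that the conjecture is true
p_prime_if_true = 0.3       # assumed P(next test finds a prime | conjecture true)
p_prime_if_false = 0.1      # assumed P(next test finds a prime | conjecture false)

# Probability of seeing a prime on the next test, marginalising over truth.
p_prime = prior * p_prime_if_true + (1 - prior) * p_prime_if_false

# Posterior credence under each possible outcome (Bayes' theorem).
post_if_prime = prior * p_prime_if_true / p_prime
post_if_no_prime = prior * (1 - p_prime_if_true) / (1 - p_prime)

# Conservation of expected evidence: the outcome-weighted average of the
# two possible posteriors is exactly the prior.
expected_posterior = p_prime * post_if_prime + (1 - p_prime) * post_if_no_prime
```

With these assumed numbers, a found prime pushes credence up and a negative test pushes it down, but the probability-weighted average of the two posteriors lands back on 0.9 exactly.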

Working this out for a problem I'm familiar with might help us get better at analysing the truth of other scientific conjectures in general. But the reason I haven't done this so far is that, despite understanding Bayesian reasoning abstractly and the rules about conserving probability, I don't know how to formally select a prior for the analysis. I realise this is probably not a big problem as long as the prior isn't pathologically bad -- can I just say 50%, or maybe 90% since someone smart whom I respect went to the trouble of publishing a paper saying he believed it? I guess that's not hard, but how do I calculate the value of the incremental evidence in favour of the conjecture's truth that has accumulated over the years, as more and more possible ways for it to be false have been eliminated?
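One way to sketch the incremental-evidence question: treat each eliminated candidate (a prime found for some k, removing one way the conjecture could be false) as a likelihood ratio, and accumulate ratios in log-odds space on top of whatever prior you picked. Everything here is a labelled assumption -- the function, the per-elimination probabilities, and the counts are invented for illustration and deliberately ignore the real dependence between tests:

```python
import math

def posterior_after_eliminations(prior, n_eliminated, p_elim_if_true, p_elim_if_false):
    """Posterior credence in the conjecture after n_eliminated candidates have
    each had a prime found for them, under the simplifying (and unrealistic)
    assumption of independent eliminations with fixed per-test likelihoods."""
    log_odds = math.log(prior / (1 - prior))
    # Each elimination multiplies the odds by one likelihood ratio.
    log_odds += n_eliminated * math.log(p_elim_if_true / p_elim_if_false)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Starting from the 50% prior mooted above, with assumed likelihoods that
# make each elimination slightly more probable if the conjecture is true:
p = posterior_after_eliminations(0.5, 30, 0.30, 0.25)
```

Even a modest per-elimination ratio (0.30/0.25 = 1.2 here) compounds: thirty eliminations move the log-odds by 30 × ln 1.2, which is already a large shift from even odds. The hard empirical work, of course, is justifying those two conditional probabilities rather than the arithmetic.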

I picked this problem because I've spent the past 8 years running a distributed computing project that tries to settle it by brute-force computation, without actually knowing how sure I should be about the thing I'm trying to prove. It might be good to know what I'm doing, huh?