John_Maxwell_IV comments on Leaving LessWrong for a more rational life - Less Wrong

33 [deleted] 21 May 2015 07:24PM




Comment author: Valentine 23 May 2015 04:19:22PM 9 points [-]

If a thought experiment shows that something does not feel right, that should raise your uncertainty about whether your model of what is going on is correct (notice your confusion); to wit, the correct response should be "how can I test my beliefs here?"

I have such very strong agreement with you here.

The problem isn't concept formation by means of comparing similar reference classes, but rather using thought experiments as evidence and updating on them.

…but I disagree with you here.

Thought experiments, reasoning by analogy, and the like are ways to explore hypothesis space. Elevating a hypothesis for consideration is itself updating. Someone with excellent Bayesian calibration would update much, much less on thought experiments than on empirical tests, but you run into really serious problems of reasoning if you pretend that the type of updating is fundamentally different in the two cases.

I want to emphasize that I think you're highlighting a strength this community would do well to honor and internalize. I strongly agree with a core point I see you making.

But I think you might be condemning screwdrivers because you've noticed that hammers are really super-important.

Comment author: [deleted] 23 May 2015 06:50:03PM *  -1 points [-]

Selecting a likely hypothesis for consideration does not alter that hypothesis's probability. Do we agree on that?

Comment author: John_Maxwell_IV 24 May 2015 05:29:16AM 5 points [-]

People select hypotheses for testing because they have previously weakly updated in the direction of them being true. Seeing empirical data produces a later, stronger update.

Comment author: RobbBB 24 May 2015 08:58:00PM *  4 points [-]

Except that when the hypothesis space is large, people test hypotheses because they strongly updated in the direction of them being true, and seeing empirical data produces a later, weaker update. An example of 'strongly updating' could be going from 9,999,999:1 odds against a hypothesis to 99:1 odds against it, and an example of 'weakly updating' could be going from 99:1 odds against the hypothesis to 1:99. The former update requires about 17 bits of evidence, while the latter requires about 13 bits.
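As a sanity check, the odds arithmetic above can be verified with a short Python sketch (the helper name `bits_of_evidence` is just for illustration; bits of evidence here means the log-base-2 of the ratio between posterior and prior odds):

```python
import math

def bits_of_evidence(prior_odds_against, posterior_odds_against):
    """Bits of evidence needed to move between two odds ratios.

    Odds are given as 'X to 1 against'; the strength of an update in
    bits is log2 of (posterior odds in favor / prior odds in favor).
    """
    prior_in_favor = 1 / prior_odds_against
    posterior_in_favor = 1 / posterior_odds_against
    return math.log2(posterior_in_favor / prior_in_favor)

# "Strong" update: 9,999,999:1 against -> 99:1 against
strong = bits_of_evidence(9_999_999, 99)   # ~16.6 bits

# "Weak" update: 99:1 against -> 1:99 against (i.e. 99:1 in favor)
weak = bits_of_evidence(99, 1 / 99)        # ~13.3 bits
```

So the first update does take more evidence than the second, even though it leaves the hypothesis still unlikely.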

Comment author: John_Maxwell_IV 25 May 2015 12:07:52PM 1 point [-]

Interesting point. I guess my intuitive notion of a "strong update" has to do with absolute probability mass allocation rather than bits of evidence (probability mass is what affects behavior?), but that's probably not a disagreement worth hashing out.
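A minimal sketch of that intuition, reusing the odds from the comment above (the helper name is illustrative): measured in absolute probability mass rather than bits, the larger-in-bits update moves almost nothing, while the smaller-in-bits update moves nearly everything.

```python
def prob_from_odds_against(odds_against):
    """Convert 'X to 1 against' odds into a probability."""
    return 1 / (1 + odds_against)

# The ~17-bit update barely moves any probability mass:
p0 = prob_from_odds_against(9_999_999)   # ~1e-7
p1 = prob_from_odds_against(99)          # ~0.01
mass_strong = p1 - p0                    # ~0.01

# The ~13-bit update moves almost all of it:
p2 = prob_from_odds_against(1 / 99)      # ~0.99
mass_weak = p2 - p1                      # ~0.98
```

Which notion of "strong" matters presumably depends on whether you care about evidence accounting or about how much the update changes behavior.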

Comment author: Valentine 24 May 2015 07:36:08PM 1 point [-]

I like your way of saying it. It's much more efficient than mine!

Comment author: John_Maxwell_IV 25 May 2015 11:59:27AM *  2 points [-]

Thanks! Paul Graham is my hero when it comes to writing, and I try to pack ideas as tightly as possible. (I recently reread this essay of his and was amazed by how many ideas it contains; I think it has more intellectual content than most published nonfiction books, in just 10 pages or so. I guess the downside of this style is that readers may not go slowly enough to fully absorb all the ideas. Anyway, I'm convinced that Paul Graham is the Ben Franklin of our era.)