ChrisHallquist comments on Willing gamblers, spherical cows, and AIs - Less Wrong

15 Post author: ChrisHallquist 08 April 2013 09:30PM




Comment author: ChrisHallquist 08 April 2013 11:59:49PM -2 points

It's valuable to know what can happen under adversarial assumptions even if you don't expect those assumptions to hold.

That sounds right; the question is the extent of that value, and what it means for doing epistemology, decision theory, and so on.

This isn't strong evidence; you're mixing up P(is successful | makes good probability estimates) with P(makes good probability estimates | is successful).

Tweaked the wording; is that better? ("Compatible" was a weasel word anyway.)
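The distinction the quoted line draws, between P(is successful | makes good probability estimates) and P(makes good probability estimates | is successful), can be made concrete with a small sketch. All the numbers below are made up purely for illustration; nothing in the thread supplies real values.

```python
# Illustration of why P(A | B) and P(B | A) can differ sharply.
# A = "is successful", B = "makes good probability estimates".
# These probabilities are invented for the example.

p_good = 0.10                # P(B): base rate of good estimators
p_success_given_good = 0.50  # P(A | B)
p_success = 0.30             # P(A): overall base rate of success

# Bayes' theorem: P(B | A) = P(A | B) * P(B) / P(A)
p_good_given_success = p_success_given_good * p_good / p_success

print(round(p_good_given_success, 3))  # about 0.167, not 0.5
```

With these assumed base rates, half of good estimators succeed, yet only about one in six successful people is a good estimator, so observing success is only weak evidence about estimation skill.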

Comment author: Qiaochu_Yuan 09 April 2013 12:05:57AM 2 points

Therefore, it seems that the relationship between being able to make accurate probability estimates and success in fields that don't specifically require them is weak.

I would still dispute this claim. My guess about how most fields work is that successful people in those fields have good System 1 intuitions about how their fields work and can make good intuitive probability estimates about various things even if they don't explicitly use Bayes. Many experiments purporting to show that humans are bad at probability may be forcing humans to solve problems in a format that System 1 didn't evolve to cope with. See, for example, Cosmides and Tooby 1996.

Comment author: ChrisHallquist 09 April 2013 12:34:37AM -1 points

Thanks. I was not familiar with that hypothesis; I'll have to look at C&T's paper.