ChrisHallquist comments on Willing gamblers, spherical cows, and AIs - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
That sounds right; the question is the extent of that value, and what it means for doing epistemology and decision theory and so on.
Tweaked the wording; is that better? ("Compatible" was a weasel word anyway.)
I would still dispute this claim. My guess about how most fields work is that successful people in them have good System 1 intuitions about their field and can make good intuitive probability estimates about various things even if they don't explicitly use Bayes. Many experiments purporting to show that humans are bad at probability may be forcing people to solve problems in a format that System 1 didn't evolve to cope with. See, for example, Cosmides and Tooby 1996.
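To make the "format" point concrete: the frequency-format hypothesis says people reason better when a Bayesian update is phrased as natural frequencies ("8 out of 103 people") rather than conditional probabilities. Here is a minimal sketch of the two framings computing the same answer; the numbers are the standard textbook mammography example, chosen for illustration and not taken from Cosmides and Tooby's paper.

```python
# Same Bayesian update, once with probabilities and once with
# natural frequencies. Illustrative textbook numbers: 1% base rate,
# 80% sensitivity, 9.6% false-positive rate.

def posterior_probability(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem on probabilities."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

def posterior_frequency(population=1000, prior=0.01,
                        sensitivity=0.8, false_positive_rate=0.096):
    """Same update phrased as counts: 'of 1000 people, 10 are sick,
    8 of them test positive, and ~95 healthy people also test positive.'"""
    sick = population * prior                              # 10 people
    true_pos = sick * sensitivity                          # 8 test positive
    false_pos = (population - sick) * false_positive_rate  # ~95 test positive
    return true_pos / (true_pos + false_pos)

p_prob = posterior_probability(0.01, 0.8, 0.096)
p_freq = posterior_frequency()
assert abs(p_prob - p_freq) < 1e-9  # identical arithmetic, different framing
```

Both framings give a posterior of about 7.8%; the experimental claim is only that the second framing is the one human intuition handles well.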
Thanks. I was not familiar with that hypothesis; I'll have to look at C&T's paper.