Hi Michael,

Thanks for writing this! I'm glad to see my post getting engagement, and I wish I'd joined the discussion here sooner.

I feel like my argument got strawmanned a bit (though I don't think you did that intentionally). I fully agree with this part:

"Methods like the above will result in better probability estimates than if we acted as though we knew nothing at all."

I think it's entirely reasonable for someone to say: "I feel safe walking out the door because I think there's an extremely low probability that Zeus will strike me down with a thunderbolt when I walk outside."

What I object to is the idea that reasonable people do (or in some sense ought to) make sense of all uncertainty in terms of probability estimates. I think combining hazy probability estimates with tools like probabilistic decision theory will generally have bad consequences.
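
To make that worry concrete, here's a toy calculation (all the numbers are invented for illustration): a Pascal's-mugging-style offer where the expected-value verdict hinges entirely on a hazy probability we have no principled way to pin down.

```python
# Toy illustration (all numbers invented): expected-value reasoning
# can be extremely sensitive to hazy probability estimates.

def expected_value(p_payoff: float, payoff: float, cost: float) -> float:
    """EV of paying `cost` for a `payoff` that occurs with probability `p_payoff`."""
    return p_payoff * payoff - cost

# A Pascal's-mugging-style offer: pay $10 for a shot at an astronomical payoff.
cost = 10.0
payoff = 1e15

# Two hazy estimates that both intuitively feel like "basically impossible":
for p in (1e-12, 1e-18):
    ev = expected_value(p, payoff, cost)
    decision = "pay" if ev > 0 else "refuse"
    print(f"p={p:.0e}: EV = {ev:+.2f} -> {decision}")

# p=1e-12 says pay (EV is about +990), p=1e-18 says refuse (EV is about -10):
# the "correct" action hinges entirely on a probability estimate we can't
# pin down to within six orders of magnitude.
```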

I very much agree with Dagon's comment:

"Models are maps. There's no similarity molecules or probability fields that tie all die rolls together. It's just that our models are easier (and still work fairly well) if we treat them similarly because, at the level we're considering, they share some abstract properties in our models."