Only one out of 21 obstetricians could estimate the probability that an unborn child had Down syndrome given a positive test result.
Say the doctor knows the false positive and false negative rates of the test, and also the overall prevalence of Down syndrome, but doesn't know how to combine these into the probability of Down syndrome given a positive test result.
Okay, so to the extent that it's possible, why doesn't someone just tell them the results of the Bayesian updating in advance? I assume a doctor is told the false positive and negative rates of a test. But what matters to the doctor is the probability that the patient has the disorder. So instead of telling a doctor, "Here is the probability that a patient with Down syndrome will have a negative test result," why not just directly say, "When the test is positive, here is the probability of the patient actually having Down syndrome. When the test is negative, here is the probability that the patient has Down syndrome."
Bayes' theorem is a general tool that would let doctors manipulate the information they're given into the probabilities that they care about. But am I crazy to think that we could circumvent much of their need for Bayes' theorem by simply giving them different (not necessarily much more) information?
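To make the manipulation concrete, here is the calculation the statistically literate person upstream would do once, with made-up numbers (the prevalence, sensitivity, and false positive rate below are purely illustrative, not real figures for any screening test):

```python
# Bayes' theorem with hypothetical numbers -- these are NOT real
# screening-test figures, just an illustration of the calculation.
prevalence = 0.01        # P(disorder)
sensitivity = 0.90       # P(positive | disorder); false negative rate = 10%
false_positive = 0.05    # P(positive | no disorder)

# P(disorder | positive) = P(positive | disorder) * P(disorder) / P(positive)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(disorder | positive) = {posterior:.3f}")  # prints 0.154
```

Note that even with a 90% sensitive test, the posterior is only about 15% here, because the disorder is rare. That's exactly the number the doctor cares about, and exactly the one that could be handed to them directly.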
There are counterpoints to consider. But it seems to me that many examples of Bayesian failure in medicine are as simple as the one above, and could be as simply fixed. The statistical illiteracy of doctors can be offset so long as there are statistically literate people upstream.
When it comes to "the utility function is not up for grabs", we should jettison hyperbolic discounting far before we reject the idea that I'm the same agent now as in one second's time.
We can't jettison hyperbolic discounting if it actually describes the relationship between today-me and tomorrow-me's preferences. If today-me and tomorrow-me do have different preferences, there is nothing in the theory to say which one is "right." They simply disagree. Yet each may be well-modeled as a rational agent.
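A small numerical sketch of that disagreement (all reward sizes, delays, and discount parameters below are made up for illustration): under hyperbolic discounting, which time-slice is doing the evaluating changes which option wins, whereas exponential discounting ranks the options the same way from every vantage point.

```python
# Hyperbolic discounting: present value of a reward received `delay` days out.
def hyperbolic(amount, delay, k=1.0):
    return amount / (1 + k * delay)

# Exponential discounting, for comparison.
def exponential(amount, delay, delta=0.9):
    return amount * delta ** delay

# Choice: a small reward (50) available sooner vs. a large reward (100)
# available four days later.
# Viewed up close (delays 1 vs 5), the hyperbolic discounter takes the small one:
assert hyperbolic(50, 1) > hyperbolic(100, 5)    # 25.0 > ~16.7
# Viewed from 10 days further out (delays 11 vs 15), the same discounter
# prefers the large one -- the two time-slices simply disagree:
assert hyperbolic(50, 11) < hyperbolic(100, 15)  # ~4.2 < 6.25

# Exponential discounting multiplies both values by the same factor delta**10
# when the choice recedes 10 days, so the ranking never reverses:
assert exponential(50, 1) < exponential(100, 5)
assert exponential(50, 11) < exponential(100, 15)
```

Each time-slice's evaluation is internally coherent; the reversal only looks irrational if you insist the slices are one agent.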
The default fact of the universe is that you aren't the same agent today as tomorrow. An "agent" is a single entity with one set of preferences who makes unified decisions for himself, but today-you can't make decisions for tomorrow-you any more than today-you can make decisions for today-me. Even if today-you seems to "make" a decision for tomorrow-you, tomorrow-you can just do something else. When it comes down to it, today-you isn't the one pulling the trigger tomorrow. It may turn out that you are (approximately) an individual with consistent preferences over time, in which case it's equivalent to today-you being able to make decisions for tomorrow-you, but if so that would be a very special case.
There are evolutionary pressures that encourage agency and exponential discounting in particular. I have also seen models that tried to generate some evolutionary reason for time inconsistency, but never convincingly. I suspect that really, it's just plain hard to get all the different instances of a person to behave as a single agent across time, because that's fundamentally not what people are.
The idea that you are a single agent over time is an illusion supported by inherited memories and altruistic feelings towards your future selves. If you all happen to agree on which one of you should get to eat the donut, I will be surprised.