Comment author: Larks 17 March 2013 03:27:58PM 1 point [-]

When it comes to "the utility function is not up for grabs", we should jettison hyperbolic discounting long before we reject the idea that I'm the same agent now as in one second's time.

Comment author: xv15 17 March 2013 08:03:45PM *  8 points [-]

We can't jettison hyperbolic discounting if it actually describes the relationship between today-me and tomorrow-me's preferences. If today-me and tomorrow-me do have different preferences, there is nothing in the theory to say which one is "right." They simply disagree. Yet each may be well-modeled as a rational agent.

The default fact of the universe is that you aren't the same agent today as tomorrow. An "agent" is a single entity with one set of preferences who makes unified decisions for himself, but today-you can't make decisions for tomorrow-you any more than today-you can make decisions for today-me. Even if today-you seems to "make" a decision for tomorrow-you, tomorrow-you can just do something else. When it comes down to it, today-you isn't the one pulling the trigger tomorrow. It may turn out that you are (approximately) an individual with consistent preferences over time, in which case it's equivalent to today-you being able to make decisions for tomorrow-you, but if so that would be a very special case.

There are evolutionary pressures that encourage agency and exponential discounting in particular. I have also seen models that tried to generate some evolutionary reason for time inconsistency, but never convincingly. I suspect that really, it's just plain hard to get all the different instances of a person to behave as a single agent across time, because that's fundamentally not what people are.
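The preference reversal that makes hyperbolic discounting time-inconsistent, and exponential discounting immune to it, can be seen in a toy calculation (the discount parameters below are hypothetical, chosen only to make the reversal visible):

```python
def hyperbolic(amount, delay, k=1.0):
    """Hyperbolic discounting: present value falls off as 1/(1 + k*t)."""
    return amount / (1 + k * delay)

def exponential(amount, delay, delta=0.9):
    """Exponential discounting: present value falls off as delta**t."""
    return amount * delta ** delay

# Viewed from today, $100 on day 101 beats $80 on day 100...
assert hyperbolic(100, 101) > hyperbolic(80, 100)
# ...but once day 100 arrives, $80 immediately beats $100 tomorrow.
# Today-you and day-100-you are, in effect, agents with different preferences.
assert hyperbolic(80, 0) > hyperbolic(100, 1)

# An exponential discounter never reverses: the comparison reduces to the
# same ratio (100*delta vs. 80) from every vantage point.
assert (exponential(100, 101) > exponential(80, 100)) == \
       (exponential(100, 1) > exponential(80, 0))
```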

The idea that you are a single agent over time is an illusion supported by inherited memories and altruistic feelings towards your future selves. If you all happen to agree on which one of you should get to eat the donut, I will be surprised.

Comment author: xv15 05 March 2013 06:10:09PM *  15 points [-]

Only one out of 21 obstetricians could estimate the probability that an unborn child had Down syndrome given a positive test

Say the doctor knows false positive/negative rates of the test, and also the overall probability of Down syndrome, but doesn't know how to combine these into the probability of Down syndrome given a positive test result.

Okay, so to the extent that it's possible, why doesn't someone just tell them the results of the Bayesian updating in advance? I assume a doctor is told the false positive and negative rates of a test. But what matters to the doctor is the probability that the patient has the disorder. So instead of telling a doctor, "Here is the probability that a patient with Down syndrome will have a negative test result," why not just directly say, "When the test is positive, here is the probability of the patient actually having Down syndrome. When the test is negative, here is the probability that the patient has Down syndrome."

Bayes theorem is a general tool that would let doctors manipulate the information they're given into the probabilities that they care about. But am I crazy to think that we could circumvent much of their need for Bayes theorem by simply giving them different (not necessarily much more) information?

There are counterpoints to consider. But it seems to me that many examples of Bayesian failure in medicine are analogously simple to the above, and could be as simply fixed. The statistical illiteracy of doctors can be offset so long as there are statistically literate people upstream.

Comment author: xv15 06 March 2013 07:22:51PM 2 points [-]

Another alternative is to provide doctors with a simple, easy-to-use program called Dr. Bayes. The program would take as input:

* the doctor's initial estimate of the chance the patient has the disorder (taking into account whatever the doctor knows about various risk factors)
* the false positive and false negative rates of a test.

The program would spit out the probability of having the disorder given positive and negative test results.
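A minimal sketch of what such a Dr. Bayes calculator might compute (the function name and all input numbers here are hypothetical, not actual rates for any real test):

```python
def posterior_given_test(prior, false_pos_rate, false_neg_rate):
    """Apply Bayes' theorem: turn a prior plus test error rates into the
    probabilities a doctor actually cares about."""
    sens = 1 - false_neg_rate                              # P(positive | disorder)
    p_pos = sens * prior + false_pos_rate * (1 - prior)    # P(positive)
    p_given_pos = sens * prior / p_pos                     # P(disorder | positive)
    p_given_neg = false_neg_rate * prior / (1 - p_pos)     # P(disorder | negative)
    return p_given_pos, p_given_neg

# Hypothetical inputs: 1% prior, 5% false positive rate, 10% false negative rate.
pos, neg = posterior_given_test(prior=0.01, false_pos_rate=0.05, false_neg_rate=0.10)
# Even a positive result here leaves roughly an 85% chance of no disorder.
print(f"P(disorder | positive) = {pos:.1%}, P(disorder | negative) = {neg:.2%}")
```

The point of the interface is that the doctor never touches Bayes' theorem: they supply the numbers they already know and read off the numbers they care about.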

Obviously there are already tools on the internet that will implement Bayes theorem for you. But maybe it could be sold to doctors if the interface were designed specifically for them. I could see a smart person in charge of a hospital telling all the doctors at the hospital to incorporate such a program into their diagnostic procedure.

Failing this, another possibility is to solicit the relevant information from the doctor and then do the math yourself. (Being sure to get the doctor's prior before any test results are in). Not every doctor would be cooperative...but come to think of it, refusal to give you a number is a good sign that maybe you shouldn't trust that particular doctor anyway.

Comment author: [deleted] 05 March 2013 07:29:14PM *  5 points [-]

If I understand the following Wikipedia page correctly:

http://en.wikipedia.org/wiki/Positive_predictive_value

The term you are requesting is positive predictive value; negative predictive value is the corresponding term for the probability of not having the disorder given a negative test result.

It also points out that these values do not depend solely on the test; they also require a prevalence figure.

But that being said, you could require each test to be reported with several different prevalence figures:

For instance, using the above example of Down syndrome, you could report the results using the prevalence of Down syndrome at several different maternal ages. (Since the prevalence of Down syndrome is significantly related to maternal age.)
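A quick sketch of how such a report would look, and of how strongly PPV depends on prevalence (the test characteristics and the age/prevalence numbers below are hypothetical, not actual Down syndrome screening figures):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(disorder | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical test characteristics.
sens, spec = 0.99, 0.95

# Hypothetical prevalences by maternal age; the same test yields
# very different PPVs as prevalence rises.
for age, prev in [(25, 1/1200), (35, 1/350), (45, 1/30)]:
    print(f"maternal age {age}: PPV = {ppv(sens, spec, prev):.1%}")
```

Even with a seemingly excellent test, the PPV stays in the low single digits at low prevalence, which is exactly the number the doctor needs to see spelled out.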

In response to comment by [deleted] on MetaMed: Evidence-Based Healthcare
Comment author: xv15 06 March 2013 02:54:34AM 0 points [-]

Thanks, PPV is exactly what I'm after.

The alternative to giving a doctor positive & negative predictive values for each maternal age is to give false positive & negative rates for the test plus the prevalence rate for each maternal age. Not much difference in terms of the information load.

One concern I didn't consider before is that many doctors would probably resist reporting PPVs to their patients, because they are currently recommending tests that would look ridiculous if they actually admitted the PPVs (e.g. breast cancer screening)!

Comment author: CCC 05 March 2013 06:32:07PM 7 points [-]

This stops working in the case where some of the people upstream can't be trusted. Consider the following statement:

"The previous test, if you have a positive result, means that the baby has a 25% chance of having Down syndrome, according to the manufacturer. But my patented test will return a positive result in 99% of cases in which the baby has Down syndrome."

Comment author: xv15 05 March 2013 07:15:56PM 1 point [-]

"False positive rate" and "False negative rate" have strict definitions and presumably it is standard to report these numbers as an outcome of clinical trials. Could we similarly define a rigid term to describe the probability of having a disorder given a positive test result, and require that to be reported right along with false positive rates?

Seems worth an honest try, though it might be too hard to define it in such a way as to forestall weaseling.

Comment author: xv15 11 February 2013 04:34:07PM 15 points [-]

Closeness in the experiment was reasonably literal but may also be interpreted in terms of identification with the torturer. If the church is doing the torturing then the especially religious may be more likely to think the tortured are guilty. If the state is doing the torturing then the especially patriotic (close to their country) may be more likely to think that the tortured/killed/jailed/abused are guilty. That part is fairly obvious but note the second less obvious implication–the worse the victim is treated the more the religious/patriotic will believe the victim is guilty. ... Research in moral reasoning is important because understanding why good people do evil things is more important than understanding why evil people do evil things.

-Alex Tabarrok

Comment author: lukeprog 04 November 2012 01:25:48AM -2 points [-]

We run the risk of going extinct, and the irony is, we did it to ourselves. The ‘smarty pants’ brain that created advanced weapons, complex global economics [and more] is routinely bossed around by the brain that shoots from the hip, makes often terrible decisions, and reacts more to fear and greed than to reason....

No one in their right mind would deliberately create the means of their own extinction, but that’s what we seem to be doing. The only conclusion is that we’re not in our right minds...

K.C. Cole

Comment author: xv15 07 November 2012 12:26:09AM 13 points [-]

I dislike this quote because it obscures the true nature of the dilemma, namely the tension between individual and collective action. Being "not in one's right mind" is a red herring in this context. Each individual action can be perfectly sensible for the individual, while still leading to a socially terrible outcome.

The real problem is not that some genius invents nuclear weapons and then idiotically decides to incite global nuclear war, "shooting from the hip" to his own detriment. The real problem is that incentives can be aligned so that it is in everyone's interest every step along the way, to do their part in their own ultimate destruction.

Of course, if "right mind" was defined to mean "socially optimal mind," fine, we aren't in our right mind. But I don't think that's the default interpretation.

Comment author: xv15 12 July 2012 07:07:50PM 1 point [-]

This post, by its contents and tone, seems to really emphasize the downside of signaling. So let me play the other side.

Enabling signaling can add or subtract a huge amount of value from what would happen without signaling. You can tweak your initial example to get a "rat race" outcome where everyone, including the stupid people, sends a costly signal that ends up being completely uninformative (since everyone sends it). But you can also make it prohibitively mentally painful for stupid people to go to college, versus neutral or even enjoyable for smart people (instead of there being an actual economic cost of engaging in signaling), with a huge gain to employers for being able to tell them apart.

One can look at Nikolai Roussanov's study on how the dynamics of signaling games in US minority communities encourage conspicuous consumption and prevent members of those communities from investing in education and other important goods.

As a counterpoint to this, in other cases the signaling value of education may induce people to get more education than is individually optimal, which is actually a good thing socially if you think education has large positive externalities. And if you work hard and discover a cure for cancer, you will be paid largely through other people's opinions of you, now that you've signaled to them that you are such an intelligent and hard-working and socially-conscious person. (You were just as intelligent before you cured it, but now they know). Since you cannot possibly hope to recoup even a modest fraction of the social value you will have created, that's unambiguously good for incentives.

On any other site, I would probably get away with saying: Since invention is basically the reason for our high modern standards of living, if signaling seriously encourages it, then in the long run the positive value of signaling would seem to dwarf any losses discussed above (even the "poverty" of some minority communities is nothing compared to the poverty in all of our shared historical past). But here...well, here we are pretty worried about where our invention spree might be leading us.

In response to Biased Pandemic
Comment author: xv15 14 March 2012 05:12:53AM 11 points [-]

This sounds awesome. It would be really cool if you could configure it so that identifying biases actually helps you to win by some tangible measure. For example, if figuring out a bias just meant that person stopped playing with that bias (instead of drawing a new one), figuring out biases would be instrumental in winning. The parameters could be tweaked, of course (if people typically figure out the biases quickly, you could make them redraw biases several times). Or you could link the drawing of additional biases to the drawing of epidemic cards.

I have this terrifying vision of a version where it is biases -- not diseases -- which spread throughout the world, and whenever a player's piece is in a city infected with a certain bias, they have to play with it...

Comment author: lukeprog 13 January 2012 02:19:01AM *  16 points [-]

Now that I'm Executive Director I don't have much time to bang my head on hard (research) problems, though I did start doing that a while back.

This is a "merely" inspirational post, but I think there's room for that on LW. There isn't much new insight in A sense that more is possible, either, but I found it valuable.

Comment author: xv15 13 January 2012 03:21:27AM 23 points [-]

Luke, I thought this was a good post for the following reasons.

(1) Not everything needs to be an argument to persuade. Sometimes it's useful to invest your limited resources in better illuminating your position instead of illuminating how we ought to arrive at your position. Many LWers already respect your opinions, and it's sometimes useful to simply know what they are.

The charitable reading of this post is not that it's an attempted argument via cherry-picked examples that support your feeling of hopefulness. Instead I read it as an attempt to communicate your level of hopefulness accurately to people who you largely expect to be less hopeful. This is an imprecise business that necessarily involves some emotional language, but ultimately I think you are just saying: do not privilege induction with such confidence; we live in a time of change.

It might quell a whole class of complaints if you said something like that in the post. Perhaps you feel you've noticed a lot of things that made you question and revise your prior confidence about the unchangingness of the world...if so, why not tell us explicitly?

(2) I also see this post as a step in the direction of your stated goal to spend time writing well. It seems like something you spent time writing (at least relative to the amount of content it contains). Quite apart from the content it contains, it is a big step in the direction of eloquence. LWers are programmed to notice/become alarmed when eloquence is being used to build up a shallow argument, but it's the same sort of writing whether your argument is shallow or deep. This style of writing will do you a great service when it is attached to a much deeper argument. So at the least it's good practice, and evidence that you should stick with your goal.
