Related to: Disguised Queries, Words as Hidden Inferences, Dissolving the Question, Eight Short Studies on Excuses

    Today's therapeutic ethos, which celebrates curing and disparages judging, expresses the liberal disposition to assume that crime and other problematic behaviors reflect social or biological causation. While this absolves the individual of responsibility, it also strips the individual of personhood, and moral dignity

    -- George Will, townhall.com

    Sandy is a morbidly obese woman looking for advice.

    Her husband has no sympathy for her, and tells her she obviously needs to stop eating like a pig, and would it kill her to go to the gym once in a while?

    Her doctor tells her that obesity is primarily genetic, and recommends the diet pill orlistat and a consultation with a surgeon about gastric bypass.

    Her sister tells her that obesity is a perfectly valid lifestyle choice, and that fat-ism, equivalent to racism, is society's way of keeping her down.

    When she tells each of them about the opinions of the others, things really start to heat up.

    Her husband accuses her doctor and sister of absolving her of personal responsibility with feel-good platitudes that in the end will only prevent her from getting the willpower she needs to start a real diet.

    Her doctor accuses her husband of ignorance of the real causes of obesity and of the most effective treatments, and accuses her sister of legitimizing a dangerous health risk that could end with Sandy in hospital or even dead.

    Her sister accuses her husband of being a jerk, and her doctor of trying to medicalize her behavior in order to turn it into a "condition" that will keep her on pills for life and make lots of money for Big Pharma.

    Sandy is fictional, but similar conversations happen every day, not only about obesity but about a host of other marginal conditions that some consider character flaws, others diseases, and still others normal variation in the human condition. Attention deficit disorder, internet addiction, social anxiety disorder (as one skeptic said, didn't we used to call this "shyness"?), alcoholism, chronic fatigue, oppositional defiant disorder ("didn't we used to call this being a teenager?"), compulsive gambling, homosexuality, Asperger's syndrome, antisocial personality, even depression have all been placed in two or more of these categories by different people.

    Sandy's sister may have a point, but this post will concentrate on the debate between her husband and her doctor, with the understanding that the same techniques will apply to evaluating her sister's opinion. The disagreement between Sandy's husband and doctor centers on the idea of "disease". If obesity, depression, alcoholism, and the like are diseases, most people default to the doctor's point of view; if they are not diseases, they tend to agree with the husband.

    The debate over such marginal conditions is in many ways a debate over whether or not they are "real" diseases. The usual surface level arguments trotted out in favor of or against the proposition are generally inconclusive, but this post will apply a host of techniques previously discussed on Less Wrong to illuminate the issue.

    What is Disease?

    In Disguised Queries, Eliezer demonstrates how a word refers to a cluster of objects related along multiple axes. For example, in a company that sorts red smooth translucent cubes full of vanadium from blue furry opaque eggs full of palladium, you might invent the word "rube" to designate the red cubes and the word "blegg" to designate the blue eggs. Both words are useful because they "carve reality at the joints" - they refer to two completely separate classes of things which it's practically useful to keep in separate categories. Calling something a "blegg" is a quick and easy way to describe its color, shape, opacity, texture, and chemical composition. The odd blegg might be purple rather than blue, but in general the characteristics of a blegg remain sufficiently correlated that "blegg" is a useful word. If they weren't so correlated - if blue objects were as likely to be palladium-containing cubes as vanadium-containing eggs - then the word "blegg" would be a waste of breath; the characteristics of the object would remain just as mysterious to your partner after you said "blegg" as they were before.

    "Disease", like "blegg", suggests that certain characteristics always come together. A rough sketch of some of the characteristics we expect in a disease might include:

    1. Something caused by the sorts of things you study in biology: proteins, bacteria, ions, viruses, genes.

    2. Something involuntary and completely immune to the operations of free will.

    3. Something rare; the vast majority of people don't have it.

    4. Something unpleasant; when you have it, you want to get rid of it.

    5. Something discrete; a graph would show two widely separated populations, one with the disease and one without, not a normal distribution.

    6. Something commonly treated with science-y interventions like chemicals and radiation.

    Cancer satisfies every one of these criteria, and so we have no qualms whatsoever about classifying it as a disease. It's a type specimen, the sparrow as opposed to the ostrich. The same is true of heart attacks, the flu, diabetes, and many other conditions.

    Some conditions satisfy a few of the criteria, but not others. Dwarfism seems to fail (5), and it might get its status as a disease only after studies show that the supposed dwarf falls well outside normal human height variation. Despite the best efforts of transhumanists, it's hard to convince people that aging is a disease, partly because it fails (3). Calling homosexuality a disease is a poor choice for many reasons, but one of them is certainly (4): it's not necessarily unpleasant.

    The marginal conditions mentioned above are also in this category. Obesity arguably sort-of-satisfies criteria (1), (4), and (6), but it would be pretty hard to make a case for (2), (3), and (5).
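
    To make the cluster picture concrete, here is a minimal Python sketch (mine, not from the original post; the conditions come from the discussion above, but the numeric scores are purely illustrative) that treats "disease" as graded membership in a cluster of correlated criteria rather than a yes/no fact:

        # Illustrative sketch: "disease" as graded membership in a cluster of
        # correlated criteria. Scores are made up for illustration:
        # 1.0 = clearly satisfies the criterion, 0.5 = sort-of, 0.0 = does not.

        CRITERIA = ["biological cause", "involuntary", "rare",
                    "unpleasant", "discrete", "treated medically"]

        conditions = {
            # Cancer satisfies every criterion.
            "cancer":  [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
            # Obesity sort-of-satisfies (1), (4), and (6), but not (2), (3), (5).
            "obesity": [0.5, 0.0, 0.0, 0.5, 0.0, 0.5],
        }
        assert all(len(s) == len(CRITERIA) for s in conditions.values())

        def cluster_score(scores):
            """Mean criterion score: how well a condition fits the 'disease' cluster."""
            return sum(scores) / len(scores)

        for name, scores in conditions.items():
            print(f"{name}: disease-cluster score = {cluster_score(scores):.2f}")

    On this toy scoring, cancer comes out at 1.0 and obesity somewhere around 0.25; the score summarizes how well the criteria hang together for a given condition, and there is no further fact left over about whether 0.25 "really is" a disease.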

    So, is obesity really a disease? Well, is Pluto really a planet? Once we state that obesity satisfies some of the criteria but not others, it is meaningless to talk about an additional fact of whether it "really deserves to be a disease" or not.

    If it weren't for those pesky hidden inferences...

    Hidden Inferences From the Disease Concept

    The state of the disease node, meaningless in itself, is used to predict several other nodes with non-empirical content. In English: we make value decisions based on whether we call something a "disease" or not.

    If something is a real disease, the patient deserves our sympathy and support; for example, cancer sufferers must universally be described as "brave". If it is not a real disease, people are more likely to get our condemnation; Sandy's husband, for example, calls her a "pig" for her inability to control her eating habits. The difference between "shyness" and "social anxiety disorder" is that people with the first get called "weird" and told to man up, while people with the second get special privileges and the sympathy of those around them.

    And if something is a real disease, it is socially acceptable (maybe even mandated) to seek medical treatment for it. If it's not a disease, medical treatment gets derided as a "quick fix" or an "abdication of personal responsibility". I have talked to several doctors who are uncomfortable suggesting gastric bypass surgery, even in people for whom it is medically indicated, because they believe it is morally wrong to turn to medicine to solve a character issue.

    While a condition's status as a "real disease" ought to be meaningless as a "hanging node" after the status of all other nodes has been determined, it has acquired political and philosophical implications because of its role in determining whether patients receive sympathy and whether they are permitted to seek medical treatment.

    If we can determine whether a person should get sympathy, and whether they should be allowed to seek medical treatment, independently of the central node "disease" or of the criteria that feed into it, we will have successfully unasked the question "are these marginal conditions real diseases" and cleared up the confusion.

    Sympathy or Condemnation?

    Our attitudes toward people with marginal conditions mainly reflect a deontologist libertarian (libertarian as in "free will", not as in "against government") model of blame. On this model, people make decisions using their free will, a spiritual entity operating free from biology or circumstance. People who make good decisions are intrinsically good people and deserve good treatment; people who make bad decisions are intrinsically bad people and deserve bad treatment. But people who make bad decisions for reasons outside their free will may not be intrinsically bad people, and may therefore be absolved from deserving bad treatment. For example, if a normally peaceful person has a brain tumor that affects areas involved in fear and aggression, goes on a crazy killing spree, then has the brain tumor removed and becomes peaceful again, many people would be willing to accept that the killing spree does not reflect negatively on them or open them up to deserving bad treatment, since it had biological rather than spiritual causes.

    Under this model, deciding whether a condition is biological or spiritual becomes very important, and the rationale for worrying over whether something "is a real disease" or not is plain to see. Without figuring out this extremely difficult question, we are at risk of either blaming people for things they don't deserve, or else letting them off the hook when they commit a sin, both of which, to libertarian deontologists, would be terrible things. But determining whether marginal conditions like depression have a spiritual or biological cause is difficult, and no one knows how to do it reliably.

    Determinist consequentialists can do better. We believe it's biology all the way down. Separating spiritual from biological illnesses is impossible and unnecessary. Every condition, from brain tumors to poor taste in music, is "biological" insofar as it is encoded in things like cells and proteins and follows laws based on their structure.

    But determinists don't just ignore the very important differences between brain tumors and poor taste in music. Some biological phenomena, like poor taste in music, are encoded in such a way that they are extremely vulnerable to what we can call social influences: praise, condemnation, introspection, and the like. Other biological phenomena, like brain tumors, are completely immune to such influences. This allows us to develop a more useful model of blame.

    The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of "bad person" and make them "deserve" bad treatment. Consequentialists don't on a primary level want anyone to be treated badly, full stop; thus is it written: "Saddam Hussein doesn't deserve so much as a stubbed toe." But if consequentialists don't believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences. Hurting bank robbers may not be a good in and of itself, but it will prevent banks from being robbed in the future. And, one might infer, although alcoholics may not deserve condemnation, societal condemnation of alcoholics makes alcoholism a less attractive option.

    So here, at last, is a rule for which conditions we offer sympathy and which we offer condemnation: if giving condemnation instead of sympathy decreases the incidence of the condition enough to be worth the hurt feelings, condemn; otherwise, sympathize. Though the rule is based on philosophy that the majority of the human race would disavow, it leads to intuitively correct consequences. Yelling at a cancer patient, shouting "How dare you allow your cells to divide in an uncontrolled manner like this; is that the way your mother raised you??!" will probably make the patient feel pretty awful, but it's not going to cure the cancer. Telling a lazy person "Get up and do some work, you worthless bum," very well might cure the laziness. The cancer is a biological condition immune to social influences; the laziness is a biological condition susceptible to social influences, so we try to socially influence the laziness and not the cancer.
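
    As a toy formalization (mine, not the post's; the function name and all numbers are invented for illustration), the rule amounts to a simple expected-value comparison:

        # Illustrative sketch of the rule above: condemn only if the expected
        # reduction in the condition's incidence is worth the hurt feelings.
        # All quantities are in arbitrary made-up utility units.

        def best_response(incidence_drop_from_blame, benefit_per_case_prevented,
                          harm_from_hurt_feelings):
            """Compare the expected benefit of condemnation with its direct cost."""
            expected_benefit = incidence_drop_from_blame * benefit_per_case_prevented
            return "condemn" if expected_benefit > harm_from_hurt_feelings else "sympathize"

        # Cancer: blame has no effect on cell division, so the benefit term is ~0.
        print(best_response(0.0, 100.0, 5.0))   # -> sympathize

        # Laziness: socially susceptible, so blame may actually reduce incidence.
        print(best_response(0.3, 100.0, 5.0))   # -> condemn

    The point is not the particular numbers but the shape of the decision: the same calculation yields sympathy for the cancer patient and, possibly, condemnation for the lazy person, without ever consulting a fact about what "really is" a disease.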

    The question "Do the obese deserve our sympathy or our condemnation," then, is asking whether condemnation is such a useful treatment for obesity that its utility outweights the disutility of hurting obese people's feelings. This question may have different answers depending on the particular obese person involved, the particular person doing the condemning, and the availability of other methods for treating the obesity, which brings us to...

    The Ethics of Treating Marginal Conditions

    If a condition is susceptible to social intervention, but an effective biological therapy for it also exists, is it okay for people to use the biological therapy instead of figuring out a social solution? My gut answer is "Of course, why wouldn't it be?", but apparently lots of people find this controversial for some reason.

    In a libertarian deontological system, throwing biological solutions at spiritual problems might be disrespectful or dehumanizing, or a band-aid that doesn't affect the deeper problem. To someone who believes it's biology all the way down, this is much less of a concern.

    Others complain that the existence of an easy medical solution prevents people from learning personal responsibility. But here we see the status-quo bias at work, and so can apply a preference reversal test. If people really believe learning personal responsibility is more important than being not addicted to heroin, we would expect these people to support deliberately addicting schoolchildren to heroin so they can develop personal responsibility by coming off of it. Anyone who disagrees with this somewhat shocking proposal must believe, on some level, that having people who are not addicted to heroin is more important than having people develop whatever measure of personal responsibility comes from kicking their heroin habit the old-fashioned way.

    But the most convincing explanation I have read for why so many people are opposed to medical solutions for social conditions is a signaling explanation by Robin Hans...wait! no!...by Katja Grace. On her blog, she says:

    ...the situation reminds me of a pattern in similar cases I have noticed before. It goes like this. Some people make personal sacrifices, supposedly toward solving problems that don’t threaten them personally. They sort recycling, buy free range eggs, buy fair trade, campaign for wealth redistribution etc. Their actions are seen as virtuous. They see those who don’t join them as uncaring and immoral. A more efficient solution to the problem is suggested. It does not require personal sacrifice. People who have not previously sacrificed support it. Those who have previously sacrificed object on grounds that it is an excuse for people to get out of making the sacrifice. The supposed instrumental action, as the visible sign of caring, has become virtuous in its own right. Solving the problem effectively is an attack on the moral people.

    A case in which some people eat less enjoyable foods and exercise hard to avoid becoming obese, and then campaign against a pill that would make avoiding obesity easy, demonstrates some of the same principles.

    There are several very reasonable objections to treating any condition with drugs, whether it be a classical disease like cancer or a marginal condition like alcoholism. The drugs can have side effects. They can be expensive. They can build dependence. They may later be found to be placebos whose efficacy was overhyped by dishonest pharmaceutical advertising. They may raise ethical issues with children, the mentally incapacitated, and other people who cannot decide for themselves whether or not to take them. But these issues do not magically become more dangerous in conditions typically regarded as "character flaws" rather than "diseases", and the same good-enough solutions that work for cancer or heart disease will work for alcoholism and other such conditions (but see here).

    I see no reason why people who want effective treatment for a condition should be denied it or stigmatized for seeking it, whether it is traditionally considered "medical" or not.

    Summary

    People commonly debate whether social and mental conditions are real diseases. This masquerades as a medical question, but its implications are mainly social and ethical. We use the concept of disease to decide who gets sympathy, who gets blame, and who gets treatment.

    Instead of continuing the fruitless "disease" argument, we should address these questions directly. Taking a determinist consequentialist position allows us to do so more effectively. We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective.

    Comments (357; some truncated)

    Yvain:

    The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of "bad person" and make them "deserve" bad treatment. [...] But if consequentialists don't believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences. Hurting bank robbers may not be a good in and of itself, but it will prevent banks from being robbed in the future.

    Or as Oliver Wendell Holmes put it more poignantly:

    If I were having a philosophical talk with a man I was going to have hanged or electrocuted, I should say, "I don't doubt that your act was inevitable for you, but to make it more avoidable by others we propose to sacrifice you to the common good. You may regard yourself as a soldier dying for your country if you like. But the law must keep its promises."

    (I am not a consequentialist, much less a big fan of Holmes, but he sure had a way with words.)

    torekp (+1, 14y):
    The law must keep its promises? That doesn't sound particularly Utilitarian, or even particularly consequentialist. Deontologists could endorse the focus on the distinction between behaviors that are responsive to praise/blame and those, like the development of cancer, that are not. Or to put it another way, on the distinction between behaviors that are responsive to talk and those that are not. Here, "talk" includes self-talk, which includes much reasoning.

    The law must keep its promises? That doesn't sound particularly Utilitarian, or even particularly consequentialist.

    In this case, the law must "keep its promises" because of what would follow if it turned out that the law didn't actually matter. That's a very consequentialist notion.

    torekp (+4, 14y):
    I'm just trying to point out that we can agree with a central point of Yvain's post without endorsing consequentialism. For example, Anthony Ellis offers a deontological deterrence-based justification of punishment. The same goes for Holmes's quip, even if in his case it was motivated by consequentialist reasoning. Especially if we take "your act was inevitable for you" to be an (overblown) restatement of the simple fact of causal determination of action.
    Kaj_Sotala (+2, 14y):
    Oh, right. Yeah, sure - I agree with that.
    Psychohistorian (-1, 14y):
    The whole thing is a bit of a half-measure. Even if people's actions are predetermined, and they are not morally accountable for them, we need to hang them anyways because without such an incentive, even more people may be billiard-balled into doing the same things. Of course, this is all quite beside the point. If you swallow billiard-ball determinism hook, line, and sinker, there's really no point in talking about why we do anything, though, as it works out, there's also little point in objecting to people discussing why we do anything, since it's all going to happen just as it happens no matter what.

    ETA: I'm referring to the commonly misconceived notion of determinism that thinks free will cannot exist because the universe is "merely" physical. I mean "billiard-ball determinism" as something of a pejorative, not as an accurate model of how the universe really works. I'm not claiming that a deterministic universe is incompatible with free will; indeed, I believe the opposite.

    I'm pretty sure this is wrong. A billiard-ball world would still contain reasons and morals.

    Imagine a perfectly deterministic AI whose sole purpose in life is to push a button that increments a counter. The AI might reason as you did, notice its own determinism, and conclude that pushing the button is pointless because "it's all going to happen just as it happens no matter what". But this is the wrong conclusion to make. Wrong in a precisely definable sense: if we want that button pushed and are building an AI to do it, we don't want the AI to consider such reasoning correct.

    Therefore, if you care about your own utility function (which you presumably do), this sort of reasoning is wrong for you too.

    Psychohistorian (+3, 14y):
    I was evidently unclear. When I say "billiard-ball determinism" I mean the caricature of determinism many people think of, the one in which free will is impossible because everything is "merely" physical. If no decision were "free," any evaluative statement is pointless. It would be like water deciding whether or not it is "right" to flow downhill - it doesn't matter what it thinks, it's going to happen. I agree that this is not an accurate rendition of reality. I just find it amusing that people who do think it's an accurate rendition of reality still find the free-will debate relevant. If there is no free will in that sense, there is no point whatsoever to debating it, nor to discussing morality, because it's a done deal.
    Psychohistorian (+1, 14y):
    I would also add that this example assumes free will - if the AI can't stop pushing the button, it doesn't really matter what it thinks about its merits. If it can, then free will is not meaningless, because it just used it.

    I'm not sure what exactly you mean by "can't". Imagine a program that searches for the maximum element of an array. From our perspective there's only one value the program "can" return. But from the program's perspective, before it's scanned the whole array, it "can" return any value. Purely deterministic worlds can still contain agents that search for the best thing to do by using counterfactuals ("I could", "I should"), if these agents don't have complete knowledge of the world and of themselves. The concept of "free will" was pretty well-covered in the sequences.

    Psychohistorian (+5, 14y):
    You're right, but you're not disagreeing with me. My original statement assumed an incorrect model of free will. You are pointing out that a correct model of free will would yield different results. This is not a disputed point. Imagine you have an AI that is capable of "thinking," but incapable of actually controlling its actions. Its attitude towards its actions is immaterial, so its beliefs about the nature of morality are immaterial. This is essentially compatible with the common misconception of no-free-will-determinism. My point was that using an incorrect model that decides "there is no free will" is a practical contradiction. Pointing out that a correct model contains free-will-like elements is not at odds with this claim.
    cousin_it (+4, 14y):
    Yes, I misunderstood your original point. It seems to be correct. Sorry.
    adamisom (+2, 12y):
    Psychohistorian disagrees that cousin_it was disagreeing with him. Very cute ;)
    torekp (0, 14y):
    Is billiard-ball determinism a particular variant? If so, what does the billiard-ball part add?
    [anonymous] (+2, 4y):
    Granting a bit of poetic license and interpretive wiggle-room to Holmes: if we are *rule* utilitarians and the (then-current) laws are the utility-maximizing rules and the legal system is tasking with enforcing those rules, enforcing them—keeping the promise—is the rule-utilitarian'ly morally required thing to do. I think that's a utilitarian interpretation that neither does excessive violence to Holmes' quote nor the category or concept of utilitarianism.

    Telling a lazy person "Get up and do some work, you worthless bum," very well might cure the laziness.

    That depends a lot on whether the reason they're not working is that they already feel they're worthless... in which case the result isn't likely to be an improvement.


    Here's a perfect illustration: Halfbakery discusses the idea of a drug for alleviating unrequited love. Many people speak out against the idea, eloquently defending the status quo for no particular reason other than it's the status quo. I must be a consequentialist, because I'd love to have such a drug available to everyone.

    Thanks for the link-- very entertaining discussion.

    I don't think anyone came out explicitly with the idea that unrequited love works well in some people's lives and badly in others, and people would have their own judgement about whether to take a drug for it.

    Instead, at least the anti-drug contingent reacted as though the existence of the drug meant that unrequited love would go away completely.

    For another example, see The End of My Addiction, a book by a cardiologist who became an alcoholic and eventually found that baclofen, a muscle relaxant, eliminated the craving and also caused him to quit being a shopaholic. He's been trying to get a study funded to see whether there's solid evidence that high doses of the drug undo addictions, but there isn't sufficient interest. It isn't just that the drug is off patent, it's that most people don't see alcohol craving as a problem in itself.

    He's been trying to get a study funded to see whether there's solid evidence that high doses of the drug undo addictions, but there isn't sufficient interest.

    There are a few randomized trials of baclofen, if those count:

    • Addolorato et al. 2006. 18 drinkers got baclofen, 19 got diazepam (the 'gold standard' treatment, apparently). Baclofen performed about as well as diazepam.

    • Addolorato et al. 2007. 42 drinkers got baclofen, 42 got a placebo. More baclofen patients remained abstinent than placebo patients, and the baclofen takers stayed abstinent longer (both results were statistically significant).

    • Assadi et al. 2003. 20 opiate addicts got baclofen, 20 got a placebo. (Statistically) significantly more of the baclofen patients stayed on the treatment, and their depressive and withdrawal symptoms lessened. The baclofen patients also did insignificantly better on 'opioid craving and self-reported opioid and alcohol use.'

    • Shoptaw et al. 2003. 35 cokeheads got baclofen, 35 a placebo. 'Univariate analyses of aggregates of urine drug screening showed generally favorable outcomes for baclofen, but not at statistically significant levels. There was no statistical significance obs

    ... (read more)
    NancyLebovitz (+9, 14y):
    Thanks for looking this up. Unless I've missed something, only the third study might have used baclofen in high enough dosages to test Ameisen's hypothesis. From his FAQ:
    cupholder (+3, 14y):
    Good point. I hadn't read Ameisen's FAQ; I just went running off to Google Scholar as soon as I read your comment. I might come back later and see what doses were used in each of those studies and see whether the studies with higher doses had more positive results.
    TropicalFruit (+1, 3y):
    I think this is actually a case where the status quo bias shines, and the natural pull towards conservatism may be warranted. Do we have any way of reasonably predicting the social effects of curing unrequited love? What if it’s serving some behavioral or mating function that’s critical to the social order? I don’t think it’s appropriate to casually mess with complex systems like human mating, getting rid of something as widespread as unrequited love because it’s uncomfortable. That seems dangerous, and it seems like caution is the correct approach with systems that broad.

    Someone once quipped about a Haskell library that "You know it's a good library when just reading the manual removes the problem it solves from your life forever." I feel the same way about this article. That's a compliment, in case you were wondering.

    The one criticism I would make is that it's long, and I think you could spread this to other sites and enlighten a lot of people if you wrote an abridged version and perhaps illustrated it with silly pictures of cats.

    Thank you very much. That's exactly the feeling I hoped people would have if this dissolved the question and it's great to hear.

    I can't think of how to make this shorter without removing content (especially since this is already pitched at an advanced audience - anything short of LW and I'd have to explain status quo biases, preference reversal tests, and actually justify determinism).

    I can, however, give you a lolcat if you want one.

    rwallace (+9, 14y):
    I ran a Google search for the line you quoted, but no results; I'd be interested to know what the original author meant by it, I don't suppose you have any links handy?
    sketerpot (+9, 14y):
    It was on a wiki page that was lost in a shuffle years ago. HOWEVER! I managed to track down a copy of the page, and hosted it myself. Here's the one I was paraphrasing: It's a pretty funny quotes page, if you like Haskell. And I wouldn't feel right if I didn't include my favorite thing from that page, concerning the proper indentation of C code: Having fought far too many segfaults, and been irritated by the lack of common data structures in libc, I can only agree.
    phob (+1, 14y):
    Thank you for this.
    rwallace (+1, 14y):
    Thanks! excellent reading.

    This is a really interesting post and I will most likely respond on my own blog sometime. In the meantime, I haven't read the whole comment thread, but I don't think this article has been linked yet (I did search for the title): http://www.nytimes.com/2010/01/10/magazine/10psyche-t.html?pagewanted=all

    It's called "The Americanization of Mental Illness". Definitely worth a read; in particular, here is an excellent quotation:

    It turns out that those who adopted biomedical/genetic beliefs about mental disorders were the same people who wanted less contact with the mentally ill and thought of them as more dangerous and unpredictable. This unfortunate relationship has popped up in numerous studies around the world. In a study conducted in Turkey, for example, those who labeled schizophrenic behavior as akil hastaligi (illness of the brain or reasoning abilities) were more inclined to assert that schizophrenics were aggressive and should not live freely in the community than those who saw the disorder as ruhsal hastagi (a disorder of the spiritual or inner self). Another study, which looked at populations in Germany, Russia and Mongolia, found that “irrespective of place . . . ... (read more)

    clarissethorn (0, 14y):
    Also: I recently saw a list of diseases ranked by doctors from most to least stigmatized; the list was accompanied by analysis that claimed that more respected doctors work on less stigmatized illnesses. I saw it on the Internet but alas, I can't find it now. I did find this, though: http://healthpolicy.stanford.edu/news/internet_use_can_help_patients_with_stigmatized_illness_study_finds_2006127/

    Perhaps I'm misunderstanding, but

    There are several very reasonable objections to treating any condition with drugs, whether it be a classical disease like cancer or a marginal condition like alcoholism. The drugs can have side effects. They can be expensive. They can build dependence. They may later be found to be placebos whose efficacy was overhyped by dishonest pharmaceutical advertising. They may raise ethical issues with children, the mentally incapacitated, and other people who cannot decide for themselves whether or not to take them. But these issues do not magically become more dangerous in conditions typically regarded as "character flaws" rather than "diseases", and the same good-enough solutions that work for cancer or heart disease will work for alcoholism and other such conditions.

    seems to summarise to:

    (1) Medical treatments (drugs, surgery, et cetera) for conditions that can be treated in other ways can have negative consequences. (2) But so do those for conditions without other treatments and we use those. (3) Therefore: we should not object to these treatments on the grounds of risks.

    I'd question the validity of this argument. Consider a sc... (read more)

    If I understand you right, you're saying that allowing drugs might discourage people from even trying the willpower-based treatments, which provides a cost of allowing drugs that isn't present in diseases without a willpower-based option.

    It's a good point and I'm adding it to the article.

    Sort-of nitpick:

    The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of "bad person" and make them "deserve" bad treatment. Consequentialists don't on a primary level want anyone to be treated badly, full stop; thus is it written: "Saddam Hussein doesn't deserve so much as a stubbed toe." But if consequentialists don't believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences.

    I would say "utilitarians" rather than "consequentialists" here; while both terms are vague, consequentialism is generally more about the structure of your values, and there's no structural reason a consequentialist (/ determinist) couldn't consider it desirable for blameworthy people to be punished. (Or, with regard to preventative imprisonment of innocents, undesirable for innocents to be punished, over and above the undesirability of the harm that the punishment constitutes.)

    I installed a mental filter that does a find and replace from "utilitarian" to "consequentialist" every time I use it outside very technical discussion, simply because the sort of people who don't read Less Wrong already have weird and negative associations with "utilitarian" that I can completely avoid by saying "consequentialist" and usually keep the meaning of whatever I'm saying intact.

    Less Wrong does deserve better than me mindlessly applying that filter. But you'd need a pretty convoluted consequentialist system to promote blame (and if you were willing to go that far, you could call a deontologist someone who wants to promote states of the world in which rules are followed and bad people are punished, and therefore a consequentialist at heart). Likewise, you could imagine a preference utilitarian who wants people to be punished just because e or a sufficient number of other people prefer it. I'm not sufficiently convinced to edit the article, though I'll try to be more careful about those terms in the future.

    thomblake (+7, 14y):
    I, for what it's worth, think this is a good heuristic.
    utilitymonster (+6, 14y):
    I'm not sure how complicated it would have to be. You might have some standard of benevolence (how disposed you are to do things that make people happy) and hold that other things being equal, it is better for benevolent people to be happy. True, you'd have to specify a number of parameters here, but it isn't clear that you'd need enough to make it egregiously complex. (Or, on a variant, you could say how malevolent various past actions are and hold that outcomes are better when malevolent actions are punished to a certain extent.)

    Also, I don't think you can do a great job representing deontological views as trying to minimize the extent to which rules are broken by people in general. The reason has to do with the fact that deontological duties are usually thought to be agent-relative (and time-relative, probably). Deontologists think that I have a special duty to see to it that I don't break promises in a way that I don't have a duty to see to it that you don't break promises. They wouldn't be happy, for instance, if I broke a promise to see to it that you kept two promises of roughly equal importance.

    Now, if you think of the deontologists as trying to satisfy some agent-relative and time-relative goal, you might be able to think of them as just trying to maximize the satisfaction of that goal. (I think this is right.)

    If you find this issue interesting (I don't think it is all that interesting personally), googling "Consequentializing Moral Theories" should get you in touch with some of the relevant philosophy.
    utilitymonster (+2, 14y):
    Agreed. (Though I agree with the general structure of your post.) A better name for your position might be "basic desert skepticism". On this view, no one is intrinsically deserving of blame. One reason is that I don't think the determinism/indeterminism business really settles whether it is OK to blame people for certain things. As I'm sure you've heard, and I'd imagine people have pointed out on this blog, the prospects of certain people intrinsically deserving blame, independently of benefits to anyone, are not much more cheering if everything they do is a function of the outcome of indeterministic dynamical laws. Another reason is that you can have very similar opinions if you're not a consequentialist. Someone might believe that it is quite appropriate, in itself, to be extra concerned about his own welfare, yet agree with you about when it is a good idea to blame folks.
    Nick_Tarleton (+4, 14y):
    Hmm? There's no reason a consequentialist can't be extra concerned about his own welfare. (Did I misunderstand this?)
    utilitymonster (+1, 14y):
    Well, you clearly could be extra concerned about your own welfare because it is instrumentally more valuable than the welfare of others (if you're happy you do more good than your neighbor, perhaps). Or, you could be a really great guy and hold the view that it's good for great guys to be extra happy. But I was thinking that if you thought that your welfare was extra important just because it was yours you wouldn't count as a consequentialist. As I was mentioning in the last post, there's some controversy about exactly how to spell out the consequentialist/non-consequentialist distinction. But probably the most popular way is to say that consequentialists favor promoting agent-neutral value. And thinking your welfare is special, as such, doesn't fit that mold. Still, there are some folks who say that anyone who thinks you should maximize the promotion of some value or other counts as a consequentialist. I think this doesn't correspond as well to the way the term is used and what people naturally associate with it, but this is a terminological point, and not all that interesting.
    Vladimir_Nesov (+5, 14y):
    Consequentialism doesn't work by reversing the hedonic/deontological error of focusing on the agent, by refusing to consider the agent at all. A consequentialist cares about what happens with the whole world, the agent included. I'd say it's the only correct understanding for human consequentialists to care especially about themselves, though of course not exclusively.
    utilitymonster (+1, 14y):
    I hope I didn't say anything to make you think I disagree with this. I noted that there might be instrumental reasons to care about yourself extra if you're a consequentialist. But how could one outcome be better than another, just because you, rather than someone else, received a greater benefit? Example: you and another person are going to die very soon and in the same kind of way. There is only enough morphine for one of you. Apart from making one of your deaths less painful, nothing relevant hangs on who gets the morphine. I take it that it isn't open to the consequentialist to say, "I should get the morphine. It would be better if I got it, and the only reason it would be better is because I was me, rather than him, who received it."
    Vladimir_Nesov (+2, 14y):
    Your preference is not identical with the other person's preference. You prefer to help yourself more than the other person, and the other person similarly. There is no universal moral. (You might want to try the metaethics sequence.)
    utilitymonster (+1, 14y):
    Our question is this: is there a consequentialist view according to which it is right for someone to care more about his own welfare, as such? I said there is no such view, because consequentialist theories are agent-neutral (i.e., a consequentialist value function is indifferent between outcomes that are permutations of each other with respect to individuals and nothing else; switching Todd and Steve can't make an outcome better, if Steve ends up with all of the same properties as Todd and vice versa).

    I agree that a preference utilitarian could believe that in a version of the example I described, it could be better to help yourself. But that is not the case I described, and doesn't show that consequentialists can care extra about themselves, as such. My "consequentialist" said: Yours identifies a different reason. He says, "I should get the morphine. This is because there would be more total preference satisfaction if I did this." This is a purely agent-neutral view.

    My "consequentialist" is different from your consequentialist. Mine doesn't think he should do what maximizes preference satisfaction. He maximizes weighted preference satisfaction, where his own preference satisfaction is weighted by a real number greater than 1. He also doesn't think his preferences are more important in some agent-neutral sense. He thinks that other agents should use a similar procedure, weighing their own preferences more than the preferences of others.

    You can bring out the difference between them by considering a case where all that matters to the agents is having a minimally painful death. My "consequentialist" holds that even in this case, he should save himself (and likewise for the other guy). I take it that on the view you're describing, saving yourself and saving the other person are equally good options in this new case. Therefore, as I understand it, the view you described is not a consequentialist view according to which agents should always care more about themselves, as
    jimrandomh (+5, 14y):
    I don't think this is a necessary property for a value system to be called consequentialist. Value systems can differ in which properties of agents they care about, and a lot of value systems single the agent that implements them out as a special case.
    utilitymonster (0, 14y):
    This is where things get murky. The traditional definition is this:

    Consequentialism: an act is right if no other option has better consequences.

    You can say that it is consistent with consequentialism (in this definition) to favor yourself, as such, only if you think that situations in which you are better off are better than situations in which a relevantly similar other is better off. Unless you think you're really special, you end up thinking that the relevant sense of "better" is relative to an agent. So some people defend a view like this:

    Agent-relative consequentialism: For each agent S, there is a value function Vs such that it is right for S to A iff A-ing maximizes value relative to Vs.

    When a view like this is on the table, consequentialism starts to look pretty empty. (Just take the value function that ranks outcomes solely based on how many lies you personally tell.) So some folks think, myself included, that we'd do better to stick with this definition:

    Agent-neutral consequentialism: There is an agent-neutral value function v such that an act is right iff it maximizes value relative to v.

    I don't think there is a lot more to say about this, other than that paradigm historical consequentialists rejected all versions of agent-relative consequentialism that allowed the value function to vary from person to person. Given the confusion, it would probably be best to stick to the latter definition or always disambiguate.
    jimrandomh (+6, 14y):
    Consequentialist value systems are a huge class; of course not all consequentialist value systems are praiseworthy! But there are terrible agent-neutral value systems, too, including conventional value systems with an extra minus sign, Clippy values, and plenty of others. Here's a non-agent-neutral consequentialist value that you might find more praiseworthy: prefer the well-being of friends and family over strangers.
    utilitymonster (0, 14y):
    Yeah, the objection wasn't supposed to be that because there was an implausible consequentialist view on that definition of "consequentialism", it was a bad definition. The objection was that pretty much any maximizing view could count as consequentialist, so the distinction isn't really worth making.

    Others complain that the existence of an easy medical solution prevents people from learning personal responsibility. But here we see the status-quo bias at work, and so can apply a preference reversal test. If people really believe learning personal responsibility is more important than being not addicted to heroin, we would expect these people to support deliberately addicting schoolchildren to heroin so they can develop personal responsibility by coming off of it. Anyone who disagrees with this somewhat shocking proposal must believe, on some level, that having people who are not addicted to heroin is more important than having people develop whatever measure of personal responsibility comes from kicking their heroin habit the old-fashioned way.

    Now that's a good use of the reversal test!

    I remember being in a similar argument myself. I was talking with someone about how I had (long ago!) deliberately started smoking to see if quitting would be hard [1], and I found that, though there were periods where I'd had cravings, it wasn't hard to distract myself, and eventually they went away and I was able to easily quit.

    The other person (who was not a smoker and so probably didn't take anything personally) said, "Well, sure, in that case it's easy to quit smoking, because you went in with the intent to prove it's easy to quit. Anyone would find it easy to stay away from cigarettes in that case!"

    So I said, "Then shouldn't that be the anti-smoking tactic that schools use? Make all students take up smoking, just to prove they can quit. Then, everyone will grow up with the ability to quit smoking without much effort."

    [1] and many, many people have told me this is insane, so no need to remind me

    I once met someone who started smoking for the same reason you did and is still addicted, so you couldn't have been at that much of an advantage.

    I am torn between telling you you're insane and suggesting you take up crack on a sort of least convenient possible world principle.

    SilasBarta (+5, 14y):
    Eh, I don't claim to be immune from addiction and addiction-like cravings. It's just that, AFAICT, I can only get addicted (in the broader sense of the term) to legal stuff. See this blog post for further information. I still struggle with e.g. diet and excessive internet/computer usage. And, in fairness, maybe I needed to smoke more to make it a meaningful test, though I did get to the point where I had cravings.
    NancyLebovitz (+7, 14y):
    Your experiment seems to me to prove less than you'd hope about people in general-- afaik there's metabolic variation in how people react to nicotine withdrawal.
    gwern (+6, 14y):
    I'm afraid I don't have anywhere near as awesome a personal story as that; I can say that my family seems to have a tradition of making kids drink some beer or alcohol a few times, though, and it seems to work.
    SilasBarta (+2, 14y):
    Right, because no one actually likes the taste of alcohol, nor the inhalation of smoke; and then eventually they decide to take up drinking, or smoking, because of the psychoactive effects such as relaxation, loss of inhibitions, or getting high. Just kidding, I'm not starting that debate again! ;-)
    mindviews (+3, 14y):
    I don't think that's a good example. For the status-quo bias to be at work we need to have the case that we think it's worse for people to have both less personal responsibility and more personal responsibility (i.e., the status quo is a local optimum). I'm not sure anyone would argue that having more personal responsibility is bad, so the status-quo bias wouldn't be in play and the preference reversal test wouldn't apply. (A similar argument works for the current rate of heroin addiction not being a local optimum.)

    I think the problem in the example is that it mixes the axes for our preferences for people to have personal responsibility and our preferences for people not to be addicted to heroin. So we have a space with at least these two dimensions. But I'll claim that personal responsibility and heroin use are not orthogonal. I think the real argument is in the coupling between personal responsibility and heroin addiction. Should we have more coupling or less coupling? The drug in this example would make for less coupling. So let's do a preference reversal test: if we had a drug that made your chances of heroin addiction more coupled to your personal responsibility, would you take that? I think that would be a valid preference reversal test in this case if you think the current coupling is a local optimum.

    "But the most convincing explanation I have read for why so many people are opposed to medical solutions for social conditions is a signaling explanation by Robin Hans...wait! no!...by Katja Grace."

    Yeah! The hell with that Robin Hanson guy! He's nothing but a signaller trying to signal that he's better than signalling by talking about signals!

    I am so TOTALLY not like that.

    ;)

    Great article, by the way; I just can't resist metahumour though.

    I recently wrote a blog article arguing that 95% of psychology and psychiatry is snake-oil and pseudoscience; primarily I was directing my ire at the incoherency of much of it, but I had the implicit premise of dismissing the types of 'conditions' you wrote about as pathologizing the mundane.

    While on the one hand I object to much of classifying these conditions as such - if the government ever manages to mindprobe me I know they'll classify me as an alcoholic paranoid with schizoid tendencies (something that I see nothing wrong with) - on the other hand you present a powerful argument of "Hey, if it works, what's wrong with that?" (The day they invent a workout pill is the day I stop going for bloody stupid jogs.)

    I'd wager that most people her... (read more)

    We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective.

    I think you said it better earlier when you talked about whether the reduction in incidence outweighs the pain caused by the tactic. For some conditions, if it weren't for the stigma there would be little to nothing unpleasant about them (and we wouldn't need to talk about reducing incidence).

    I agree with your general principle, ... (read more)

    Very good article. One thing I'd like to see covered is conditions that are "treatable" with good lifestyle choices, but whose burden is so onerous that no one would consider them acceptable. Let's say you have a genetic condition which causes you to gain much more weight (5x, 10x - the number is up to the reader) than a comparable non-affected person. So much that the only way you can prevent yourself from becoming obese is to strenuously exercise 8 hours a day. If a person chooses not to do this, are they really making a "bad" choice... (read more)

    If there's some cure for the genetic condition, naturally I'd support that. Otherwise, I think it would fall under the category of "the cost of the blame is higher than the benefits would be." It's not part of this person's, or my, or society's, or anyone's preferences that this person exercise eight hours a day to maintain an ideal weight, so there's no benefit to blaming them until they do.

    As for the second example, regarding "is it still right to hold someone so treated /morally/ responsible for doing poorly in their life", this post could be summarized as "there's no such thing as moral responsibility as a primitive object". These people aren't responsible if they're poor, just like a person with a wonderful childhood isn't responsible if they're poor, but if we have evidence that holding them responsible helps them build a better life, we might as well treat them as responsible anyway.

    (the difference, I think, is that we have much more incentive to help the person with the terrible childhood, because one could imagine that this person would respond well to help; the person with the great childhood has already had a lot of help and we have no reason to think that giving more will be of any benefit)

    I agree on the case of genetic obesity, but my answer may be different for the case of an extremely impoverished childhood. Part of my response is reflected in the fact that neither I (nor anyone I personally know) grew up in that level of poverty, so that in imagining the poverty situation I have to counterfactually modify the world and I'm not sure how to do it.

    In one imaginary scenario I would find someone facing malnutrition, violently abusive parents, and mental retardation, in an environment with no effective police services in the actual world, and imagine myself helping them from a distance as a stranger. This is basically "how to help the comprehensively poor as an external intervention". There are a lot of people like this on the planet and helping them is a really hard problem that is not very imaginary at all. I don't think I have any kind of useful answer that fits in this space and meshes with the themes in the OP.

    A second imaginary scenario would be that I am also in the same general situation but only slightly better off. Perhaps there is rampant crime and poverty but my parents gave me minimally adequate nutrition and they weren't abusive (yet I m... (read more)

    [anonymous] (+4, 14y):
    I think that's the best model for semi-voluntary problems -- it's usually not the case that no amount of effort would solve them, but that they would need much more effort than the average person. People with poor parents can become rich, but they have to work much harder than people with rich parents for the same result. If you're doing the same amount of work as an average person, I'd say you deserve as much credit as an average person.
    soreff (+2, 14y):
    Excellent point. This can even be made considerably stronger: The whole health care debate was about ~15% of our economy (I'm writing from the U.S.). For any given individual, working a 40 hour week, the equivalent cost would be to burden them with ~15% of their working hours with some lifestyle choice (whether 6 hours per week of exercise or some other comparably time consuming action). Lifestyle changes can be damned expensive in terms of opportunity costs.
    dhill (-2, 14y):
    ...also everything may be a problem and an opportunity. You could consider yourself lucky, if you wanted to become a body-builder. Some quirks can actually become an advantage. I would say a real solution (when available) is more robust than hiding from a problem (of wrong perception).
    AspiringKnitter (+4, 12y):
    The OP probably meant adipose tissue, not muscle.
    A1987dM (-1, 12y):
    A sumo wrestler?

    Great post.

    We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective.

    I think that this rule contains the sub-rule "condemn conditions such that people are aware of the actions that lead to them" almost all the time, because our condemnation cannot possibly create positive externalities otherwise. It's similar to how jails represent no deterrence if you don't know what actio... (read more)

    You're homing in on the one fuzzy spot in this essay that jumped out at me, but I don't think you're addressing it head on because you (as well as Yvain) seem to be assuming that there are, in point of fact, many situations where condemnation and lack of sympathy will have net positive outcomes.

    Yvain wrote:

    Yelling at a cancer patient, shouting "How dare you allow your cells to divide in an uncontrolled manner like this; is that the way your mother raised you??!" will probably make the patient feel pretty awful, but it's not going to cure the cancer. Telling a lazy person "Get up and do some work, you worthless bum," very well might cure the laziness. The cancer is a biological condition immune to social influences; the laziness is a biological condition susceptible to social influences, so we try to socially influence the laziness and not the cancer.

    It seems to me that there are a minuscule number of circumstances where yelling insults that fall afoul of the fundamental attribution error is going to have positive consequences taking everything into account.

    1. In general, people do things that are logical reactions to their environments, given their limited ti

    ... (read more)

    It seems to me that there are a minuscule number of circumstances where yelling insults that fall afoul of the fundamental attribution error is going to have positive consequences taking everything into account.

    I got the impression from OP that the "condemned condition vs. disease" dichotomy primarily manifests itself as society's general attitudes, a categorization that determines people's modes of reasoning about a condition. I think the Sandy example was exaggerated for the purpose of illustration and Yvain probably does not advocate yelling insults in real life.

    If someone is already in a woeful condition, it is unlikely that harsh treatment does any good, for all the reasons you wonderfully wrapped up. But nonetheless an alcoholic has to expect a great deal of silent and implied condemnation and a greatly altered disposition towards him from society - a predictable deterrence. Another very important factor is the makeup of the memepool about alcoholism. If the notion that drinking leads to "wrecking one's life" and "losing human dignity" thoroughly permeates society, an alcoholic candidate may be more likely to attempt overcoming their addict... (read more)

    Very late reply here, but

    Sandy's sister was starting from the place Yvain's article left off. Having dissolved the kind of shallow disagreement between the men, she had probably moved into her personal toolbox for actually helping her sister process an emotionally complex situation - one likely to pose serious problems in finding and executing the right strategy in the face of hostile epistemic influences, and possible akrasia if Sandy started feeling really guilty for enjoying food and carrying a few extra pounds. Politicizing the issues and "blaming society" isn't without costs or failure modes, but it can help some people get out of "guilt mode" and start using their brains.

    Note that Sandy's sister started with an examination of the personal choices available to Sandy, the information sources available to her, and the incentives and goals of the people offering the various theories. I assume that after Sandy got into an emotionally safe context to talk about her issues, there's a chance she would decide to do something in her power to change course and decrease her weight.

    It is not my experience that people who support obesity as a valid life choice and decry "fat-ism" as akin to sexism and racism tend to take this next step.

    Excellent article, though there is a point I'd like to see addressed on the topic.

    One salient feature of these marginal, lifestyle-related conditions is the large number of false positives that comes with diagnosis. How many alcoholics, chronic gamblers, and so on, are really incapable of helping themselves, as opposed to just being people who enjoy drinking or gambling and claim to be unable to help themselves to diminish social disapproval? Similarly, how many are diagnosed by their peers ("He's so mopey, he must be depressed") and possibly come to believ... (read more)

    0Vulture12y
    "How many alcoholics, chronic gamblers, and so on, are really incapable of helping themselves, as opposed to just being people who enjoy drinking or gambling and claim to be unable to help themselves to diminish social disapproval?" But by self-diagnosing as an alcoholic, a person would thereby be much more likely to become the focus of deliberate social interventions, like being taken to Alcoholics Anonymous (a shining example, by the way, of well-rganied and effective social treatment of a disease) or some such. This sort of focused attention, essentially being treated as if one had a disease, I would think would be the opposite of what a hedonistic boozer would want. Would they really consider possible medical intervention a fair price to pay for slightly less disapproval from friends?

    The graph image is broken. Does anyone have a copy of the image file? I remember what it looked like, and it was super-useful for demonstrating the concept.

    2Vaniver7y
    I predict Yvain still has one, since I think Raikoth was his personal site. Odds are high the site is temporarily down and it'll be fixed, but I'll ping him.
    0arundelo7y
    If he's let the raikoth.net domain lapse intentionally (maybe to minimize the amount of old stuff by him on the internet) I hope he'll consider renewing it just so he can host a permissive robots.txt. This way the rest of raikoth.net will no longer be visible to casual internet searchers but will still be available on the Internet Archive's Wayback Machine (which it will not if someone else buys the domain and puts up a restrictive robots.txt).
    2btrettel7y
    I spidered his site with wget at one point. I'd be happy to provide a copy to anyone who wants it, but I'm afraid wget did not get everything, e.g., the image in question here would probably not have been found by wget.
    1btrettel7y
    A copy is available from the Internet Archive.

    The condition of rarity does not appear to be a necessary condition for a disease. If 90% of the population had AIDS, AIDS would still be a disease. Or the flu, or gonorrhea. Perhaps, "It needs to be something where, if everyone had it, it would still be called a disease" is the point you're aiming for. Plenty of psychiatric "problems" are problems principally because they go against current social norms - this is why homosexuality was previously classified as a disease - and it seems like that's what you're going for. I think this issue may already be covered in your non-normal distribution condition, which is brilliant.

    None of the conditions are absolutely necessary. On the other hand, the rarity condition is at least as important as the others. If all people had a third functional hand, nobody would think it was a disease. But as it is, virtually all people have two hands, and most of the hypothetical three-handers would opt to surgically remove the superfluous limb, even though a third hand could be useful for several tasks.

    Or, more realistically, almost all people have an appendix, which is of no use except that it can host appendicitis. If only 1% of people had an appendix, I am pretty sure that having one would be classified as a potentially life-threatening congenital disease.

    As for your example, if 99% of people had had AIDS since time immemorial, are you sure it would be classified as a disease? People would have weaker immunity and die younger than they do today - that's the only difference. Now we die at 75, with a few long-livers who manage to remain healthy up to 90 and die at 100. In an AIDS-permeated society we would die at 25, and the few without AIDS who managed to live to 50 or 70 would be viewed as anomalies.

    A very interesting article that made me think. I am not sure exactly where my thoughts line up with yours, so this will be primarily a means of clarifying what I think.

    It seems to me that the entire purpose of framing obesity as a disease is to deflect the "blame" for obesity elsewhere. The disease-ness alone may not be the entire issue.

    For example:

    Person A bothers morbidly obese person B about trying to lose weight.

    Person B says that obesity is a disease and not her fault.

    Person A objects to obesity being a disease, in their mind per... (read more)

    7Blueberry14y
    Actually, many human behaviors like smoking and exposure to sunlight can cause cancer.
    2Sly14y
    Ah true. I had something like brain cancer in mind when I wrote this. But yes, lung cancer in smokers would also fall into the second category.
    -2ocr-fork14y
    Regret doesn't cure STDs.
    7JenniferRM14y
    I don't understand why this is downvoted to (as of my writing) -2. This actually seemed like a pithy response that raised a fascinating twist on the general model. It was a response to Sly saying: The interesting part is that a given state can have different "choice and willpower" requirements for getting in versus getting out. This gets you back into the situation described by Holmes of punishing people in order to discourage other people from following their initial behavior, even (in the case of STDs) in the face of the inability of the punished person to "regret their way to a cure" once they've already made the mistake because they actually are infected with an "external agent" that meets basically all the criteria for disease that Yvain pointed out in the OP.
    2JGWeissman14y
    I downvoted it because I saw it as an irrelevant response to a claim nobody made or implicitly relied on. I was already interpreting Sly's comment in terms of "the situation described by Holmes of punishing people in order to discourage other people from following their initial behavior".

    I think someone read your article: http://www.theatlantic.com/magazine/print/2011/07/the-brain-on-trial/8520/

    He comes at it from a slightly different angle - the criminal justice system - but approaches it the same way, dissolving the question down to blameworthiness and free will. He also reaches the same conclusion; our reaction as a society should be based on influencing future outcomes, not punishing past actions.

    0antz5612y
    There are so many observant writers who have written about this topic: to be sympathized with vs. to be condemned. To me, human rights violations come in two forms: curbing personal freedom by portraying it either as criminal behavior (condemned) or as sickness (sympathized); neither way is acceptable. "A die for not to be obedience might be the only choice, aligned with 'that half naked man.'" My learning/take away: humans' vendibility, one "When the going gets tough, the tough get going"
    0orbenn12y
    There's a book to this effect: http://www.amazon.com/gp/product/0691142084/ref=oh_o03_s01_i01_details A little googling will bring up some very convincing lectures on the subject by the author. Unfortunately he hasn't made many headlines or much headway in actually implementing these ideas.

    This is, quite obviously, a terrific article. One major quibble: your conclusion is rather circular. You assume a consequentialist utilitarian ethics, and then conclude, "Therefore, the optimal solution is to maximize the outcome under consequentialist utilitarian ethics!" I'm not sure it's actually possible to avoid such circularity here, but it does feel a little unsatisfying to me.

    On top of this, your dismissal of the "personal development" issue is a bit hand-wavy. That is, it's one thing if I make a decision to go smoke crack - the... (read more)

    1ChrisHibbert14y
    I don't believe much in penance. (The dictionary I checked said "self punishment as a sign of repentance". I don't think either aspect is valuable.) It's not related to the question of how we should treat people when they have conditions that are often under voluntary control. We should convince them that (assuming they agree that it would be better not to have the condition) their best approach is to accept that the condition is at least partially under voluntary control, that control always appears hard, and therefore to change their lifestyle so as to address the problem. If they agree that the condition is a problem, and they find a magic bullet to solve the problem, then no penance is required. If there's no magic bullet, then they can try to change their lifestyle, but there is no need for them to punish themselves for not understanding the situation before.

    Great Post!

    Anyway, on to the obligatory quibble. "throwing biological solutions at spiritual problems might be disrespectful or dehumanizing, or a band-aid that doesn't affect the deeper problem" The 6 criteria for disease, including 'biological' in so far as that means caused by biological processes simple enough to understand relatively easily and confidently, do seem to me to each provide weak evidential support for any given treatment not being disrespectful, dehumanizing, or superficial. They also seem to provide weak evidence against the l... (read more)

    6Scott Alexander14y
    You'll have to explain that more. I would have said that "dehumanizing" and "disrespectful" are meaningless weasel words in the context of someone freely choosing to take a drug. Disrespect needs a victim, and I'm wary of the idea of being disrespectful to yourself.
    6MichaelVassar14y
    I see fewer 'selves' and more 'agents' than most people here probably do. In particular, I see all sorts of complex cognitive sub-systems with interests and with the ability to act in the service of those interests within each human. I also see verbal expressions of alleged interests which ignore that complexity, and verbalizing sub-systems which attempt to thwart as illegitimate the interests of those other non-verbalizing sub-systems, only to find out that without the cooperation of those other systems they can't actually get anything done. More generally, I think that when you look for ostensible definitions, for instance, for the causes of people claiming that something is 'dehumanizing' or 'disrespectful' and try to understand the causes of those claims, it's not uncommon that you find some legitimate reasons for concern.
    [-][anonymous]14y30

    I like this because it dissolves the question quite effectively. I'm not sure the question should be dissolved, though ... what about the sister?

    This is why I'm not a consequentialist all the way. We may regard it as obvious that cancer is undesirable, but there really may be some who disagree. There are some who disagree that obesity is undesirable. There are some who disagree that depression is undesirable. Health is one issue where most people (in our society) are particularly unlikely to take account of differences in opinion.

    Praise and blame are... (read more)

    5stcredzero14y
    Eating and survival are fundamental functions of life. Someone whose regulatory systems are so out of whack that they are eating/fasting themselves into an early grave is probably subject to control dysfunctions which have an inbuilt advantage over intellectual or social control. Also, punishment is the trickiest of all behavioral modification techniques. It is very likely to backfire, which makes perfect sense: if punishment were very effective on a given individual, he/she would be a perfect slave. Being a perfect slave isn't so great from the perspective of the slave, though it is good for the master. Since human biology doesn't make it easy for a large population of slaves to be related to a master, it makes perfect sense that we'd evolve defenses against punishment. For what it's worth, a member of my band is morbidly obese. He has taken extraordinary measures in terms of effort to lose weight (eschewing use of a car in Houston and walking everywhere instead). His condition is not voluntary.
    4Sly14y
    What do you mean by "his condition is not voluntary"? Because he recently made the decision to walk everywhere, yet still remains obese, his condition is not voluntary? I am not sure that follows.
    4marks14y
    Bear in mind that having more fat means that the brain gets starved of [glucose](http://www.loni.ucla.edu/~thompson/ObesityBrain2009.pdf), and blood sugar levels have [impacts on the brain generally](http://ajpregu.physiology.org/cgi/content/abstract/276/5/R1223). Some research has indicated that the amount of sugar available to the brain has a relationship with self-control. A moderately obese person may have fat cells that steal so much glucose from their brain that their brain is incapable of mustering the will to get them to stop eating poorly. Additionally, the marginal fat person is likely fat because of increased sugar consumption (sugar being the main sort of food whose intake has increased since the origins of the obesity epidemic in the 1970s), in particular the great increase in the consumption of fructose, which is capable of raising insulin levels (which signal the body to start storing energy as fat) while at the same time not activating leptin (which makes you feel full). Thus, people are consuming a substance that may be kicking their bodies into full gear to produce more fat, which leaves them with no energy or will to perform any exercise. The individuals most affected by the obesity epidemic are the poor, and recall that some of the cheapest sources of calories available on the market are foods like fructose and processed meats. While there is a component of volition regardless, if the body works as the evidence suggests, they may have a diet that is pushing them quite hard towards being obese, sedentary, and unable to do anything about it. Think about it this way: if you constantly whack me over the head you can probably get me to do all sorts of things that I wouldn't normally do, but it wouldn't be right to call my behavior in that situation "voluntary". Fat people may be in a similar situation.
    1stcredzero14y
    He doesn't want to be morbidly obese. He wasn't always this way. He doesn't want to die early and has tried to mitigate his trajectory into an early grave. How about someone driving a car, skidding on a patch of oil and colliding with the guard rail? Was the collision voluntary? I don't think so, even if the driver in question habitually speeds and lets themselves get distracted. Add in a broken speedometer, and the analogy is complete. (And note that you can't take a human body out of commission like you can refuse an inspection sticker on a car.)
    0Sly14y
    I think I see what you are saying here. So non-obvious side effects of the decision are non-voluntary. Colliding from speeding and obesity from overeating/lack of exercise would be arguably non-obvious as well. I would say, however, that the metaphor with the car may be more accurate if the driver were repeatedly skidding into mailboxes and other small things (apparently the ground has many oil patches), so that when he later collided with the guard rail it was a rather obvious end result.
    2stcredzero14y
    I notice you say "overeating/lack of exercise." I hope one of those two doesn't indicate careless reading. I wouldn't be so glib about adjusting food intake, unless you've done it and kept weight off for some time. Usually, people who have done this know it isn't trivially easy. It's far from easy. Simply fasting for a set period of time is much easier by comparison.
    1Sly14y
    The overeating/lack of exercise had to do with causes of morbid obesity in general. I understand that this person has started to walk as a means of counteracting the lack of exercise - or are you referring to something else I may be misreading? And yes, I understand that adjusting food intake is non-trivial. How am I being glib? And how is that relevant to the metaphor? Morbid obesity does not just spring up on you; your weight gradually changes and your eating patterns likely get worse. It is not at all like a sudden patch of oil. It would be accurate to describe the situation in terms of a car driver not putting any maintenance into their car. Eventually the car starts to make strange noises. Later on still, the engine light comes on. As years go by, the car drives slower and slower. Are we really surprised when the engine stops working altogether?
    2[anonymous]14y
    My point was not that obesity is voluntary, but that it's worth asking whether or not it's voluntary. I don't think you and I disagree, because you made the point that your band friend's condition isn't voluntary. Yvain's post argues that such questions are not important. I think they may be.
    4Scott Alexander14y
    I sort of agree. I didn't treat this issue because the post was already getting too long. We have various incentives to want obese people to become thin: paternalistic concern for their health, negative externalities, selfish reasons if we're their friend or relative and want to continue to enjoy their company without them dying early, aesthetic reasons, the emotional drain of offering them sympathy if we don't think they deserve it. One of the most important reasons is helping them overcome akrasia - if they want to become thinner, us being seen to condemn obesity might help them. If they don't want to become thinner, that incentive goes away. The other incentives might or might not be enough to move us on their own. (Usually, though, these things only become issues at the societal level. I can't think of the last time I personally was mean to an obese person, despite having ample opportunities. In that context, I think the feelings of particular obese people on the issue become less important.)

    Yvain, you have a couple of instances of "(LINK)" in your text. I expect you intended to replace them with links :-).

    1Scott Alexander14y
    I can't imagine what would possibly have given you that idea. (@$!%. Fixed.)

    The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of "bad person" and make them "deserve" bad treatment. Consequentialists don't on a primary level want anyone to be treated badly, full stop; thus is it written: "Saddam Hussein doesn't deserve so much as a stubbed toe." But if consequentialists don't believe in punishment for its

    ... (read more)
    9Matt_Simpson14y
    Once you factor in the dangers of giving humans that sort of power, I think that "problem" goes away for the most part.

    The only problem with this is that it works in reverse. We could put people who haven't committed a crime in jail on the grounds that they are likely to, or that it helps society when they're in jail.

    Once you factor in the dangers of giving humans that sort of power, I think that "problem" goes away for the most part.

    I think a lot of you are missing that (a version of) this is already happening, and the connotations of the words "jail" and "imprison" may be misleading you.

    Typically, jail is a place that sucks to be in. But would your opinion change if someone were preventatively "imprisoned" in a place that's actually nice to live in, with great amenities, like a gated community? What if the gated community were, say, the size of a country?

    And there, you see the similarity. Everybody is, in a relevant sense, "imprisoned" in their own country (or international union, etc.). To go to another country, you typically must be vetted for whether you would be dangerous to the others, and if you're regarded as a danger, you're left in your own country. With respect to the rest of the world, then, you have been preventatively imprisoned in ... (read more)

    4GreenRoot14y
    This is an interesting way of thinking about citizenship and immigration, one which I think is useful. I don't think I've ever thought about the way other countries' immigration rules regard me. Thanks for the new thought.
    1Peterdjones13y
    I'd call that arbitrage. I don't see what memetics has got to do with it.
    7SilasBarta13y
    The relevant metaphor here is "killing the goose that lays the golden eggs". A country with pro-prosperity policies is a goose. Filling it with people who haven't assimilated the memes of the people who pass such policies will arguably lead to the end of this wealth production so sought after by immigrants. Arbitrage doesn't kill metaphorical geese like that: it simply allows people to get existing golden eggs more efficiently. It might destroy one particular seller's source of profit, but it does not destroy wealth-production ability the way an immigrant-based memetic overload would.

    So if there existed a hypothetical institution with the power to mete out preventive imprisonment, and which would reliably base its decisions on mathematically sound consequentialist arguments, would you be OK with it? I'm really curious how many consequentialists here would bite that bullet. (It's also an interesting question whether, and to what extent, some elements of the modern criminal justice system already operate that way in practice.)

    [EDIT: To clarify a possible misunderstanding: I don't have in mind an institution that would make accurate predictions about the future behavior of individuals, but an institution that would preventively imprison large groups of people, including many who are by no means guaranteed to be future offenders, according to criteria that are accurate only statistically. (But we assume that they are accurate statistically, so that its aggregate effect is still evaluated as positive by your favored consequentialist calculus.)]

    This seems to be the largest lapse of logic in the (otherwise very good) above post. Only a few paragraphs above an argument involving the reversal test, the author apparently fails to apply it in a situation where it's strikingly applicable.
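    A minimal sketch of the aggregate calculus this hypothetical assumes (the additive utility model and every number below are invented purely for illustration, not a claim about any real group or policy):

```python
# Toy version of the aggregate consequentialist calculus assumed in the
# hypothetical above. All inputs are invented, unit-less "utility" numbers;
# this is only meant to show the shape of the calculation, not to endorse it.

def net_effect_of_preventive_imprisonment(group_size: int,
                                          offense_rate: float,
                                          harm_per_offense: float,
                                          harm_per_person_imprisoned: float) -> float:
    """Harm prevented by imprisoning the whole statistically risky group,
    minus the harm imposed on everyone in it (offenders and non-offenders
    alike). A positive result is what the hypothetical stipulates."""
    harm_prevented = group_size * offense_rate * harm_per_offense
    harm_imposed = group_size * harm_per_person_imprisoned
    return harm_prevented - harm_imposed

# Example with made-up numbers: 1000 people, 5% would offend, each offense
# costs 100 utility, each imprisonment costs 4 utility.
print(net_effect_of_preventive_imprisonment(1000, 0.05, 100, 4))  # 1000.0 > 0
```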

    9Scott Alexander14y
    I'll bite that bullet. I already have in the case of insane people, and arguably in the case of terrorists who belong to a terrorist cell and are hatching terrorist plots but haven't committed any attacks yet. But it would have to be pretty darned accurate, and there would have to be a very low margin of error.
    8SilasBarta14y
    Why would this institution necessarily imprison them? Why not just require the different risk classes to buy liability insurance for future damages they'll cause, with the riskier ones paying higher rates? Then they'd only have to imprison the ones that can't pay for their risk. (And prohibition of something for which the person can't bear the risk cost is actually pretty common today; it's just not applied to mere existence in society, at least in your own country.)
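    A minimal sketch of the risk-priced liability scheme described here (the actuarially fair premium rule and the detain-only-if-unable-to-pay test are simplifying assumptions added for illustration):

```python
# Toy sketch of the risk-priced liability scheme suggested above. The
# "actuarially fair" premium rule and the detain-only-if-unable-to-pay test
# are simplifying assumptions added for illustration.

def required_premium(offense_probability: float, expected_damages: float) -> float:
    """Premium charged to a risk class: expected cost of the damage it causes."""
    return offense_probability * expected_damages

def must_be_detained(available_funds: float, premium: float) -> bool:
    """Under the scheme, only those who cannot cover their premium are detained."""
    return available_funds < premium

# Made-up example: a 2% yearly offense risk with $50,000 expected damages
# prices out to a $1,000 premium.
print(required_premium(0.02, 50_000))   # 1000.0
print(must_be_detained(800, 1000.0))    # True
print(must_be_detained(5_000, 1000.0))  # False
```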
    8JGWeissman14y
    If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb's problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes, I would be OK with it. I don't trust any human institution to satisfy the first two criteria (honesty and accuracy), and I expect anything that does satisfy the first two would not satisfy the third (no better option). The topic of preemptive imprisonment was not under discussion, so it seems strange to consider it an error not to apply a reversal test to it.
    3Vladimir_M14y
    Please see the edit I just added to the post; it seems like my wording wasn't precise enough. I had in mind statistical treatment of large groups, not prediction of behavior on an individual basis (which I assume is the point of your analogy with Newcomb's problem). I agree that it's not critical to the main point of the post, but I would say that it's a question that deserves at least a passing mention in any discussion of a consequentialist model of blame, even a tangential one.
    2ocr-fork14y
    I would also be ok with this... however by your own definition it would never happen in practice, except for extreme cases like cults or a rage virus that only infects redheads.
    0babblefrog14y
    How much of a statistical correlation would you require? Anything over 50%? 90%? 99%? I'd still have a problem with this. "It is better [one hundred] guilty Persons should escape than that one innocent Person should suffer." - Ben Franklin
    1dclayh14y
    An article by Steve Landsburg on a similar quote. And a historical overview of related quotes.
    1ocr-fork14y
    Enough to justify imprisoning everyone. It depends on how long they'd stay in jail, the magnitude of the crime, etc. I really don't care what Ben Franklin thinks.
    2babblefrog14y
    Sorry, not arguing from authority, the quote is a declaration of my values (or maybe just a heuristic :-), I just wanted to attribute it accurately. My problem may just be lack of imagination. How could this work in reality? If we are talking about groups that are statistically more likely to commit crimes, we already have those. How is what is proposed above different from imprisoning these groups? Is it just a matter of doing a cost-benefit analysis?
    0ocr-fork14y
    It's not different. Vladimir is arguing that if you agree with the article, you should also support preemptive imprisonment.
    7ShardPhoenix14y
    Yes, this is obviously (to me) the right thing to do if possible. For example, we put down rabid dogs before they bite anyone (as far as I know). I can't think of any real-world human-applicable examples off the top of my head, though - although some groups are statistically more liable to crime than others, the utility saved would be far more than outweighed by the disutility of the mass imprisonment.
    5Matt_Simpson14y
    My only reservation is that I might actually intrinsically value "innocent until proven guilty." Drawing the line between intrinsic values and extremely useful but only instrumental values is a difficult problem when faced with the sort of value uncertainty that we [humans] have. So assuming that this isn't an intrinsic value, sure, I'll bite that bullet. If it is, I would still bite the bullet, assuming that the gains from preemptive imprisonment outweigh the losses associated with preemptive imprisonment being an intrinsic bad.
    3LauraABJ14y
    It seems that one way society tries to avoid the issue of 'preemptive imprisonment' is by making correlated behaviors crimes. For example, a major reason marijuana was made illegal was to give authorities an excuse to check the immigration status of laborers.
    1orthonormal14y
    I bite this bullet as well, given JGWeissman's caveat about the probity and reliability of the institution, and Matt Simpson's caveat about taking into account the extra anguish humans feel when suffering for something.
    0khafra14y
    Sexual offenders have a high rate of recidivism. Some states keep them locked up indefinitely, past the end of their sentences. Any of the various state laws which allow for involuntary commitment as an inpatient, like Florida's Baker Act, also match your description.
    4savageorange14y
    Correction: Sexual offenders have an unusually low rate of recidivism (about 7% IIRC); There is certainly a strong false perception that they have a high rate of recidivism, though.
    5JoshuaZ14y
    Correct, the recidivism rate for sexual offenses is generally lower than for the general criminal population in the United States, although the calculated rate varies a lot based on the metric and type of offense. See here. Quoting from that page: "Marshall and Barbaree (1990) found in their review of studies that the recidivism rate for specific types of offenders varied:

    * Incest offenders ranged between 4 and 10 percent.
    * Rapists ranged between 7 and 35 percent.
    * Child molesters with female victims ranged between 10 and 29 percent.
    * Child molesters with male victims ranged between 13 and 40 percent.
    * Exhibitionists ranged between 41 and 71 percent."

    This is in contrast to base rates for reoffense in the US for general crimes, which range from around 40% to 60% depending on the metric; see here. This isn't the only example where recidivism rates for specific types of people have been poorly described. There's been a big deal made by certain political groups that about 20% of people released from Gitmo went on to fight the US. Note also that in Western Europe recidivism for the general criminal population is lower. I believe that the recidivism rate for sexual offenses does not correspondingly drop, but I don't have a citation for that. Edit: The last claim may be wrong; this article suggests that at least in the UK recidivism rates are close to those in the US for the general criminal population.
    -1cupholder14y
    You might still be mostly correct about Western Europe - the UK could be an outlier relative to the rest of Western Europe.
    1Alicorn14y
    Citation, please?
    3JoshuaZ14y
    See my reply to Savageorange where I gave the statistics and citations here. Savage is correct although the phenomenon isn't as strong as Savage makes it out to be.
    8Vladimir_Nesov14y
    If it really does help the society, it's by definition not a problem, but a useful thing to do.
    -1Houshalter14y
    I suppose so, under this point of view, but does that make it right? Also note that "helping society" isn't an exact definition. We will have to draw the line between helping and hurting, and we have already done that with the constitution. We have decided that it is best for society if we don't put innocent people in jail.
    8Vladimir_Nesov14y
    We do put innocent people in prison. If not putting innocent people in prison was the most important thing, we'd have to live without prisons. The tradeoff is there, but it's easier to be hypocritical about it when it's not made explicit.
    -1Houshalter14y
    We do our best not to put innocent people in prison. Actually, I should have been more clear: We try to put all criminals in jail, but not innocent people. And there's something called reasonable doubt.
    9NancyLebovitz14y
    I don't think we do our best not to put innocent people in prison. I think we make some efforts to avoid it, but they're rather half-hearted. For example, consider government resistance to DNA testing for prisoners. Admittedly, this is about keeping people in prison rather than putting them there in the first place, but I think it's an equivalent issue, and I assume the major reason for resisting DNA testing is not wanting to find out that the initial reasons for imprisoning people were inadequate. Also, there's plea bargaining, which I think adds up to saying that we'd rather put people into prison without making the effort to find out whether they're guilty.
    -2Houshalter14y
    What do you mean? They did do DNA testing and discovered that dozens of people in prisons actually were innocent. That's to make sure that if someone actually is innocent and more evidence comes up later, they can get out rather than rot away for the rest of their lives. It's a good thing.
    5NancyLebovitz14y
    Everything I've read about DNA testing for prisoners has said that it was difficult for them to get the testing done. In some cases, they had to pay for it themselves. Plea bargaining isn't just for life sentences. I'm not sure you understand what plea bargaining is-- it means that a suspect accepts a shorter sentence for a lesser accusation in exchange for not taking the risk of getting convicted of a more serious crime at a trial.
    -2zero_call14y
    That's a flagrant misinterpretation. The OP's intention was to say that innocent people don't get put in prison intentionally.
    5stcredzero14y
    Before things go that far, shouldn't a society set up voluntary programs for treatment? Exactly how does one draw the line between punishment and treatment? Our society has blurred the two notions. (Plea bargaining involving attendance of a driving course.)
    6SilasBarta14y
    Very true. As I noted in my other comment, jails necessarily suck to be in, above and beyond the loss of freedom of movement. We just don't have a common, accepted protocol to handle people who are "dangerous to others, though they haven't (yet) done anything wrong, and maybe did good by turning themselves in". Such people would deserve to be detained, but not in a way intended to be unpleasant. The closest examples I can think of for this kind of treatment (other than the international border system I described in the other comment) are halfway houses, quarantining, jury sequestration, and insane asylums (in those cases where the inmate has just gone nuts but not committed violent crimes yet). There needs to be a more standard protocol for these intermediate cases, which would look similar to minimum security prisons, but not require you to have committed a crime, and be focused on making you less dangerous so you can be released.
    5MichaelVassar14y
    Great point. In real life one should usually look for the best available option when considering a potentially costly change, rather than just choosing one hard contrarian choice on a multiple-choice test. The fact that we have conflicting intuitions on a point is probably evidence that better 'third way' options exist.
    -2Houshalter14y
    Who would volunteer to go to jail? Seriously, if the cops came to your door and told you that, because your statistics suggested you were likely to commit a crime, you had to go to a "rehabilitation program", would you want to go, or resist (if possible)? From this hypothetical point of view, there is no difference. There is no real punishment, but you can hardly call sending someone to jail - or worse, execution - treatment.
    0Patashu13y
    Jails don't HAVE to be places of cruel and unusual punishment, as they are currently in the US. The prisons in Norway, for instance, are humane - they almost look like real homes. The purpose of a jail is served (ensuring people can't harm those in society) while diminishing side effects as much as possible and encouraging rehabilitation. Example: http://www.globalpost.com/dispatch/europe/091017/norway-open-prison
    0Houshalter13y
    That's the problem: where do you draw the line between rehabilitation and punishment? Getting criminals out of society is one benefit of prisons, but so is creating a deterrent to committing crimes. If I were a poor person and prison were this nice awesome place full of luxuries, I might actually want to go to prison. Obviously that's an extreme example, but how much of a cost getting caught is certainly plays a role when you ponder committing a crime. In ancient societies, they had barbaric punishments for criminals. The crime rate was high and criminals were rarely caught. And when resources are limited, providing someone free food and shelter is too costly, and starving people might actually try to get in. Not to mention they didn't have any way of rehabilitating people. Personally I am in favor of more rehabilitation. There are a lot of repeat offenders in jail, and most criminals are irrational and affected by bias anyway, so treating them like rational agents doesn't work.
    3Patashu13y
    In the case where someone wishes to commit a crime so they can spend time in jail, they'll probably perform something petty, which isn't TOO bad especially if they can confess and the goods be returned (or an equivalent). If social planning can lower the poverty rate and provide ample social nets and re-education for people in a bad spot in their lives in the first place, this thing is also less likely to be a problem (conversely, if more people become poor, prisons will be pressured to become worse to keep them below the perceived bottom line). Finally, prison can be made to be nice, but it isolates you from friends, family and all places outside the prison, and imposes routine on you, so if you desire control over your life you'll be discouraged from going there.
    2AShepard14y
    You might check out Gary Becker's writings on crime, most famously Crime and Punishment: An Economic Approach. He starts from the notion that potential criminals engage in cost-benefit analysis and comes to many of the same conclusions you do.

    Now do schizophrenia?

    ...the situation reminds me of a pattern in similar cases I have noticed before. It goes like this. Some people make personal sacrifices, supposedly toward solving problems that don’t threaten them personally. They sort recycling, buy free range eggs, buy fair trade, campaign for wealth redistribution etc. Their actions are seen as virtuous. They see those who don’t join them as uncaring and immoral. A more efficient solution to the problem is suggested. It does not require personal sacrifice. People who have not previously sacrificed support it. Those wh

    ... (read more)

    Great post!

    However, I have the following problem with the scenario - I have a hard time trusting a doctor who prescribes a diet pill and a consultation with a surgeon but omits healthy diet and exercise. (Genetic predisposition does not trump the laws of thermodynamics!)

    In general, I don't know of any existing medicine that can effectively replace willpower when treating addiction - which is why treatment is so difficult in the first place.

    Psychology tells us that, on the individual level, encouragement works better than blame. Although both have far less impact than one would hope.

    The way I see it, we are blaming the 'intelligence' process for the things that this process caused or had the power to prevent, and we aren't blaming it for other things where it was powerless. A bad outcome (like obesity) implies a character flaw if a less flawed character would not end up with this outcome. And this is perfectly consistent with the notion that the process itself had been shaped by things outside its control. A bad AI is a bad AI even though it's the programmer's fault; a badly designed bridge is a bad bridge even though it is architect's fau... (read more)

    Very good article!

    A couple of comments:

    So here, at last, is a rule for which diseases we offer sympathy, and which we offer condemnation: if giving condemnation instead of sympathy decreases the incidence of the disease enough to be worth the hurt feelings, condemn; otherwise, sympathize.

    Almost agreed: It is also important to recheck criterion 4:

    Something unpleasant; when you have it, you want to get rid of it

    to see if reducing the incidence of the disease is actually a worthwhile goal.
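    A toy sketch of the rule quoted above, together with this criterion-4 check (nothing here is from the original post; every argument is a hypothetical, unit-less utility number):

```python
# Toy sketch of the condemnation rule quoted above, plus the "recheck
# criterion 4" caveat from this comment. Every argument is a hypothetical,
# unit-less utility number chosen by whoever runs the calculation.

def should_condemn(cases_prevented: float,
                   harm_per_case: float,
                   total_hurt_feelings: float,
                   reducing_incidence_is_worthwhile: bool = True) -> bool:
    """Condemn only if (a) reducing the condition is actually a worthwhile
    goal and (b) the harm avoided outweighs the hurt feelings caused;
    otherwise sympathize."""
    if not reducing_incidence_is_worthwhile:
        return False
    return cases_prevented * harm_per_case > total_hurt_feelings

# Made-up example: condemnation prevents 10 cases worth 5 utility each,
# at a hurt-feelings cost of 30 -> condemn; at a cost of 80 -> sympathize.
print(should_condemn(10, 5, 30))  # True
print(should_condemn(10, 5, 80))  # False
```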

    On another note:

    Cancer satisfies every one of these criteria,

    ... (read more)
    3Scott Alexander14y
    Good points. But prostate cancer might be an "ostrich" version of cancer (see the link on "ostrich" above) and something like breast cancer might be considered more like a type specimen.

    Slightly off-topic, but I was reading in Bernard Williams's Ethics and the Limits of Philosophy last night and this quote about a difference between deontological and consequentialist ethics caught my attention:

    Obligation and duty look backwards, or at least sideways. The acts they require, supposing one is deliberating about what to do, lie in the future, but the reasons for those acts lie in the fact that I have already promised, the job I have undertaken, the position I am already in. Another kind of ethical consideration looks forward, to the outcomes of the acts open to me.

    Great article

    Taking a determinist consequentialist position allows us to do so more effectively

    This sounds a little timid; being a determinist consequentialist is not an instrument that allows us to reach some goal (an accidental implication, I am sure), it is an honest outlook in itself.

    I would argue that the utility of a treatment also depends on the particular proximate genetic and/or environmental causes of the disease/illness/problem at hand.

    Let's imagine two obese individuals, person A and B.

    Person A's obesity can be attributed to some sort of genetic propensity to eat more than the average person, e.g., having lower-than-average impulse control, being rewarded by high-calorie foods more than average, suffering more than average from exercise, etc.

    Person B uses highly rewarding, high-calorie foods as a way to regulate ne... (read more)

    If people really believe learning personal responsibility is more important than being not addicted to heroin, [...]

    Please feel free to correct me in case I misunderstood your point here, but I think the point you raise is unfair, because originally it's about the choice between two different approaches (help on a biological vs. on a social level) in case they both produce the same outcome - in your example, however, you adjust the outcome according to your desired conclusion (and it's fairly obvious to choose the one that actually helps).

    Edit: I'm new to this site and just realized I'm a little bit late for this discussion, sorry about that.

    This is a very well written post which I enjoyed reading quite a bit. The writing is clear, the (well cited!) application of ideas developed on LW to the problem is great to support further building on them, and your analysis of the conventional wisdom regarding disease and blameworthiness as a consequence of a deontologist libertarian ethics rang true for me and helped me to understand my own thinking on the issue better.

    Thanks for the care you put into this post.

    Great post. I try to give the nutshell version of this type of reasoning every time I get dragged into an abortion debate or the debate addressed in this post. People are much more receptive to this sort of thinking for diseases than they are for abortion.

    3Blueberry14y
    How does it apply to abortion? I'm not sure what you mean.
    3Matt_Simpson14y
    Much of the abortion debate is over whether a fetus counts as a "person."
    3Blueberry14y
    I'm still not sure I understand. So you're saying to taboo the term 'person' (a being with moral rights)? That still doesn't address the main point, which is balancing the value of the fetus against the rights of the mother.
    6Matt_Simpson14y
    Not exactly. "Is a fetus a person?" is a disguised query. When you ask that question, you are really asking, "should we allow women to abort fetuses?" Which is, as you said, the main point. But that doesn't stop some people from arguing semantics.

    No, that's not the same question at all. Suppose we agree that a fetus is a person: that is, that a fetus should have the same moral rights as an adult. It's still not at all clear whether abortion should be legal. One of J. J. Thomson's thought experiments addresses this point: suppose you wake up and find yourself being used as a life support machine for a famous violinist. Do you have the right to disconnect the violinist? Thomson argued that you did, and thus that people should have the right to an abortion, even if a fetus is a person.

    Alternatively, consider something like the endangered species act: no one thinks that a spotted owl or other endangered species is a person, but there are many people who think that we shouldn't be allowed to kill them freely.

    No, that's not the same question at all.

    You're missing my point. I'm not saying that it's the same question. Many times when people get into the abortion debate, they start arguing over whether a fetus is a person. The pro-choice side will point out the dissimilarities between a fetus and a human. The pro-life side will counter with the similarities. All of this is in an effort to show that a fetus is a "person." But that isn't really the relevant question. Say they finally settle the issue and come up with a suitable definition of "person" which includes fetuses of a certain age. Should abortion be allowed? Well, they don't really know. But they will try to use the definition to answer that question.

    This is what I mean when I say that "is a fetus a person?" is a disguised query. The real question at issue is "should abortion be allowed?" They aren't the same question at all, but in most debates, once you have the answer to the first you have the answer to the second, and it shouldn't be that way because the first question is mostly irrelevant.

    6Blueberry14y
    Ah, I see! Yes, I agree completely. ETA: And most people in abortion debates don't seem to realize this. There are also the questions of whether it should be legal even if it's unethical (to avoid unsafe abortions that kill the mother), and whether abortion law should be decided at the state or federal level, which also get confused with the other questions. You can oppose Roe on federalism grounds even if you support abortion.
    0NancyLebovitz14y
    :"Is a fetus a person?" isn't just about abortion, but about other rights for fetuses as well. If a fetus is a person, is the woman carrying it legally obligated to not endanger it?
    4Blueberry14y
    I still think that's a disguised query. Whether a fetus is a person is a separate question from whether a woman is obligated to not endanger it. For instance, protected species of animals are not people, but we are legally obligated to not endanger them in certain ways. Convicted murderers on death row, enemy soldiers at war, and people trying to kill you are considered people, but in some situations involving such people, there is no legal obligation to not endanger them. I can consistently think a fetus is a person, but that there should be no requirement to not endanger it, and vice versa.

    Our attitudes toward people with marginal conditions mainly reflect a deontologist libertarian (libertarian as in "free will", not as in "against government") model of blame. In this concept, people make decisions using their free will, a spiritual entity operating free from biology or circumstance. People who make good decisions are intrinsically good people and deserve good treatment; people who make bad decisions are intrinsically bad people and deserve bad treatment. But people who make bad decisions for reasons that are outside of

    ... (read more)

    Test for Consequentialism:

    Suppose you are a judge deciding whether person X or Y committed a murder. Let's also assume your society has the death penalty. A supermajority of society (say, encouraged by the popular media) has come to think that X committed the crime, and their confidence in the justice system would decrease if he were set free, but you know (e.g. because you know Bayes) that Y was responsible. We also assume you know that Y won't reoffend if set free because (say) they have been too spooked by this episode. Will you condemn X or Y? (Befor... (read more)

    2Richard_Kennaway10y
    By condemning X, I uphold the people's trust in the justice system, while making it unworthy of that trust. By condemning Y, I reduce the people's trust in the justice system, while making the system worthy of their trust. But what is their trust worth, without the reality that they trust in? If I intend the justice system to be worthy of confidence, I desire to act to make it worthy of confidence. If I intend it to be unworthy of confidence, I desire to act to make it unworthy of confidence. Let me not become unattached to my desires, nor attached to what I do not desire. Also, there is no Least Convenient Possible World. The Least Convenient Possible World for your interlocutors is the Most Convenient Possible World for yourself, the one where you get to just say "Suppose that such and such, which you think is Bad, were actually Good. Then it would be Good, wouldn't it?"
    0Roxolan10y
    In the least convenient possible world, condemning an innocent in this one case will not make the system generally less worthy of confidence. Maybe you know it will never happen again.
    6Richard_Kennaway10y
    Maybe everyone would have a pony. ETA: It is not for the proponent of an argument to fabricate a Least Convenient Possible World -- that is, a Most Convenient Possible World for themselves -- and insist that their interlocutors address it, brushing aside every argument they make by inventing more and more Conveniences. The more you add to the scenario, the smaller the sliver of potential reality you are talking about. The endpoint of this is the world in which the desired conclusion has been made true by definition, at which point the claim no longer refers to anything at all. The discipline of the Least Convenient Possible World is a discipline for oneself, not a weapon to point at others. If I, this hypothetical judge, am willing to have the innocent punished and the guilty set free, to preserve confidence that the guilty are punished and the innocent are set free, I must be willing that I and my fellow judges do the same in every such case. Call this the Categorical Imperative, call it TDT, that is where it leads, at the speed of thought, not the speed of time: to take one step is to have travelled the whole way. I would have decided to blow with the mob and call it justice. It cannot be done.
    2Jiro10y
    The categorical imperative ignores the possibility of mixed strategies--it may be that doing X all the time is bad, doing Y all the time is bad, but doing a mixture of X and Y is not. For instance, if everyone only had sex with someone of the same sex, that would destroy society by lack of children. (And if everyone only had sex with someone of the opposite sex, gays would be unsatisfied, of course.) The appropriate thing to do, is to allow everyone to have sex with the type of partner that fits their preferences. Or to put it another way, "doing the same thing" and "in the same kind of case" depend on exactly what you count as the same--is the "same" thing "having only gay sex" or "having either type of sex depending on one's preference"? In the punishment case, it may be that we're better off with a mixed strategy of sometimes killing innocent people and sometimes not; if you always kill innocent people, the justice system is worthless, but if you never kill innocent people, people have no confidence in the justice system and it also ends up being worthless. The optimal thing to do may be to kill innocent people a certain percentage of the time, or only in high profile public cases, or whatever. Asking "would you be willing to kill innocent people all the time" would be as inappropriate as asking "would you be willing to be in a society where people (when having sex) have gay sex all the time". You might be willing to do the "same thing" all the time where the "same thing" means "follow the public's preference, which sometimes leads to killing the innocent" (not "always kill the innocent ") just like in the gay sex example it means "follow someone's sexual preference, which sometimes leads to gay sex" (not "always have gay sex").
    0Richard_Kennaway10y
    Yes, the categorical imperative has the problem of deciding on the reference class, as do TDT, the outside view, and every attempt to decide what precedent will be set by some action, or what precedent the past has set for some decision. Eliezer coined the phrase "reference class tennis" to refer to the broken sort of argumentation that consists of choosing competing reference classes in order to reach desired conclusions. So how do you decide on the right reference class, rather than the one that lets you conclude what you already wanted to for other reasons? TDT, being more formalised (or intended to be, if MIRI and others ever work out exactly what it is) suggests a computational answer to this question. The class that your decision sets a precedent for is the class that shares the attributes that you actually used in making your decision -- the class that you would, in fact, make the same decision for. This is not a solution to the reference class problem, or even an outline of a solution; it is only a pointer in a direction where a solution might be found. And even if TDT is formalised and gives a mathematical solution to the reference class problem, we may be in the same situation as we are with Bayesian reasoning: we can, and statisticians do, actually apply Bayes theorem in cases where the actual numbers are available to us, but "deep" Bayesianism can only be practiced by heuristic approximation.
    0Jiro10y
    "Would you like it if everyone did X" is just a bad idea, because there are some things whose prevalences I would prefer to be neither 0% nor 100%, but somewhere inbetween. That's really an objection to the categorical imperative, period. I can always say that I'm not really objecting to the categorical imperative in such a situation by rephrasing it in terms of a reference class "would you like it if everyone performed some algorithm that produced X some of the time", but that gets far away from what most people mean when they use the categorical imperative, even if technically it still fits. An average person not from this site would not even comprehend "would you like it if everyone performed some algorithm with varying results" as a case of the golden rule, categorical imperative, or whatever, and certainly wouldn't think of it as an example of everyone doing the "same thing". In most people's minds, doing the same thing means to perform a simple action, not an algorithm.
    0Richard_Kennaway10y
    In that case, the appropriate X is to perform the action with whatever probability you would wish to be the case. It still fits the CI. Or more briefly, it still fits. But you have to actually make the die roll. What "an average person not from this site" would or would not comprehend by a thing is not relevant to discussions of the thing itself.
    2Jiro10y
    In that case, you can fit anything whatsoever into the categorical imperative by defining an appropriate reference class and action. For instance, I could justify robbery with "How would I like it, if everyone were to execute 'if (person is Jiro) then rob else do nothing'". The categorical imperative ceases to have meaning unless some actions and some reference classes are unacceptable. That's too brief. Because "what do most people mean when they say this" actually matters. They clearly don't mean for it to include "if (person is Jiro) then rob else do nothing" as a single action that can be universalized by the rule.
    1A1987dM10y
    The reason that doesn't work is that people who are not Jiro would not like it if everyone were to execute 'if (person is Jiro) then rob else do nothing', so they couldn't justify you robbing that way. The fact that the rule contains a gerrymandered reference class isn't by itself a problem.
    0nshepperd10y
    Does the categorical imperative require everyone to agree on what they would like or dislike? That seems brittle.
    0Jiro10y
    I've always heard it, the Golden Rule, and other variations stated as some form of "would you like it if everyone were to do that?" I've never heard of it as "would everyone like it if everyone were to do that?". I don't know where army1987 is getting the second version from.
    0A1987dM10y
    This post discusses the possibility of people “not in moral communion” with us, with the example of a future society of wireheads.
    0Richard_Kennaway10y
    Doing which is reference class tennis, as I said. The solution is to not do that, to not write the bottom line of your argument and then invent whatever dishonest string of reasoning will end there. No kidding. And indeed some are not, as you clearly understand, from your ability to make up an example of one. So what's the problem?
    0nshepperd10y
    What principle determines what actions are unacceptable apart from "they lead to a bottom line I don't like"? That's the problem. Without any prescription for that, the CI fails to constrain your actions, and you're reduced to simply doing whatever you want anyway.
    2Richard_Kennaway10y
    This asserts a meta-meta-ethical proposition that you must have explicit principles to prescribe all your actions, without which you are lost in a moral void. Yet observably there are good and decent people in the world who do not reflect on such things much, or at all. If to begin to think about ethics immediately casts you into a moral void where for lack of yet worked out principles you can no longer discern good from evil, you're doing it wrong.
    2nshepperd10y
    Look, I have no problem with basing ethics on moral intuitions, and what we actually want. References to right and wrong are after all stored only in our heads. But in the specific context of a discussion of the Categorical Imperative—which is supposed to be a principle forbidding "categorically" certain decisions—there needs to be some rule explaining what "universalizable" actions are not permitted, for the CI to make meaningful prescriptions. If you simply decide what actions are permitted based on whether you (intuitively) approve of the outcome, then the Imperative is doing no real work whatsoever.
    2TheAncientGeek10y
    If, like most people, you don't want to be murdered, the CI will tell you not to murder. If you don't want to be robbed, it will tell you not to rob. Etc. It does work for the normal majority, and the abnormal minority are probably going to be a problem under any system.
    4nshepperd10y
    Please read the above thread and understand the problem before replying. But for your benefit, I'll repeat it: explain to me, in step-by-step reasoning, how the categorical imperative forbids me from taking the action "if (I am nshepperd) then rob else do nothing". It certainly seems like it would be very favourable to me if everyone did "if (I am nshepperd) then rob else do nothing".
    0TheAncientGeek10y
    That's a blatant cheat. How can you have a universal law that includes a specific exception for a named individual?
    4Desrtopa10y
    The way nshepperd just described. It is, after all, a universal law, applied in every situation. It just returns different results for a specific individual. We can call a situation-sensitive law like this a piecewise law. Most people would probably not want to live in a society with a universal law not to steal unless you are a particular person, if they didn't know in advance whether or not the person would be them, so it's a law one is unlikely to support from behind a veil of ignorance. However, some piecewise laws do better behind veils of ignorance than non-piecewise universal laws. For instance, laws which distinguish our treatment of introverts from extroverts stand to outperform ones which treat both according to the same standard. You can rescue non-piecewise categorical imperatives by raising them to a higher level of abstraction, but in order to keep them from being outperformed by piecewise imperatives, you need levels of abstraction higher than, for example, "Don't steal." At a sufficient level of abstraction, categorical imperatives stop being actionable guides, and become something more like descriptions of our fundamental values.
    0TheAncientGeek10y
    I'm all in favour of going to higher levels of abstraction. It's a much better approach than coding in kittens-are-nice and slugs-are-nasty.
    0Lumifer10y
    Is there anything that makes it qualitatively different from if (subject == A) { return X } elsif (subject==B) { return Y } elsif (subject==C) { return Z } ... etc. etc.?
    3Jiro10y
    No, there isn't any real difference from that, which is why the example demonstrates a flaw in the Categorical Imperative. Any non-universal law can be expressed as a universal law. "The law is 'you can rob', but the law should only be applied to Jiro" is a non-universal law, but "The law is 'if (I am Jiro) then rob else do nothing' and this law is applied to everyone" is a universal law that has the same effect. Because of this ability to express one in terms of the other, saying "you should only do things if you would like for them to be universally applied" fails to provide any constraints at all, and is useless. Of course, most people don't consider such universal laws to be universal laws, but on the other hand I'm not convinced that they are consistent when they say so--for instance "if (I am convicted of robbery) then put me in jail else nothing" is a law that is of similar form but which most people would consider a legitimate universalizable law.
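    To make that equivalence concrete, here is a minimal sketch in Python (the function names and the "everyone" list are purely illustrative, not anything proposed in the thread). Both rules have the "universal" form of a single function that every person executes; only the second gives the same verdict regardless of who runs it:

        def piecewise_law(person):
            """A 'universal' law in form only: everyone executes this same
            function, but it singles out one named individual."""
            if person == "Jiro":
                return "rob"
            return "do nothing"

        def ordinary_law(person):
            """A universal law in the intended sense: the same verdict for everyone."""
            return "do not rob"

        everyone = ["Jiro", "Alice", "Bob"]
        print([(p, piecewise_law(p)) for p in everyone])
        # [('Jiro', 'rob'), ('Alice', 'do nothing'), ('Bob', 'do nothing')]
        print([(p, ordinary_law(p)) for p in everyone])
        # [('Jiro', 'do not rob'), ('Alice', 'do not rob'), ('Bob', 'do not rob')]

    The sketch only shows that "expressible as one rule applied to everyone" is too weak a test by itself; any further constraint (fairness, a ban on gerrymandered reference classes) has to come from outside the universalization step.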
    -2TheAncientGeek10y
    If the law gives different results for different people doing the same thing, it isn't universal in the intended sense, which is pretty much the same as fairness.
    1Jiro10y
    "In the intended sense" is not a useful description compared to actually writing down a description. It also may not necessarily even be consistent. Furthermore, it's clear that most people consider "if (I am convicted of robbery) then put me in jail else nothing" to be a universal law in the intended sense, yet that gives different results for different people (one result for robbers, another result for non-robbers) doing the same thing (nothing, in either case).
    -10TheAncientGeek10y
    2Desrtopa10y
    I don't think there is, but then, I don't think that classifying things as universal law or not is usually very useful in terms of moral guidelines anyway. I consider the Categorical Imperative to be a failed model.
    0TheAncientGeek10y
    Why is it failed? A counterexample was put forward that isn't a universal law. That doesn't prove the CI to be wrong. So what does? We already adjust rules by reference classes, since we have different rules for minors and the insane. Maybe we just need rules that are apt to the reference class and impartial within it.
    2Desrtopa10y
    When you raise it to high enough levels of abstraction that the Categorical Imperative stops giving worse advice than other models behind a veil of ignorance, it effectively stops giving advice at all due to being too abstract to apply to any particular situation with human intelligence. You can fragment the Categorical Imperative into vast numbers of different reference classes, but when you do it enough to make it ideally favorable from behind a veil of ignorance, you've essentially defeated any purpose of treating actions as if they were generalizable to universal law.
    0TheAncientGeek10y
    I'd love to know the meta-model you are using to judge between models. Universal isn't really universal, since you can't prove mathematical theorems to stones. Fairness within a reference class counts.
    2Desrtopa10y
    I think I've already made that implicit in my earlier comments; I'm judging based on the ability of a society run on such a model to appeal to people from behind a veil of ignorance.
    -2TheAncientGeek10y
    I think that is a false dichotomy. One rule for everybody may well fail; everybody having their own rule may well fail. However, there is still the tertium datur of N>1 rules for M>1 people, which is kind of how legal systems work in the real world.
    2Desrtopa10y
    Legal systems that were in place before any sort of Categorical Imperative formulation, and did not particularly change in response to it. I think our own legal systems could be substantially improved upon, but that's a discussion of its own. Do you think that the Categorical Imperative formulation has helped us, morally speaking, and if so how?
    -6TheAncientGeek10y
    1Jiro10y
    If we have different rules for minors and the insane, why can't we have different rules for Jiro? "Jiro" is certainly as good a reference class as "minors".
    -2TheAncientGeek10y
    Remember the "apt". You would need to explain why you need those particular rules.
    1Vaniver10y
    Explain to who? And do I just have to explain it, or do they have to agree?
    -8TheAncientGeek10y
    0A1987dM10y
    A qualitative difference is a quantitative difference that is large enough.
    0Lumifer10y
    Sometimes. Not always.
    -2TheAncientGeek10y
    It's not like the issue has never been noticed or addressed: "Hypothetical imperatives apply to someone dependent on them having certain ends to the meaning: if I wish to quench my thirst, I must drink something; if I wish to acquire knowledge, I must learn. A categorical imperative, on the other hand, denotes an absolute, unconditional requirement that asserts its authority in all circumstances, both required and justified as an end in itself. It is best known in its first formulation: Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.[1] "--WP
    0Roxolan10y
    If that's what makes the world least convenient, sure. You're trying for a reductio ad absurdum, but the LCPW is allowed to be pretty absurd. It exists only to push philosophies to their extremes and to prevent evasions. Your tone is getting unpleasant. EDIT: yes, this was before the ETA.
    2Richard_Kennaway10y
    I think you replied before my ETA. The LCPW is, in fact, not allowed to be pretty absurd. When pushed on one's interlocutors, it does not prevent evasions, it is an evasion.
    0alexg10y
    You're kind of missing the point here. I probably should have clarified my position more. The reason I want people to trust the justice system is so that people will not be inclined to commit crimes, because it would then be more likely (from their point of view) that, if they did, they would get caught. I suppose there is the issue of precedent to worry about, but the ultimate purpose of the justice system, from the consequentialist viewpoint, is to deter crimes (by either the offender it is dealing with or potential others), not to punish criminals. As the offender is, by assumption, unlikely to reoffend, everyone else's criminal behaviors are the main factor here, and these are minimised through the justice system's reputation. (I also should have added the assumption that attempts to convince people of the truth have failed). By prosecuting X you are achieving this purpose. The Least Convenient Possible World is the one where there's no third way, or additional factor I hadn't thought of, that lets you get out of this. Rationality is not about maximising the accuracy of your beliefs, nor the accuracy of others. It is about winning! EDIT: Grammar EDIT: The point is, if you would punish a guilty person for a stabler society, you ought to do the same to an innocent person, for the same benefit.
    1Richard_Kennaway10y
    This ignores the causal relationships. How is punishing the innocent supposed to create a stabler society? Because, in your scenario, it's just this once and no-one will ever know. But it's never just this once, and people (the judge, X, and Y at least) will know. As one might observe from a glance at the news from time to time. All you're doing is saying, "But what if it really was just this once and no-one would ever know?" To which the answer is, "How will you know?" To which the LCPW replies "But what if you did know?", engulfing the objection and Borgifying it into an extra hypothesis of your own. You might as well jump straight to your desired conclusion and say "But what if it really was Good, not Bad?" and you are no longer talking about anything in reality. Reality itself is the Least Convenient Possible World.
    0Richard_Kennaway10y
    I don't think you understand what "rationality is about winning" means. It is explained here, here, and here.
    0alexg10y
    Possibly I used it out of context. What I mean is that utility(less crime) > utility(society has an inaccurate view of the justice system) when the latter has few other consequences, and rationality is about maximising utility. Also, in the Least Convenient World, overall this trial will not affect any others, hence negating the point about the accuracy of the justice system. Here knowledge is not an end, it is a means to an end.
    2Richard_Kennaway10y
    See my reply to Roxolan.

    The disease characteristics are where this essay breaks down. Those don't really line up with any medical definition of disease. It seems like he redefines disease in order to deconstruct it a bit.

    5beoShaffer12y
    It does, however, fit with (my impressions of) the way people use the word in real life, which is far more relevant to the point of this article.
    2fubarobfusco12y
    Could you be more specific? Which characteristics do you dispute, and which other ones would you propose?
    1ACuriousMan12y
    Most, if not all, of them have nothing to do with what disease is. He is creating a definition out of whole cloth through his characteristics. disease /dis·ease/ (dĭ-zēz´): any deviation from or interruption of the normal structure or function of any body part, organ, or system that is manifested by a characteristic set of symptoms and signs and whose etiology, pathology, and prognosis may be known or unknown.
    8fubarobfusco12y
    Ah. I think you are looking for something different in definitions than Yvain is getting at here. Have you read the linked posts "Disguised Queries", "The Cluster Structure of Thingspace", and "Words as Hidden Inferences"? These might explain some of the difference.
    -4ACuriousMan12y
    And?
    2fubarobfusco12y
    Ah. I had assumed you were expressing curiosity, not merely contradiction. My mistake. Sorry about that.
    0ACuriousMan12y
    It isn't "mere contradiction". It is looking at what the writer is doing rhetorically and questioning the root of his argument. Again, his characteristics of disease have nothing to do with our medical understanding of disease. Disease means something rather specific in the medical profession, and just throwing up a bunch of characteristics based on nothing more than the writer's intuition (and with no supporting evidence) is a horrible foundation for an argument.

    Disease does mean something specific to doctors, but doctors aren't the only ones asking questions like "Is obesity really a disease?"

    And when people ask that question, what matters to them isn't really whether obesity matches the dictionary definition. In practice, it does boil down to trying to figure out whether the obesity should be treated medically, and whether obese people deserve sympathy. (On occasion, another question that is asked is "Does the condition need to be 'fixed' at all?")

    You can't answer these questions by checking the dictionary to see if obesity is a disease. In general, thinking of "disease" as a basic concept results in confusion. If you're not certain whether obesity is a disease, and what you really want to know is whether it should be treated medically, then the right thing to do is to first figure out "What about diseases makes medical intervention a good idea?" And then you figure out whether obesity satisfies the criteria you come up with.

    -5ACuriousMan11y

    I generally agree with your article, but it has at least one false premise:

    Something discrete; a graph would show two widely separate populations, one with the disease and one without, and not a normal distribution.

    But many undesirable conditions that are caused by genetic or environmental sources are continuous. Cancer is actually one of them, as far as I understand: there are many different kinds of cancer, and the symptoms can vary in severity (though all are fatal if left untreated). The common cold is another example, though of course it is rarely fatal.

    In the mental health area, the polar extreme from the pathology model is the "neurodiversity" model. The point about allowing treatment when it is available and effective, whether the treatment is an "enhancement" or a "cure", is also worthwhile.

    In the area of obesity, I think we are pretty open, as a society, to letting the evidence guide us. In the area of mental health, we are probably less so, although I do think that empirical evidence about the nature of homosexuality has been decisive in driving a dramatic change in pub... (read more)

    And then someone points out how bacteria might be involved in creating obesity.

    the sorts of thing you study in biology: proteins, bacteria, ions, viruses, genes.

    Ions->prions

    3wizzwizz45y
    You also study ions, though. You study ethene!

    It was an interesting read. I am a little confused about one aspect, though: determinist consequentialism.

    From what I read, it appears a determinist consequentialist believes it is 'biology all the way down', meaning all actions are completely determined biologically. So where does choice enter the equation, including the optimising function for the choice (the consequences)?

    Or are there some things that are not biologically determined, like whether to approve someone else's actions or not, while actions physically impacting others are themselves com... (read more)

    4RobinZ14y
    I think you might be confused on the matter of free will - it's not obvious that there is any conflict between determinism and choice.
    0Ganapati14y
    I used the word choice, but 'free will' will do as well. Was your response to my question biologically determined or was it a matter of conscious choice? Whether or not there is going to be another response to this comment of mine, would that have been completely determined biologically or would it be a matter of conscious choice by someone? If all human actions are determined biologically, the 'choice' is only an apparent one, like a tossed-up coin having a 'choice' of turning up heads or tails. Whether someone is a determinist or not should itself have been determined biologically, including all discussions of this nature!

    Was your response to my question biologically determined or was it a matter of conscious choice?

    The correct answer to this is "both" (and it is a false dichotomy). My consciousness is a property of a certain collection of matter which can be most compactly described by reference to the regularities we call "biology". Choosing to answer (or not to answer) is the result of a decision procedure arising out of the matter residing (to a rough approximation) in my braincase.

    The difference between me and a coin is that a coin is a largely homogenous lump of metal and does not contain anything like a "choice mechanism", whereas among the regularities we call "biology" we find some patterns that reliably allow organisms (and even machines) to steer the future toward preferred directions, and which we call "choosing" or "deciding".

    5Mitchell_Porter14y
    Do your choices have causes? Do those causes have causes? Determinism doesn't have to mean epiphenomenalism. Metaphysically, epiphenomenalism - the belief that consciousness has no causal power - is a lot like belief in true free will - consciousness as an uncaused cause - in that it places consciousness half outside the chain of cause and effect, rather than wholly within it. (But subjectively they can be very different.) Increase in consciousness increases the extent to which the causes of one's choices and actions are themselves conscious in origin rather than unconscious. This may be experienced as liberation from cause and effect, but really it's just liberation from unconscious causes. Choices do have causes, whether or not you're aware of them. This is a point which throws many people, but again, it comes from an insufficiently broad concept of causality. Reason itself has causes and operates as a cause. We can agree, surely, that absurdly wrong beliefs have a cause; we can understand why a person raised in a cult may believe its dogmas. Correct beliefs also have a cause. Simple Darwinian survival ensures that any conscious species that has been around for hundreds of thousands of years must have at least some capacity for correct cognition, however that is achieved. Nonetheless, despite this limited evolutionary gift, it may be true that we are deterministically doomed to fundamental error or ignorance in certain matters. Since the relationship of consciousness, knowledge, and reality is not exactly clear, it's hard to be sure.
    0Ganapati14y
    I don't equate determinism with epiphenomenalism; I hold that even when consciousness acts as a cause, it is completely determined, meaning the apparent choice is simply the inability, at the current level of knowledge, to predict exactly what choice will be made. Not sure how that follows. Evolutionary survival can say nothing about the emergence of sentient species, let alone some capacity for correct cognition in that species. If the popular beliefs and models of the universe until a few centuries ago were incorrect, that seems to point in the exact opposite direction of your claim. The problem seems to be one of 'generalisation from one example'. There exist beings with a consciousness that is not biologically determined, and there exist those whose consciousness is completely biologically determined. The former may choose determinism as a 'belief in belief' while the latter will see it as a fact, much like a self-aware AI.
    4prase14y
    That's true. And there is no problem with it. If the cognition was totally incorrect, leading to beliefs unrelated to the outside world, it would be only a waste of energy to maintain such cognitive capacity. Correct beliefs about certain things (like locations of food and predators) are without doubt a great evolutionary advantage. Yes, but it is very weak evidence (more so, if current models are correct). The claim stated that there was at least some capacity for correct cognition, not that the cognition is perfect. Can you explain the meaning? What are the former and what are the latter beings?
    2Ganapati14y
    Not sure what kind of cognitive capacity the dinosaurs held, but that they roamed around for millions of years and then became extinct seems to indicate that evolution itself doesn't care much about cognitive capacity beyond a point (which you already mentioned). You are already familiar with the latter, those whose consciousness is biologically determined. How do you expect to recognise the former, those whose consciousness is not biologically determined?
    2prase14y
    At least they probably didn't have a deceptive cognitive capacity. That is, they had few beliefs, but those few were more or less correct. I am not saying that an intelligent species is universally better at survival than a dumb species. I said that of two almost identical species with the same quantity of cognition (measured by brain size, or better, by its energy consumption or the number of distinct beliefs held) which differ only in quality of cognition (i.e. correspondence of beliefs and reality), the one which is easily deluded is at a clear disadvantage. Well, what I know about nature indicates that any physical system evolves in time respecting rigid deterministic physical laws. There is no strong evidence that living creatures form an exception. Therefore I conclude that consciousness must be physically, and therefore biologically, determined. I don't expect to be able to tell "deterministic creatures" from "non-deterministic creatures"; I simply expect the latter can't exist in this world. Or maybe I can't even imagine what it could possibly mean for consciousness to be not biologically determined. From my point of view, it could mean either a very bizarre form of dualism (consciousness is separated from the material world, but by chance it reflects correctly what happens in the material world), or it could mean that the natural laws aren't entirely deterministic. But I don't call the latter possibility "free will", I call it "randomness". Your line of thought reminds me of a class of apologetics which claim that if we have evolved by random chance, then there is no guarantee that our cognition is correct, and if our cognition is flawed, we are not able to recognise that we have evolved by random chance; therefore, holding the position that we have evolved by random chance is incoherent and God must have been involved in the process. I think this class of arguments is called "presuppositionalist", but I may be wrong. Whatever the name, the argument is a fallacy. That our cognition is
    -4Ganapati14y
    Unless the delusions are related to survival and procreation, I don't see how they would present any evolutionary disadvantage. Actually, there is plenty of evidence to show that living creatures require additional laws to be predicted. Darwinian evolution itself is not required to describe the physical world. However, what you probably meant was that there is no evidence that living creatures violate any physical laws, meaning the laws governing the living are potentially reducible to physical laws. Someone else looking at the exact same evidence can come to an entirely different conclusion: that we are actually on the verge of demonstrating what we always felt, that the living are more than physics. Both positions are based on something that has not yet been demonstrated, the only "evidence" for either lying with the individual, a case of generalisation from one example. Not at all. I was only questioning the logical consistency of an approach called 'determinist consequentialism'. Determinism implies a future that is predetermined and potentially predictable. Consequentialism would require a future that is not predetermined and is dependent on choices that we make now, either because of 'free will' or 'randomness'.
    1prase14y
    Forming and holding any belief is costly. The time and energy you spend forming delusions can be used elsewhere. An example would be helpful. I don't know what evidence you are speaking about. What is the difference between respecting physical laws and not violating them? Physical laws (and I am speaking mainly about the microscopic ones) determine the time evolution uniquely. Once you know the initial state in all detail, the future is logically fixed; there is no freedom for additional laws. That of course doesn't mean that predictions of the future are practically feasible or even easy. Consequentialism doesn't require either. The choices needn't be unpredictable in principle to be meaningful.
    0Ganapati14y
    Perhaps. But I do not see why that should present an evolutionary disadvantage if they do not impact survival and procreation. On the contrary, it could present an evolutionary advantage. A species that deluded itself into believing that it has been the chosen species might actually work energetically towards establishing its hegemony and gain an evolutionary advantage. The evidence was stated in the very next line: Darwinian evolution, something that is not required to describe the evolution of non-biological systems. Of course, none. The distinction I wanted to make was one between respecting/not-violating and being completely determined by. Nothing to differ on there as a definition of determinism. It was exactly the point I was making too. If biological systems, like us, are completely determined by physical laws, the apparent choice of making a decision by considering consequences is itself an illusion. In which case every choice every entity makes, regardless of how it arrives at it, is meaningful. In other words, there are no meaningless choices in the real world.
    2prase14y
    A large useless brain consumes a lot of energy, which means more dangerous hunting and faster consumption of supplies when food is insufficient. The relation to survival is straightforward. Sounds like group selection to me. And not much in accordance with observation. Although I don't believe the Jews believe in their chosenness on genetic grounds, even if they did, they aren't all that successful after all. Depends on the interpretation of "required". If it means that practically one cannot derive useful statements about trilobites from the Schrödinger equation, then yes, I agree. If it means that the laws of evolution are logically independent laws which we would need to keep even if we overcame all computational and data-storage difficulties, then I disagree. I expect you meant the first interpretation, given your last paragraph.
    0Ganapati14y
    Peacock tails reduce their survival chances. Even so, peacocks are around. As long as the organism survives until it is capable of procreation, any survival disadvantages don't pose an evolutionary disadvantage. I am more inclined towards the gene selection theory, not group selection. About the only species whose delusions we can observe is ourselves, so it is difficult to come up with any significant objective observational data. I didn't mean the Jews, I meant the human species. If delusions are not genetically determined, what would be their source, from a deterministic point of view?
    1prase14y
    The peacock tail's survival disadvantage isn't limited to the post-reproduction period. In order to explain the existence of the tails, it must be shown that their positive effect is greater than the negative. I don't dispute that a (probably large) part of the human brain's capacity is used in the peacock-tail manner as a signal of fitness. What I say is only that of two brains with the same energetic demands, the one with more correct cognition is at an advantage; their signalling value is the same, so any peacock mechanism shouldn't favour the deluded one. This doesn't constitute proof of the correctness of human cognition; perhaps (almost certainly) some parts of our brain's design are wrong in a way that no single mutation can repair, like the blind spot on the human retina. But the evolutionary argument for correctness can't be dismissed as irrelevant.
    -2Ganapati14y
    If delusions presented only survival disadvantages and no advantages, you would be right. However, that need not be the case. The delusion of an afterlife can co-exist with correct cognition in matters affecting immediate survival, and when it does, it can enhance survival chances. So evolution doesn't automatically lead to or enhance correct cognition. I am not saying correctness plays no role, but it isn't the sole deciding factor, at least not in the case of evolutionary selection.
    1CarlShulman14y
    This post is relevant.
    2Jack14y
    Huh? Presumably if the dinosaurs had the cognitive capacity and the opposable thumbs to develop rocket ships and divert incoming asteroids they would have survived. They died out because they weren't smart enough.
    3cousin_it14y
    I will side with Ganapati on this particular point. We humans are spending much more cognitive capacity, with much more success, on inventing new ways to make ourselves extinct than we do on asteroid defense. And dinosaurs stayed around much longer than us anyway. So the jury is still out on whether intelligence helps a species avoid extinction. prase's original argument still stands, though. Having a big brain may or may not give you a survival advantage, but having a big non-working brain is certainly a waste that evolution would have erased in mere tens of generations, so if you have a big brain at all, chances are that it's working mostly correctly. ETA: disregard that last paragraph. It's blatantly wrong. Evolution didn't erase peacock tails.
    3Jack14y
    The asteroid argument aside, it seems to me bordering on obvious that general intelligence is adaptive, even if taken to an extreme it can get a species into trouble. (1) Unless you think general intelligence is only helpful for sexual selection, it has to be adaptive or we wouldn't have it (since it is clearly the product of more than one mutation). (2) Intelligence appears to use a lot of energy, such that if it wasn't beneficial it would be a tremendous waste. (3) There are many obvious causal connections between general intelligence and survival. It enabled us to construct axes and spears, harness fire, communicate hunting strategies, pass down hunting and gathering techniques to the next generation, navigate status hierarchies, etc. All technologies that have fairly straightforward relations to increased survival. And the fact that we're doing more to invent new ways to kill ourselves than to protect ourselves can be traced pretty directly to collective action problems and a whole slew of evolved features other than intelligence that were once adaptive but have ceased to be -- tribalism most obviously.
    5JoshuaZ14y
    The fact that only a handful of species have high intelligence suggests that there are very few niches that actually support it. There's also evidence that human intelligence is due in large part to runaway sexual selection (like a peacock's tail). See Norretranders's "The Generous Man" for example. A number of biologists such as Dawkins take this hypothesis very seriously.
    4Jack14y
    That's an explanation for the increase in intelligence from apes to humans, and my comment was a lot about that, but the original disputed claim was that any conscious species that has been around for hundreds of thousands of years must have at least some capacity for correct cognition. And there are less complex adaptive behaviors that require correct cognition: identifying prey, identifying predators, identifying food, identifying cliffs, path-finding, etc. I guess there is an argument to be had about what counts as a 'conscious species', but that doesn't seem to be worthwhile. Also, there is a subtle difference between what human intelligence is due to and what the survival benefits of it are. It may have taken sexual selection to jump-start it, but our intelligence has made us far less vulnerable than we once were (with the exception of the problems we created for ourselves). Humans are rarely eaten by giant cats, for one thing. No species has intelligence as high as humans', but lots of species have high intelligence relative to, say, clams. --- Okay, that's a little facetious, but tool use has arisen independently throughout the animal kingdom again and again, not to mention the less complex behaviors mentioned above. Are people really disputing whether or not accurate beliefs about the world are adaptive? Or that intelligence increases the likelihood of having accurate beliefs about the world?
    4JoshuaZ14y
    Well, having more accurate beliefs only matters if you are an entity intelligent enough to generally act on those beliefs. To make an extreme case, consider the hypothetical of, say, an African Grey Parrot able to do calculus problems. Is that going to actually help it? I would suspect generally not. Or consider a member of a species that gains the accurate belief that it can sexually self-stimulate and then engages in that rather than mating. Here we have what is a non-adaptive trait (masturbation is a very complicated trait and so isn't non-adaptive in all cases, but one can easily see situations where it seems to be). Or consider a pair of married humans, Alice and Bob, who have kids that Bob believes are his. Then Bob finds out that his wife had an affair with Bob's brother Charlie and the kids are all really Charlie's. If Bob responds by cutting off support for the kids, this is likely non-adaptive. Indeed, one can take it a step further and suppose that Bob and Charlie are identical twins, so that Bob's actions are completely anti-adaptive. Your second point seems more reasonable. However, I'd suggest that intelligence increases the total number of beliefs one has about the world but that it may not increase the likelihood of beliefs being accurate. Even if it does, the number of incorrect beliefs is likely to increase as well. It isn't clear that the average ratio of correct beliefs to total beliefs is actually increasing (I'm being deliberately vague here in that it would likely be very difficult to measure how many beliefs one has without a lot more thought). A common ape may have no incorrect beliefs even as the common human has many incorrect beliefs. So it isn't clear that intelligence leads to more accurate beliefs. Edit: I agree that overall intelligence has been a helpful trait for human survival over the long haul.
    1thomblake14y
    That seems a likely area of dispute. Having accurate beliefs seems, ceteris paribus, to be better for you than inaccurate beliefs (though I can make up as many counterexamples as you'd like). But that still leaves open the question of whether it's better than no beliefs at all.
    1prase14y
    Dinosaurs weren't a single species, though. Maybe better compare dinosaurs to mammals than to humans.
    1Ganapati14y
    Or we could pick a particular species of dinosaur that survived for a few million years and compare it to humans. Do you expect any changes to the analysis if we did that?
    1cousin_it14y
    Nitpicking huh? Two can play at that game! 1. Maybe better compare mammals to reptiles than to dinosaurs. 2. Many individual species of dinosaurs have existed for longer than humans have. 3. Dinosaurs as a whole probably didn't go extinct, we see their descendants everyday as birds. Okay, this isn't much to argue about :-)
    3prase14y
    I love nitpicking! 1. Mammals are a clade while reptiles are paraphyletic. Well, dinosaurs are too when birds are excluded, but I would gladly leave the birds in. In any case, dinosaurs win over mammals, so it probably wasn't a good nitpick after all. 2. No dinosaur species lived alongside humans, so direct competition didn't take place. 3. I can't find a nit to pick here.
    0Ganapati14y
    Are you claiming that the human species will last a million years or more and not become extinct before then? What are the grounds for such a claim?
    1Thomas14y
    I don't think one should compare humans and dinos. Maybe mammals and dinos, or something like that. Many dinosaurs went extinct during the era; our ancestors were many different "species", successful enough that we are still around. As were some dinos, which gave birds to the Earth. Just a side note.
    4cousin_it14y
    Yep, your view is confused. The optimizing function is implemented in your biology, which is implemented in physics.
    -1Ganapati14y
    In other words, the 'choices' you make are not really choices, but already predetermined. You didn't really choose to be a determinist; you were programmed to select it once you encountered it.
    4cousin_it14y
    Yep, kind of. But your view of determinism is too depressing :-) My program didn't know in advance what options it would be presented with, but it was programmed to select the option that makes the most sense, e.g. the determinist worldview rather than the mystical one. Like a program that receives an array as input and finds the maximum element in it, the output is "predetermined", but it's still useful. Likewise, the worldview I chose was "predetermined", but that doesn't mean my choice is somehow "wrong" or "invalid", as long as my inner program actually implements valid common sense.
    -4Ganapati14y
    You couldn't possibly know that! Someone programmed to pick the mystical worldview would feel exactly the same, and would have been programmed not to recognise his/her own programming too :-) Of course the output is useful, for the programmer, if any :-) It appears that regardless of what someone has been programmed to pick, the 'feelings' are not any different.
    4cousin_it14y
    If my common sense is invalid and just my imagination, then how in the world do I manage to program computers successfully? That seems to be the most objective test there is, unless you believe all computers are in a conspiracy to deceive humans.
    0Ganapati14y
    I program computers successfully too :-)
    -1Ganapati14y
    Just to clarify, in a deterministic universe, there are no "invalid" or "wrong" things. Everything just is. Every belief and action is just as valid as any other because that is exactly how each of them has been determined to be.
    6cousin_it14y
    No, this belief of yours is wrong. A deterministic universe can contain a correct implementation of a calculator that returns 2+2=4 or an incorrect one that returns 2+2=5.
    -1Ganapati14y
    Sure it can. But it is possible to declare one of them as valid only because you are outside of both and you have a notion of what the result should be. But to avoid the confusion over the use of words I will restate what I said earlier slightly differently. In a deterministic universe, neither of a pair of opposites like valid/invalid, right/wrong, true/false etc has more significance than the other. Everything just is. Every belief and action is just as significant as any other because that is exactly how each of them has been determined to be.
    2cousin_it14y
    I thought about your argument a bit and I think I understand it better now. Let's unpack it. First off, if a deterministic world contains a (deterministic) agent that believes the world is deterministic, that agent's belief is correct. So no need to be outside the world to define "correctness". Another matter is verifying the correctness of beliefs if you're within the world. You seem to argue that a verifier can't trust its own conclusion if it knows itself to be a deterministic program. This is debatable - it depends on how you define "trust" - but let's provisionally accept this. From this you somehow conclude that the world and your mind must be in fact non-deterministic. To me this doesn't follow. Could you explain?
    1[anonymous]14y
    So your argument against determinism is that certain things in your brain appear to have "significance" to you, but in a deterministic world that would be impossible? Does this restatement suffice as a reductio ad absurdum, or do I need to dismantle it further?
    0[anonymous]14y
    I'm kind of confused about your argument. Sometimes I get a glimpse of sense in it, but then I notice some corollary that looks just ridiculously wrong and snap back out. Are you saying that the validity of the statement 2+2=4 depends on whether we live in a deterministic universe? That's a rather extreme form of belief relativism; how in the world can anyone hope to convince you that anything is true?
    3Vladimir_Nesov14y
    The only way that choices can be made is by being predetermined (by your decision-making algorithm). Paraphrasing the familiar wordplay, choices that are not predetermined refer to decisions that cannot be made, while the real choices, that can actually be made, are predetermined.
    2Blueberry14y
    I like this phrasing; it makes things very clear. Are you alluding to this quote, or something else?
    1Vladimir_Nesov14y
    Yes.
    0Ganapati14y
    Of course! Since all the choices of all the actors are predetermined, so is the future. So what exactly would be the "purpose" of acting as if the future were not already determined and we can choose an optimising function based on the possible consequences of different actions?
    8Vladimir_Nesov14y
    Since the consequences are determined by your algorithm, whatever your algorithm will do, will actually happen. Thus, the algorithm can contemplate what would be the consequences of alternative choices and make the choice it likes most. The consideration of alternatives is part of the decision-making algorithm, which gives it the property of consistently picking goal-optimizing decisions. Only these goal-optimizing decisions actually get made, but the process of considering alternatives is how they get computed.
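    As a toy illustration of this (a sketch with invented options and utilities, not anything from the thread): the output of the program below is fully determined by its inputs and its code, yet the loop over alternatives is literally how that output gets computed.

        def choose(options, consequences_of, utility_of):
            """Deterministically pick the option whose consequences score highest."""
            best_option, best_utility = None, float("-inf")
            for option in options:                       # contemplate each alternative
                u = utility_of(consequences_of(option))  # evaluate its consequences
                if u > best_utility:
                    best_option, best_utility = option, u
            return best_option                           # the one choice that actually gets made

        # Example: a thermostat-like agent choosing a heater setting.
        options = ["off", "low", "high"]
        resulting_temperature = {"off": 15, "low": 20, "high": 30}
        target = 21
        print(choose(options, resulting_temperature.get, lambda t: -abs(t - target)))  # "low"

    Only "low" ever gets chosen, so in that sense the decision is predetermined; but delete the loop that considers "off" and "high" and you have deleted the very computation by which "low" is selected.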
    -3Ganapati14y
    Sure. So consequentialism is the name for the process that happens in every programmed entity, making it useless to distinguish between two different approaches.
    2RobinZ14y
    In a deterministic universe, the future is logically implied by the present - but you're in the present. The future isn't fated - if, counterfactually, you did something else, then the laws of physics would imply very different events as a consequence - and it isn't predictable - even ignoring computational limits, if you make any error, even on an unmeasurable level, in guessing the current state, your prediction will quickly diverge from reality - it's just logically consistent.
    0Ganapati14y
    How could it happen? Each component of the system is programmed to react in a predetermined way to the inputs it receives from the rest of the system. The inputs are predetermined, as is the processing algorithm. How can you or I do anything that we have not been preprogrammed to do? Consider an isolated system with no biological agents involved. It may contain preprogrammed computers. Would you or would you not expect the future evolution of the system to be completely determined? If you would expect its future to be completely determined, why would things change when the system, such as ours, contains biological agents? If you do not expect the future of the system to be completely determined, why not?
    3RobinZ14y
    I said "counterfactual". Let me use an archetypal example of a free-will hypothetical and query your response: I'm off to the market, now - I'll post the followup in a moment.
    0RobinZ14y
    Now: I imagine most people would say that Alice would receive the fettucini and Alice' the eggplant. I will proceed on this assumption. Now suppose that Alice and Alice' are switched at the moment they enter the restaurant. Neither Alice nor Alice' notices any change. Nobody else notices any change, either. In fact, insofar as anyone in universe A (now containing Alice') and universe A' (now containing Alice) can tell, nothing has happened. After the switch, Alice' and Alice are seated, open their menus, and pick their orders. What dishes will Alice' and Alice receive?
    5Blueberry14y
    I'm missing the point of this hypothetical. The situation you described is impossible in a deterministic universe. Since we're assuming A and A' are identical at the beginning, what Alice and Alice' order is determined from that initial state. The divergence has already occurred once the two Alices order different things: why does it matter what the waiter brings them? I'm not sure exactly how these universes would work: it seems to be a dualistic one. Before the Alices order, A and A' are physically identical, but the Alices have different "souls" that can somehow magically change the physical makeup of the universe in strangely predictable ways. The different nature of Alice and Alice' has changed the way two identical sets of atoms move around. If this applies to the waiter as well, we can't predict what he'll decide to bring Alice: for all we know he may turn into a leopard, because that's his nature.
    0RobinZ14y
    The requirement is not that there is no divergence, but that the divergence is small enough that no-one could notice the difference. Sure, if a superintelligent AI did a molecular-level scan five minutes before the hypothetical started it would be able to tell that there was a switch, but no such being was there. And the point of the hypothetical is that the question "what if, counterfactually, Alice ordered the eggplant?" is meaningful - it corresponds to physically switching the molecular formation of Alice with that of Alice' at the appropriate moment.
    3Blueberry14y
    I understand now. Sorry; that wasn't clear from the earlier post. This seems like an intuition pump. You're assuming there is a way to switch the molecular formation of Alice's brain to make her order one dish, instead of another, but not cause any other changes in her. This seems unlikely to me. Messing with her brain like that may cause all kinds of changes we don't know about, to the point where the new person seems totally different (after all, the kind of person Alice was didn't order eggplant). While it's intuitively pleasing to think that there's a switch in her brain we can flip to change just that one thing, the hypothetical is begging the question by assuming so. Also, suppose I ask "what if Alice ordered the linguine?" Since there are many ways to switch her brain with another brain such that the resulting entity will order the linguine, how do you decide which one to use in determining the meaning of the question?
    4RobinZ14y
    I know - I didn't phrase it very well. Yes, yes it is. I'm not sure. My instinct is to try to minimize the amount the universes differ (maybe taking some sort of sample weighted by a decreasing function of the magnitude of the change), but I don't have a coherent philosophy built around the construction of counterfactuals. My only point is that determinism doesn't make counterfactuals automatically meaningless.
    -4Ganapati14y
    The elaborate hypothetical is the equivalent of asking: what if the programming of Alice had been altered in a minor way, that nobody notices, to order eggplant parmesan instead of the fettucini alfredo which her earlier programming would have made her order? Since there is no agent external to the world that can do it, there is no possibility of that happening. Or it could mean that any minor changes from the predetermined program are possible in a deterministic universe as long as nobody notices them, which would imply an incompletely determined universe.
    5RobinZ14y
    ... Ganapati, the counterfactual does not happen. That's what "counterfactual" means - something which is contrary to fact. However, the laws of nature in a deterministic universe are specified well enough to calculate the future from the present, and therefore should be specified well enough to calculate the future* from some modified present*, even if no such present* occurs. The answer to "what would happen if I added a glider here to this frame of a Conway's Life game?" has a defined answer, even though no such glider will be present in the original world.
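    A sketch of that last point (the particular grid contents are made up for illustration): the modified frame never occurs, yet its future is perfectly well defined, because the same deterministic update rule applies to it.

        from collections import Counter

        def life_step(live):
            """One step of Conway's Life on a set of live (x, y) cells."""
            neighbour_counts = Counter(
                (x + dx, y + dy)
                for (x, y) in live
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
            )
            return {cell for cell, n in neighbour_counts.items()
                    if n == 3 or (n == 2 and cell in live)}

        actual_frame = {(10, 10), (10, 11), (11, 10), (11, 11)}   # a stable 2x2 block
        glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}         # never actually present

        actual_future = life_step(actual_frame)
        counterfactual_future = life_step(actual_frame | glider)  # the "what if" frame
        print(actual_future == counterfactual_future)             # False: the answers differ

    Nothing about determinism stops us from computing the counterfactual; determinism only tells us which of the two frames is the factual one.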
    0Vladimir_Nesov14y
    Why would you be interested in something that can't occur in the real world?
    4RobinZ14y
    In the "free will" case? Because I want the most favorable option to be factual, and in order to prove that, I need to be able to deduce the consequences of the unfavorable options.
    6Vladimir_Nesov14y
    What? Not prove, implement. You are not rationalizing the best option as being the actual one, you are making it so. When you consider all those options, you don't know which ones of them are contrary to fact, and which ones are not. You never consider something you know to be counter-factual.
    1RobinZ14y
    Yes, that's a much better phrasing than mine. (p.s. you realize that I am having an argument with Ganapati about the compatibility of determinism and free will in this thread, right?)
    -4Ganapati14y
    Actually you brought in the counterfactual argument to attempt to explain the significance (or "purpose") of an approach called consequentialism (as opposed to others) in a determined universe.
    5RobinZ14y
    Allow me the privilege of stating my own intentions.
    -1Ganapati14y
    You brought up the counterfactualism example right here, so I assumed it was in response to that post.
    1RobinZ14y
    I'm sorry, do you have an objection to the reading of "counterfactual" elaborated in this thread?
    -3Ganapati14y
    Sorry for the delay in replying. No, I don't have any objection to the reading of the counterfactual. However, I fail to connect it to the question I posed. In a determined universe, the future is completely determined whether any conscious entity in it can predict it or not. No actions, considerations, beliefs of any entity have any more significance on the future than those of another simply because they cannot alter it. Determinism, like solipsism, is a logically consistent system of belief. It cannot be proven wrong any more than solipsism can be, since the only "evidence" disproving it, if any, lies with the entity believing it, not outside. Do you feel that you are a purposeless entity whose actions and beliefs have no significance whatsoever on the future? If so, your feelings are very much consistent with your belief in determinism. If not, it may be time to take into consideration the evidence in the form of your feelings. Thank you all for your time!

    In a determined universe, the future is completely determined whether any conscious entity in it can predict it or not. No actions, considerations, beliefs of any entity have any more significance on the future than those of another simply because they cannot alter it. [emphasis added]

    Wrong. If Alice orders the fettucini in world A, she gets fettucini, but if Alice' orders eggplant in world A, she gets eggplant. The future is not fixed in advance - it is a function of the present, and your acts in the present create the future.

    There's an old Nozick quote that I found in Daniel Dennett's Elbow Room: "No one has ever announced that because determinism is true thermostats do not control temperature." Our actions and beliefs have exactly the same ontological significance as the switching and setting of the thermostat. Tell me in what sense a thermostat does not control the temperature.

    4red7514y
    Correction: Ganapati is partially right. In a deterministic universe (DU), the initial conditions define all of history from beginning to end, by definition. If it is predetermined that Alice will order fettucini, she will order fettucini. But it doesn't mean that Alice must order fettucini. I'll elaborate on that further.

    1. No one inside a DU can precisely predict the future. Proof: suppose we can exactly predict the future; then either A) we can change it, thus proving that the prediction was incorrect, or B) we can't change it a bit. How can case B be the case? It can't. A prediction brings information about the future, and so it changes our actions. Let p be a prediction, and F(p) be the prediction once we know prediction p. For case B to be possible, the function F must have a fixed point p' = F(p'), but information from the future brings entropy, which causes future entropy to increase, which increases the prediction's entropy, and so on. Thus there cannot be a fixed point. QED.

    2. Given 1, no one can be sure that his or her actions are predetermined to vanish. On the other hand, if one decides to abstain from acting, then it is more likely that he or she is predetermined to fail; thus his or her actions (if any) have less probability of affecting the future. On the third hand, if one stands up and wins, only then will one know that one was predetermined to win, not a second earlier.

    3. If Alice cannot decide what she likes more, she cannot just say "Oh! I must eat fettucini. It is my fate." She hasn't got, and cannot have, such information in principle. She must decide for herself, determinism or not. And if an external observer (let's call him god) were to come down and say to Alice "It's your fate to eat fettucini" (thus effectively making the deterministic universe non-deterministic), no physical law would force Alice to do it.
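    As a toy illustration of the fixed-point condition in point 1 (the two agents and their options are invented for the example, and it sketches only the fixed-point framing, not the entropy argument): an announced prediction p is self-consistent only if acting on knowledge of p still produces p, i.e. p = F(p).

        def contrarian(prediction):
            """Agent that, on hearing a prediction, does the opposite."""
            return "stay" if prediction == "leave" else "leave"

        def conformist(prediction):
            """Agent that simply does whatever was predicted."""
            return prediction

        def fixed_points(agent, options=("stay", "leave")):
            """Predictions that remain correct even after being announced."""
            return [p for p in options if agent(p) == p]

        print(fixed_points(contrarian))  # []                 -- no announceable correct prediction
        print(fixed_points(conformist))  # ['stay', 'leave']  -- any announcement comes true

    Case B in the argument above requires the world to behave like the conformist rather than the contrarian.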
    RobinZ
    I'd like to dispute your usage of "predetermined" there: like "fated", it implies an establishment in advance, rather than by events. A game of Agricola is predetermined to last 14 turns, even in a nondeterministic universe, because no change to gameplay at any point during the game will cause it to terminate before or after the 14th turn. The rules say 14, and that's fixed in advance. (Factors outside the game may cause mistakes to be made or the game not to finish, but those are both different from the game lasting 13 or 15 turns.) On the opposite side, an arbitrary game of chess is not predetermined to last (as that one did) 24 turns, even in a deterministic universe, because a (counterfactual) change to gameplay could easily cause it to last fewer or more. If one may determine without knowing Alice's actions what dish she will be served (e.g. if the eggplant is spoiled), then she may be doomed to get that dish, but in that case the (deterministic or nondeterministic) causal chain leading to her dish does not pass through her decision. And that makes the difference.
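
    A small sketch of the distinction being drawn (the games, move lists, and function names are illustrative assumptions, not part of the comment): "predetermined" means the outcome is invariant under counterfactual changes to play, whereas a merely determined outcome still depends on what is actually done.

```python
# Toy illustration (hypothetical): predetermination as counterfactual invariance.

def agricola_length(moves) -> int:
    # The rules fix the game at 14 rounds no matter what anyone plays.
    return 14

def chess_length(moves) -> int:
    # The length is just however long the players actually played;
    # change the play and you change the length.
    return len(moves)

actual = ["e4", "e5", "Nf3"]
counterfactual = ["d4", "d5"]

print(agricola_length(actual) == agricola_length(counterfactual))  # True: predetermined
print(chess_length(actual) == chess_length(counterfactual))        # False: depends on play
```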
    red75
    I'm not sure I sufficiently understand you. "Fated" implies that no matter what one does, one will end up as fate dictates, right? In other words: in all counterfactual universes one's fate is the same. The predetermination I speak of is different. It is a property of a deterministic universe: all events are determined by the initial conditions only. When Alice decides what she will order, she can construct in her mind a bunch of different universes, and predetermination doesn't mean that in all of those constructed universes she will get fettucini; predetermination means that only one constructed universe will be factual. As I proved in my previous comment, Alice cannot know in advance which constructed universe is factual. Alice cannot know that she's in universe A, where she's predetermined to eat fettucini, or in universe B, where she's to eat eggplant. And her decision process is an integral part of each of these universes. Without her decision, universe A cannot be universe A. So her decision is a crucial part of the causal chain. Did I answer your question? Edit: spellcheck.
    RobinZ
    I don't like the connotations, but sure - that's a mathematically consistent definition.
    RobinZ
    P.S. Welcome to Less Wrong! Besides posts linked from the "free will" Wiki page - particularly How An Algorithm Feels From Inside - you may be interested in browsing the various Sequences. The introductory sequence on Map and Territory is a good place to start. Edit: You may also try browsing the backlinks from posts you like - that's how I originally read through EY's archive.
    Ganapati
    Thanks! I read the links and sequences.
    Jack
    Not in one day you didn't.
    Ganapati
    I didn't read them in one day, and not all of them either. I 'stumbled upon' this article on the night of June 1 (GMT+5.30) and did a bit of searching on the site to check whether my question had been previously raised and answered. In the process I did end up reading a few articles and sequences.
    [anonymous]

    Very good article!

    A couple of comments:

    So here, at last, is a rule for which diseases we offer sympathy, and which we offer condemnation: if giving condemnation instead of sympathy decreases the incidence of the disease enough to be worth the hurt feelings, condemn; otherwise, sympathize.

    Almost agreed. It is also important to recheck criterion 1 ("Something unpleasant; when you have it, you want to get rid of it") to see whether reducing the incidence of the disease is actually a worthwhile goal.

    On another note

    Cancer satisfies every one of these criteria

    ...

    Nicely done. (If I had anything else to add, I would add it.)

    PracticalEthicsNews.com has a few recent posts, a talk, and an interview about whether addiction is a disease. It becomes quite obvious that there is always more at stake in these debates than just the appropriate definition of a medical concept.

    In this concept, people make decisions using their free will, a spiritual entity operating free from biology or circumstance.

    Does that mean naturalistic theories of free will, like Robert Kane's, are false by definition?