
Comment author: Thrasymachus 10 December 2016 06:56:27AM 3 points [-]

The line of reply I tend to make here goes something along these lines:

If you're looking at a veil of ignorance, and choosing between world A or world B, it seems you should also be veiled as to whether or not you exist. Whatever probability p you have of existing in world B, you have probability 10000p of existing in world A, because it has 10000x the number of existing people. So opting for world A seems a much better bet.
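A minimal sketch of the arithmetic (u_A and u_B are my labels for the average welfare of an existing person in each world; they are not in the original comment):

```latex
% Expected welfare behind the veil, once existence itself is veiled.
% p = chance of existing in world B; world A has 10000x as many people.
\mathbb{E}[\text{choose } A] = 10000\,p \cdot u_A
\qquad
\mathbb{E}[\text{choose } B] = p \cdot u_B
% So A is the better bet unless u_B exceeds 10000 * u_A.
```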

As an aside, average views tend not to be considered 'goers' by those who specialise in population ethics, but for different reasons:

1) It looks weird with negative numbers. You can improve a world where everyone has a life not worth living (-10, say) by adding people whose lives are also not worth living, merely not quite as bad (e.g. -9).

2) It also looks pretty weird with positive numbers. Mere addition seems like it should be okay, even if it drags down the average.

3) Inseparability also looks costly. If the averaging is over all beings, then the decision as to whether humanity should euthanise itself becomes intimately sensitive to whether there's an alien species in the next supercluster who are blissfully happy.

Comment author: Thrasymachus 22 April 2016 10:21:07PM -1 points [-]

I'm not sure. It seems the way to test whether there is sleepwalk bias is to try to gather a representative sample of predictions/warnings and see how they turned out. Yet this is pretty hard to do: I can think of examples (like those mentioned in the post) where the disaster was averted, but I can think of others where the disaster did happen despite warnings (I'd argue climate change fits into this category, for example).

Comment author: Vaniver 09 January 2016 04:59:19PM *  3 points [-]

Formatting note: the brackets for links are greedy, so you need to escape them with a \ to avoid a long link.

[Testing] a long [link](https://www.google.com/)

Testing] a long [link

\[Testing\] a short [link](https://www.google.com/)

[Testing] a short link


principally because health is so important for our life and happiness we're less willing to sacrifice it to preserve face (I'd wager it is an even better tax on bs than money).

I agree that I expect people to be more willing to trade money for face than health for face. I think the system is slanted too heavily towards face, though.

I should also point out that this is mostly a demand-side problem. If it were only a supply-side problem, MetaMed could have won, but it's not--people are interested in face more than they're interested in health (see the example of the outdated brochure that was missing the key medical information, but looked the way a medical brochure is supposed to look).

It'd be surprising for IBM to unleash Watson on a very particular aspect of medicine (therapeutic choice in oncology) if simple methods could beat doctors across most of the board.

My understanding is that this is correct for the simple techniques, but incorrect for the complicated techniques. That is, you're right that a single linear regression can't replace a GP, but an NLP engine plus a twenty-questions bot plus a causal network probably could. (I unfortunately don't have any primary sources at hand; medical diagnostics is an interest, but most of the academic citations I know are for machine diagnostics, since that's what my research was in.)

I should also mention that, from the ML side, the technical innovation of Watson is in the NLP engine. That is, a patient could type English into a keyboard and Watson would mostly understand what they're saying, instead of needing a nurse or doctor to translate the English into the format needed by the diagnostic tool. The main challenge with uptake of the simple techniques historically was that they only did the final computation, but most of the work in diagnostics is collecting the information from the patient. And so if the physician is 78% accurate and the linear regression is 80% accurate, is it really worth running the numbers for those extra two percentage points?
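To make concrete how small the 'final computation' is relative to the intake work, here is a minimal sketch of the sort of simple technique at issue: a plain logistic regression over structured intake features. Everything below (features, weights, data) is synthetic and illustrative, not drawn from any real study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic intake data: four structured features per patient (think age,
# blood pressure, BMI, symptom score -- all invented for illustration).
n = 5000
X = rng.normal(size=(n, 4))
true_w = np.array([1.0, 0.5, 1.5, -0.8])   # invented 'true' risk weights
p = 1.0 / (1.0 + np.exp(-(X @ true_w)))    # underlying probability of disease
y = rng.random(n) < p                      # observed diagnoses

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The 'simple technique': a linear model over those features. The final
# computation is trivial; the hard part in real medicine is eliciting the
# features from the patient in the first place.
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.1%}")
```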

From a business standpoint, I think it's obvious why IBM is moving slowly; just like with self-driving cars, the hard problems are primarily legal and social, not technical. Even if Watson has half the error rate of a normal doctor, the legal liability status is very different, just like a self-driving car that has half the error rate of a human driver would result in more lawsuits for the manufacturer, not fewer. As well, if the end goal is to replace doctors, the right way to do that is to imperceptibly hand more and more work over to the machines, not to jump out of the gate with a "screw you, humans!"

I agree this should have happened sooner: that Atul Gawande's surgical checklist happened within living memory is amazing, but it is catching on, and (mildly against Hansonian explanations) has been propelled by better outcomes.

So, just like the Hansonian view of Effective Altruism is that it replaces Pretending to Try not with Actually Trying but with Pretending to Actually Try, if there is sufficient pressure to pretend to care about outcomes then we should expect people to move towards better outcomes, as their pretending involves nonzero effort.

But I think you can look at the historical spread of anesthesia vs. the historical spread of antiseptics to get a sense of the relative importance of physician convenience and patient outcomes. (This is, I think, a point brought up by Gawande.)


I think I agree with your observations about MetaMed's competition, but not necessarily with your interpretation. That is, MetaMed could easily have failed both because its competition was strong and because its customers weren't willing to pay for its services. I put more weight on the latter because the experience MetaMed reported was mostly not "X doesn't want to pay $5k for what they can get for free from NICE" but "X agrees that this is worth $100k to them, but would like to only pay me $5k for it." (This could easily be a selection effect, where everyone who would choose NICE instead is silent about it.)


However, this data by and large does not exist: much of medicine is still at the stage of working out whether something works generally, rather than delving into differential response and efficacy. It is not clear it ever will - humans might be sufficiently similar to one another that for almost all of them one treatment will be the best. The general success of increasing protocolization in medicine is some further weak evidence of this point.

This is why I'm most optimistic about machine medicine: instead of going to a doctor (who is tired / stressed / went to medical school twenty years ago and only sort of keeps up), you go to the interactive NICE protocol bot, which asks you questions, looks at your SNPs and your tracked weight/heart rate/steps/sleep data, calls in a nurse or technician to investigate any specific issue, diagnoses the issue and prescribes treatment, then follows up and adjusts its treatment outcome expectations accordingly.

Comment author: Thrasymachus 26 January 2016 07:46:17PM *  1 point [-]

(Sorry for delay, and thanks for the formatting note.)

My knowledge is not very up to date re. machine medicine, but I did get to play with some of the commercially available systems, and I wasn't hugely impressed. There may be much more impressive results yet to be released commercially, but (appealing back to my priors) I think I would have heard of them, as they would be a gamechanger for global health. Also, if the fairly advanced knowledge work of primary care can be done by computer, I'd expect a lot of jobs without the protective features of medicine to be automated.

I agree that machine medicine along the lines you suggest will be superior to human performance, and I anticipate this will be achieved fairly soon (even if I am right that it hasn't already happened). I think medicine will survive less by the cognitive skill required than through technical facility and social interaction, where machines comparatively lag (of course, I anticipate they will steadily get better at these too).

I grant a Hansonian account can accommodate the sort of 'guided by efficacy' data I suggest via 'pretending to actually try' considerations, but I would suggest this almost becomes an epicycle: any data which supports medicine being about healing can be explained away by the claim that it is only pretending to be about healing as a circuitous route to signalling. I would say the general ethos of medicine (EBM, proliferation of trials) looks like a pro tanto reason in favour of it being about healing, and divergence from this (e.g. what happened to Semmelweis, other lags) is better explained by doctors being imperfect and selfish, and patients irrational, rather than both parties adeptly following a signalling account.

But I struggle to see what evidence could neatly distinguish between these cases. If you have an idea, I'd be keen to hear it. :)

I agree with the selection worry re. MetaMed's customers: they are also presumably selected from people whom modern medicine didn't help, which may have further effects (not to mention making MetaMed's task harder, as their pool will be harder to treat than the unselected-for-failure cases who see the doctor 'first line'). I'd also (with all respect meant to the staff of MetaMed) suggest they may not be the most objective sources on why it failed: I'd guess people would prefer to say their startup failed because of the market or product-market fit, rather than 'actually, our product was straight worse than our competitors'.

Comment author: gwern 24 January 2016 07:33:57PM 5 points [-]
Comment author: Thrasymachus 26 January 2016 07:25:11PM 1 point [-]

I put an inline link in the post. Have I missed a norm about putting related posts I have written in the post more prominently?

Beware surprising and suspicious convergence

14 Thrasymachus 24 January 2016 07:13PM

[Cross]

Imagine this:

Oliver: … Thus we see that donating to the opera is the best way of promoting the arts.

Eleanor: Okay, but I’m principally interested in improving human welfare.

Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too.

Generally, what is best for one thing is usually not the best for something else, and thus Oliver’s claim that donations to the opera are best for both the arts and human welfare is surprising. We may suspect bias: that Oliver’s claim that the opera is best for human welfare is primarily motivated by his enthusiasm for opera and a desire to find reasons in its favour, rather than a cooler, more objective search for what is really best for human welfare.

The rest of this essay tries to better establish what is going on (and going wrong) in cases like this. It is in three parts: the first looks at the ‘statistics’ of convergence - in what circumstances is it surprising to find one object judged best by the lights of two different considerations? The second looks more carefully at the claim of bias: how it might be substantiated, and how it should be taken into consideration. The third returns to the example given above, and discusses the prevalence of this sort of error ‘within’ EA, and what can be done to avoid it.

Varieties of convergence

Imagine two considerations, X and Y, and a field of objects to be considered. For each object, we can score it by how well it performs by the lights of the considerations of X and Y. We can then plot each object on a scatterplot, with each axis assigned to a particular consideration. How could this look?

[Figure: scatterplots illustrating no, weak, and strong convergence]

At one extreme, the two considerations are unrelated, and thus the scatterplot shows no association. Knowing how well an object fares by the lights of one consideration tells you nothing about how it fares by the lights of another, and the chance that the object that scores highest on consideration X also scores highest on consideration Y is very low. Call this no convergence.

At the other extreme, considerations are perfectly correlated, and the ‘scatter’ plot has no scatter, but rather a straight line. Knowing how well an object fares by consideration X tells you exactly how well it fares by consideration Y, and the object that scores highest on consideration X is certain to be scored highest on consideration Y. Call this strong convergence.

In most cases, the relationship between two considerations will lie between these extremes: call this weak convergence. One example is general physical fitness: how fast one can run and how far one can throw are somewhat correlated. Another is intelligence: different mental abilities (pitch discrimination, working memory, vocabulary, etc.) all correlate somewhat with one another.

More relevant to effective altruism, there also appears to be weak convergence between different moral theories and different cause areas. What is judged highly by (say) Kantianism tends to be judged highly by Utilitarianism: although there are well-discussed exceptions to this rule, both generally agree that (among many examples) assault, stealing, and lying are bad, whilst kindness, charity, and integrity are good.(1) In similarly broad strokes, what is good for (say) global poverty is generally good for the far future, and the same applies between any two ‘EA’ cause areas.(2)

In cases of weak convergence, points will form some sort of elliptical scatter, and knowing how an object scores on X does tell you something about how well it scores on Y. If you know that something scores highest for X, your expectation of how it scores for Y should go up, and the chance that it also scores highest for Y should increase. However, the absolute likelihood of it being best for both X and Y remains low, for two main reasons:

[Figure: trade-offs producing divergence at the far tail]

Trade-offs: Although considerations X and Y are generally positively correlated, there might be a negative correlation at the far tail, due to attempts to optimize for X or Y at disproportionate expense to the other. Although in the general population running and throwing will be positively correlated with one another, elite athletes may optimize their training for one or the other, and thus those who specialize in throwing and those who specialize in running diverge. In a similar way, we may believe there is scope for similar optimization when it comes to charities or cause selection.

[Figure: regression to the mean between weakly correlated considerations]

Chance: (cf.) Even in cases where there are no trade-offs, as long as the two considerations are somewhat independent, random fluctuations will usually ensure the best by consideration X will not be best by consideration Y. That X and Y only weakly converge implies other factors matter for Y besides X. For the single object that is best for X, there will be many more that are not best for X (but still very good), and out of this large number it is likely one will do very well on these other factors and end up the best for Y overall. Inspection of most pairs of correlated variables confirms this: those with higher IQ scores tend to be wealthier, but the very smartest aren’t the very wealthiest (and vice versa); serving fast is good for tennis, but the very fastest servers are not the best players (and vice versa); and so on. Graphically speaking, most scatter plots bulge in an ellipse rather than sharpen to a point.
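A quick simulation (my construction, not from the original post) makes the point concrete: model the two scores as jointly normal with correlation rho, and ask how often the object that tops X also tops Y. The factors listed below can all be read off the same setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_best_on_both(rho, n_objects=1000, trials=1000):
    """Chance that the object scoring highest on consideration X also
    scores highest on consideration Y, for jointly normal scores with
    correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    hits = 0
    for _ in range(trials):
        scores = rng.multivariate_normal([0.0, 0.0], cov, size=n_objects)
        hits += scores[:, 0].argmax() == scores[:, 1].argmax()
    return hits / trials

for rho in [0.0, 0.5, 0.9, 0.99]:
    print(f"rho = {rho:4.2f}: P(top on X is also top on Y) ~ {p_best_on_both(rho):.3f}")
```

Even at rho = 0.9, with 1,000 objects the top scorer on X is usually not the top scorer on Y.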

The following features make a single object scoring highest on two considerations more likely:

  1. The smaller the population of objects. Were the only two options available to Oliver and Eleanor “Give to the Opera” and “Punch people in the face”, it would be unsurprising for the former to come top for many considerations.
  2. The strength of their convergence. The closer the correlation moves to collinearity, the less surprising it is to find one thing best for both. It is less surprising that the best at running 100m is also best at running 200m, but much more surprising if it transpired they threw the discus best too.
  3. The ‘wideness’ of the distribution. The heavier the tails, the more likely a distribution is to be stretched out and ‘sharpen’ to a point, and the less likely bulges either side of the regression line are to be populated. (I owe this to Owen Cotton-Barratt)

In the majority of cases (including those relevant to EA), there is a large population of objects and only weak convergence, and so (pace the often heavy-tailed distributions implicated) it is uncommon for one thing to be best by the lights of two weakly converging considerations.

Proxy measures and prediction

Suppose we have nothing to go on to judge what is good for Y save knowing what is good for X. Our best guess for what is best for Y is then what is best for X. Thus the opera is the best estimate for what is good for human welfare, given only the information that it is best for the arts. In this case, we should expect our best guess to be very likely wrong: although it is more likely than any similarly narrow alternative (“donations to the opera, or donations to X-factor?”), its absolute likelihood relative to the rest of the hypothesis space is very low (“donations to the opera, or something else?”).

Of course, we usually have more information available. Why not search directly for what is good for human welfare, instead of looking at what is good for the arts? Often searching for Y directly rather than via a weakly converging proxy indicator will do better: if one wants to select a relay team, selecting based on running speed rather than throwing distance looks the better strategy. Thus finding out a particular intervention (say the Against Malaria Foundation) comes top when looking for what is good for human welfare provides much stronger evidence that it is best for human welfare than finding out the opera comes top for a weakly converging consideration.(3)
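Footnote 3's criterion can be checked with a toy simulation (again my construction): Y is the true value, our direct estimate of Y is noisy, and X is a proxy correlated with Y at 0.6.

```python
import numpy as np

rng = np.random.default_rng(1)

def value_of_pick(proxy_rho, estimate_noise_sd, n_objects=1000, trials=2000):
    """Average true Y-value of the object we pick, choosing either by a
    noisy direct estimate of Y or by a proxy X correlated with Y."""
    direct, proxy = 0.0, 0.0
    for _ in range(trials):
        y = rng.normal(size=n_objects)                    # true values for Y
        x = proxy_rho * y + np.sqrt(1 - proxy_rho**2) * rng.normal(size=n_objects)
        y_hat = y + rng.normal(scale=estimate_noise_sd, size=n_objects)
        direct += y[y_hat.argmax()]
        proxy += y[x.argmax()]
    return direct / trials, proxy / trials

direct, proxy = value_of_pick(proxy_rho=0.6, estimate_noise_sd=3.0)
print(f"mean true value, picking by noisy direct estimate: {direct:.2f}")
print(f"mean true value, picking by proxy:                 {proxy:.2f}")
```

With these illustrative numbers the proxy picks better objects on average, since corr(estimate, truth) is roughly 0.32, below the proxy's 0.6; shrink the estimation noise and direct search wins.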

Pragmatic defeat and Poor Propagation

Eleanor may suspect bias is driving Oliver’s claim on behalf of the opera. The likelihood of the opera being best for both the arts and human welfare is low, even taking their weak convergence into account. The likelihood of bias and motivated cognition colouring Oliver’s judgement is higher, especially if Oliver has antecedent commitments to the opera. Three questions: 1) Does this affect how she should regard Oliver’s arguments? 2) Should she keep talking to Oliver, and, if she does, should she suggest to him he is biased? 3) Is there anything she can do to help ensure she doesn’t make a similar mistake?

Grant Eleanor is right that Oliver is biased. So what? It entails neither that he is wrong nor that the arguments he offers in support are unsound: he could be biased and right. It would be a case of the genetic fallacy (or perhaps ad hominem) to argue otherwise. Yet this isn’t the whole story: informal ‘fallacies’ are commonly valuable epistemic tools; we should attend not only to the content of arguments offered, but to argumentative ‘meta-data’ such as qualities of the arguer as well.(4)

Consider this example. Suppose you are uncertain whether God exists. A friendly local Christian apologist offers the reasons why (in her view) the balance of reason clearly favours Theism over Atheism. You would be unwise to judge the arguments purely ‘on the merits’: for a variety of reasons, the Christian apologist is likely to have slanted the evidence she presents to favour Theism; the impression she will give of where the balance of reason lies will poorly track where the balance of reason actually lies. Even if you find her arguments persuasive, you should at least partly discount this by what you know of the speaker.

In some cases it may be reasonable to dismiss sources ‘out of hand’ due to their bias without engaging on the merits: we may expect the probative value of the reasons they offer, when greatly attenuated by the anticipated bias, to not be worth the risks of systematic error if we mistake the degree of bias (which is, of course, very hard to calculate); alternatively, it might just be a better triage of our limited epistemic resources to ignore partisans and try and find impartial sources to provide us a better view of the balance of reason.

So: should Eleanor stop talking to Oliver about this topic? Often, no. First (or maybe zeroth), there is the chance she is mistaken about Oliver being biased, and further discussion would allow her to find this out. Second, there may be tactical reasons: she may want to persuade third parties to their conversation. Third, she may guess further discussion is the best chance of persuading Oliver, despite the bias he labours under. Fourth, it may still benefit Eleanor: although bias may undermine the strength of reasons Oliver offers, they may still provide her with valuable information. Being too eager to wholly discount what people say based on assessments of bias (which are usually partly informed by object level determinations of various issues) risks entrenching one’s own beliefs.

Another related question is whether it is wise for Eleanor to accuse Oliver of bias. There are some difficulties. Things that may bias are plentiful, thus counter-accusations are easy to make: (“I think you’re biased in favour of the opera due to your prior involvement”/”Well, I think you’re biased against the opera due to your reductionistic and insufficiently holistic conception of the good.”) They are apt to devolve into the personally unpleasant (“You only care about climate change because you are sleeping with an ecologist”) or the passive-aggressive (“I’m getting really concerned that people who disagree with me are offering really bad arguments as a smokescreen for their obvious prejudices”). They can also prove difficult to make headway on. Oliver may assert his commitment was after his good-faith determination that opera really was best for human welfare and the arts. Many, perhaps most, claims like these are mistaken, but it can be hard to tell (or prove) which.(5)

Eleanor may want to keep an ‘internal look out’ to prevent her making a similar mistake to Oliver. One clue is a surprising lack of belief propagation: we change our mind about certain matters, and yet our beliefs about closely related matters remain surprisingly unaltered. In most cases where someone becomes newly convinced of (for example) effective altruism, we predict this should propagate forward and effect profound changes to their judgements on where to best give money or what career to pursue. If Eleanor finds that this does not happen - that becoming newly persuaded of the importance of the far future does not propagate forward to change her career or giving, manifesting instead in a proliferation of ancillary reasons that support her prior behaviour - she should be suspicious of this surprising convergence between what she thought was best then, and what is best now under considerably different lights.

EA examples

Few effective altruists seriously defend the opera as a leading EA cause. Yet the general problem of endorsing surprising and suspicious convergence remains prevalent. Here are some provocative examples:

  1. The lack of path changes. Pace personal fit, friction, sunk capital, etc., it seems people who selected careers on ‘non-EA grounds’ often retain them after ‘becoming’ EA, and then provide reasons why (at least for them) persisting in their career is the best option.
  2. The claim that, even granting the overwhelming importance of the far future, it turns out that animal welfare charities are still the best to give to, given their robust benefits, positive flow through effects, and the speculativeness of far future causes.
  3. The claim that, even granting the overwhelming importance of the far future, it turns out that global poverty charities are still the best to give to, given their robust benefits, positive flow through effects, and the speculativeness of far future causes.
  4. Claims from enthusiasts of Cryonics or anti-aging research that this, additional to being good for their desires for an increased lifespan, is also a leading ‘EA’ buy.
  5. A claim on behalf of veganism that it is the best diet for animal welfare and for the environment and for individual health and for taste.

All share similar features: one has prior commitments to a particular cause area or action. One becomes aware of a new consideration which has considerable bearing on these priors. Yet these priors don’t change, and instead ancillary arguments emerge to fight a rearguard action on behalf of these prior commitments: instead of adjusting the commitments in light of the new consideration, one aims to co-opt the consideration into the service of these prior commitments.

Naturally, that some rationalize doesn’t preclude others being reasonable, and the presence of suspicious patterns of belief doesn’t make them unwarranted. One may (for example) work in global poverty due to denying the case for the far future (via a person affecting view, among many other possibilities) or aver there are even stronger considerations in favour (perhaps an emphasis on moral uncertainty and peer disagreement and therefore counting the much stronger moral consensus around stopping tropical disease over (e.g.) doing research into AI risk as the decisive consideration).

Also, for weaker claims, convergence is much less surprising. Were one to say on behalf of veganism: “It is best for animal welfare, but also generally better for the environment and personal health than carnivorous diets. Granted, it does worse on taste, but it is clearly superior all things considered”, this seems much less suspect (and also much more true) than the claim it is best by all of these metrics. It would be surprising if the optimal diet for personal health did not include at least some animal products.

Caveats aside, though, these lines of argument are suspect, and further inspection deepens these suspicions. In sketch, one first points to some benefits the prior commitment has by the lights of the new consideration (e.g. promoting animal welfare promotes antispeciesism, which is likely to make the far future trajectory go better), and second remarks about how speculative searching directly on the new consideration is (e.g. it is very hard to work out what we can do now which will benefit the far future).(6)

That the argument tends to end here is suggestive of motivated stopping. For although the object level benefits of (say) global poverty are not speculative, their putative flow-through benefits on the far future are speculative. Yet work to show that this is nonetheless less speculative than efforts to ‘directly’ work on the far future is left undone.(7) Similarly, even if it is the case the best way to make the far future go better is to push on a proxy indicator, which one? Work on why (e.g.) animal welfare is the strongest proxy out of competitors also tends to be left undone.(8) As a further black mark, it is suspect that those maintaining global poverty is the best proxy almost exclusively have prior commitments to global poverty causes, mutatis mutandis animal welfare, and so on.

We at least have some grasp of what features of (e.g.) animal welfare interventions make them good for the far future. If this (putatively) was the main value of animal welfare interventions due to the overwhelming importance of the far future, it would seem wise to try and pick interventions which maximize these features. So we come to a recursion: within animal welfare interventions, ‘object level’ and ‘far future’ benefits would be expected to only weakly converge. Yet (surprisingly and suspiciously) the animal welfare interventions recommended by the lights of the far future are usually the same as those recommended on ‘object level’ grounds.

Conclusion

If Oliver were biased, he would be far from alone. Most of us are (like it or not) at least somewhat partisan, and our convictions are in part motivated by extra-epistemic reasons: be it vested interests, maintaining certain relationships, group affiliations, etc. In pursuit of these ends we defend our beliefs against all considerations brought to bear against them. Few beliefs are indefeasible by the lights of any reasonable opinion, and few policy prescriptions are panaceas. Yet all of ours are.

It is unsurprising that the same problems emerge within effective altruism: a particular case of ‘pretending to actually try’ is ‘pretending to actually take arguments seriously’.(9) These problems seem prevalent across the entirety of EA: that I couldn’t come up with good examples for meta or far future cause areas is probably explained by either bias on my part or a selection effect: were these things less esoteric, they would err more often.(10)

There’s no easy ‘in house’ solution, but I repeat my recommendations to Eleanor: as a rule, maintaining dialogue, presuming good faith, engaging on the merits, and listening to others seems a better strategy, even if we think bias is endemic. It is also worth emphasizing that the broad (albeit weak) convergence between cause areas is fertile common ground, and a promising area for moral trade. Although it is unlikely that the best thing by the lights of one cause area is the best thing by the lights of another, it is pretty likely it will be pretty good. Thus most activities by EAs in a particular field should carry broad approbation and support from those working in others.

I come before you a sinner too. I made exactly the same sorts of suspicious arguments myself on behalf of global poverty. I’m also fairly confident my decision to stay in medicine doesn’t really track the merits either – but I may well end up a beneficiary of moral luck. I’m loath to accuse particular individuals of making the mistakes I identify here. But, insofar as readers think this may apply to them, I urge them to think again.(11)

Notes

  1. We may wonder why this is the case: the content of the different moral theories is pretty alien to one another (compare universalizable imperatives, proper functioning, and pleasurable experiences). I suggest the mechanism is implicit selection by folk or ‘commonsense’ morality. Normative theories are evaluated at least in part by how well they accord with our common moral intuitions, and they lose plausibility commensurate to how much violence they do to them. Although cases where a particular normative theory apparently diverges from common sense morality are well discussed (consider Kantianism and the inquiring murderer, or Utilitarianism and the backpacker), moral theories that routinely contravene our moral intuitions are non-starters, and thus those that survive to be seriously considered somewhat converge with common moral intuitions, and therefore with one another.
  2. There may be some asymmetry: on the object level we may anticipate the ‘flow forward’ effects of global health on x-risk to be greater than the ‘flow back’ benefits of x-risk work on global poverty. However (I owe this to Carl Shulman) the object level benefits are probably much smaller than more symmetrical ‘second order’ benefits, like shared infrastructure, communication and cross-pollination, shared expertise on common issues (e.g. tax and giving, career advice).
  3. But not always. Some things are so hard to estimate directly that using proxy measures can do better. The key question is whether the correlation between our outcome estimates and the true values is greater than that between the outcome and (estimates of) the proxy measure. If so, one should use direct estimation; if not, the proxy measure. There may also be opportunities to use both sources of information in a combined model.
  4. One example I owe to Stefan Schubert: we generally take the fact someone says something as evidence it is true. Pointing out relevant ‘ad hominem’ facts (like bias) may defeat this presumption.
  5. Population data – epistemic epidemiology, if you will – may help. If we find that people who were previously committed to the opera much more commonly end up claiming the opera is best for human welfare than other groups do, this is suggestive of bias.

    A subsequent problem is how to disentangle bias from expertise or privileged access. Oliver could suggest that those involved in the opera gain ‘insider knowledge’, and their epistemically superior position explains why they disproportionately claim the opera is best for human welfare.

    Some features can help distinguish between bias and privileged access, between insider knowledge and insider beliefs. We might be able to look at related areas, and see if ‘insiders’ have superior performance which an insider knowledge account may predict (if insiders correctly anticipate movements in consensus, this is suggestive they have an edge). Another possibility is to look at migration of beliefs. If there is ‘cognitive tropism’, where better cognizers tend to move from the opera to AMF, this is evidence against donating to the opera in general and the claim of privileged access among opera-supporters in particular. Another is to look at ordering: if the population of those ‘exposed’ to the opera first and then considerations around human welfare are more likely to make Oliver’s claims than those exposed in reverse order, this is suggestive of bias on one side or the other.

  6. Although I restrict myself to ‘meta’-level concerns, I can’t help but suggest the ‘object level’ case for these things looks about as shaky as Oliver’s object level claims on behalf of the opera. In the same way we could question: “I grant that the arts are an important aspect of human welfare, but are they the most important (compared to, say, avoiding preventable death and disability)?” or “What makes you so confident donations to the opera are the best for the arts - why not literature? or perhaps some less esoteric music?” We can pose similarly tricky questions to proponents of 2-4: “I grant that (e.g.) antispeciesism is an important aspect of making the far future go well, but is it the most important aspect (compared to, say, extinction risks)?” or “What makes you so confident (e.g.) cryonics is the best way of ensuring greater care for the future - what about militating for that directly? Or maybe philosophical research into whether this is the correct view in the first place?”

    It may well be that there are convincing answers to the object level questions, but I have struggled to find them. And, in honesty, I find the lack of public facing arguments in itself cause for suspicion.

  7. At least, undone insofar as I have seen. I welcome correction in the comments.
  8. The only work I could find taking this sort of approach is this.
  9. There is a tension between ‘taking arguments seriously’ and ‘deferring to common sense’. Effective altruism only weakly converges with common sense morality, and thus we should expect their recommendations to diverge. On the other hand, that something lies far from common sense morality is a pro tanto reason to reject it. This is better acknowledged openly: “I think the best action by the lights of EA is to research wild animal suffering, but all things considered I will do something else, as how outlandish this is by common sense morality is a strong reason against it”. (There are, of course, also tactical reasons that may speak against saying or doing very strange things.)
  10. This ‘esoteric selection effect’ may also undermine social epistemological arguments between cause areas:

    It seems to me that more people move from global poverty to far future causes than people move in the opposite direction (I suspect, but am less sure, the same applies between animal welfare and the far future). It also seems to me that (with many exceptions) far future EAs are generally better informed and cleverer than global poverty EAs.

    I don’t have great confidence in this assessment, but suppose I am right. This could be adduced as evidence in favour of far future causes: if the balance of reason favoured the far future over global poverty, this would explain the unbalanced migration and ‘cognitive tropism’ between the cause areas.

    But another plausible account explains this by selection. Global poverty causes are much more widely known than far future causes. Thus people who are ‘susceptible’ to being persuaded by far future causes were often previously persuaded by global poverty causes, whilst the reverse is not true - those susceptible to global poverty causes are unlikely to encounter far future causes first. Further, as far future causes are more esoteric, they will be disproportionately available to better-informed people. Thus, even if the balance of reason were against the far future, we would still see these trends and patterns of believers.

    I am generally a fan of equal-weight views, and of being deferential to group or expert opinion. However, selection effects like these make deriving the balance of reason from the pattern of belief deeply perplexing.

  11. Thanks to Stefan Schubert, Carl Shulman, Amanda MacAskill, Owen Cotton-Barratt and Pablo Stafforini for extensive feedback and advice. Their kind assistance should not be construed as endorsement of the content, nor responsibility for any errors.
Comment author: Vaniver 31 December 2015 07:44:06PM *  6 points [-]

Three main sources. (But first, the disclaimer About Isn't About You seems relevant--that is, even if medicine is all a sham (which I don't believe), participating in the medical system isn't necessarily a black mark on you personally.)

First is Robin Hanson's summary of the literature on health economics. The medicine tag on Robin's blog has a lot, but a good place to start is probably Cut Medicine in Half and Medicine as Scandal, followed by Farm and Pet Medicine and Dog vs. Cat Medicine. To summarize briefly: it looks like medical spending is driven by demand effects (we care, so we spend to show we care) rather than supply effects (medicine is better, so we consume more) or efficacy (we don't keep good records of how effective various doctors are). His proposal for how to fund medicine shows what he thinks a more sane system would look like. (As 'cut medicine in half' suggests, he doesn't think average medical spending has a non-positive effect, but that marginal medical spending does, to a very deep degree.)

Second is the efficiency literature on medicine. This is statisticians and efficiency experts and so on trying to apply standard industrial techniques to medicine and getting pushback that looks ludicrous to me. For example, human diagnosticians perform at the level of, or worse than, simple algorithms (I'm talking linear regressions here, not even neural networks or decision trees or so on), and this has been known in the efficiency literature for well over fifty years. Only in rare cases does this actually get implemented in practice (for example, a flowchart for dealing with heart attacks in emergency rooms was popularized a few years back and seems to have had widespread acceptance). It's kind of horrifying to realize that our society is smarter about, say, streamlining the production of cars than about streamlining the production of health, especially given the truly horrifying scale of medical errors. Stories like Semmelweis's, and the difficulty of getting doctors to wash their hands between patients, further expand this view.

Third is from 'the other side'; my father was a pastor and thus spent quite some time with dying people and their families. His experience, which is echoed by Yvain in Who By Very Slow Decay and seems to be the common opinion among end-of-life professionals in general, is that the person receiving end-of-life care generally doesn't want it and would rather die in peace, and the people around them insist that they get it (mostly so that they don't seem heartless). As Yvain puts it:

Robin Hanson sometimes writes about how health care is a form of signaling, trying to spend money to show you care about someone else. I think he’s wrong in the general case – most people pay their own health insurance – but I think he’s spot on in the case of families caring for their elderly relatives. The hospital lawyer mentioned during orientation that it never fails that the family members who live in the area and have spent lots of time with their mother/father/grandparent over the past few years are willing to let them go, but someone from 2000 miles away flies in at the last second and makes ostentatious demands that EVERYTHING POSSIBLE must be done for the patient.

Once you really grok that a huge amount of medical spending is useless torture, and if you are familiar with what it looks like to design a system to achieve an end, it becomes impossible to see the point of our medical system as healing people.

[edit] And look at today's Hanson post!

Comment author: Thrasymachus 09 January 2016 03:23:18PM *  6 points [-]

I broadly differ with the Hansonian take on medicine. I think MetaMed went bust not because it offered more effective healing to a market that doesn't really demand healing, but rather because medicine is about healing, generally does this pretty well, and MetaMed was unable to provide a significant edge in performance over standard medicine. (I should note I am a doctor, albeit a somewhat contrarian one. I wrote the 80k careers guide on medicine.)


I think medicine is generally less fertile ground for Hansonian signalling accounts, principally because health is so important for our life and happiness we're less willing to sacrifice it to preserve face (I'd wager it is an even better tax on bs than money). If the efficacy of marginal health spending is near zero in rich countries, that seems evidence in support of 'medicine is really about healing' - we want to live healthily so much we chase the returns curve all the way to zero!

There are all manner of ways in which western world medicine does badly, but I think sometimes the faults are overblown, and the remainder are best explained by human failings rather than medicine being a sham practice:

1) My understanding of the algorithms for diagnosis is that although linear regressions and simple methods can beat humans at very precise diagnostic questions (e.g. 'Given these factors of a patient who is mentally ill, what is their likelihood of committing suicide?'), humans still have better performance in messier (and more realistic) situations. It'd be surprising for IBM to unleash Watson on a very particular aspect of medicine (therapeutic choice in oncology) if simple methods could beat doctors across most of the board.

(I'd be very interested to see primary sources if my conviction is mistaken)

2) Medicine has become steadily more protocolized, and clinical decision rules, standard operating procedures and standards of care are proliferating rapidly. I agree this should have happened sooner: that Atul Gawande's surgical checklist happened within living memory is amazing, but it is catching on, and (mildly against Hansonian explanations) has been propelled by better outcomes.

I can't speak for the US, but there are clear protocols in the UK about initial emergency management of heart attacks. Indeed, take a gander at the UK's 'NICE Pathways' which gives a flow chart on how to act in all circumstances where a heart attack is suspected.

3) I agree that the lack of efficacy information about individual doctors isn't great. Reliable data on this is far from trivial to acquire, however, and that, together with doctors' understandable self-interest in not being too closely monitored, seems to explain this lacuna as well as the Hansonian story does. (Patients tend to want this information when it is available, which doesn't fit well with them colluding with their doctors and family in a medical ritual unconnected to their survival.)

4) Over-treatment is rife, but the US is generally held up as an anti-exemplar of this fault, and (at least judging by the anecdotes) medics in the UK are better (albeit still far from perfect) at avoiding flogging the patient to death with medical torture. Outside of this zero-or-negative margin, performance is better: it is unclear how much is attributable to medicine, but life expectancy and disease-free life expectancy are rising, and age-standardized mortality rates for most conditions are declining.


Now, why MetaMed failed (I appreciate one should get basically no credit for predicting a startup will fail, given this is the usual outcome, but I called it a long time ago):

MetaMed's business model relied on there being a lot of low-hanging fruit to pluck: that in many cases, a diagnosis or treatment would elude the clinician because they weren't apprised of the most recent evidence, were only able to deal in generalities rather than personalized recommendations, or were simply less adept at synthesizing the evidence available.

If it were MetaMed versus the average doctor - the one who spends next to no time reading academic papers, who is incredibly busy, stressed out, and so on - you'd be forgiven for thinking MetaMed had an edge. However, medics (especially generalists) long ago realized they have no hope of keeping abreast of the large medical literature on their own. Enter division of labour: they instead commission the relevant experts to survey, aggregate and summarize the current state of the evidence base, leaving themselves the simpler task of applying it in their practice. To keep it up to date, they commission the experts to repeat this fairly often.

I mentioned NICE (the National Institute for Clinical Excellence) earlier. They're the body in the UK responsible (inter alia) for deciding which drugs and treatments get funded on the NHS. They spend a vast amount of time on evidence synthesis and meta-analysis. To see what sort of work this produces, google 'NICE {condition}'. An example for depression is here. Although I think the UK is world-leading in this respect, there are similar bodies in other countries, as well as commercial organizations (e.g. UpToDate).

Against this, MetaMed never had an edge: they didn't have groups of subject-matter experts to call upon for each condition or treatment in question, nor (despite a lot of mathsy inclination amongst them) did they by and large have parity in meta-analysis, evidence synthesis and related skills. They were also outmatched in the quantity of man-hours that could be deployed, and by the great headstart NICE et al. already had. When their website was still up I looked at some of their example reports, and my view was they were significantly inferior to what you could get via NICE (for free!) or UpToDate or similar services with far lower fees.

MetaMed might have had a hope if, in the course of producing these general evidence summaries, a lot of fine-grained data were being aggregated away to produce something 'one size fits all' - their edge would be going back to the original data to find that although drug X is generally good for a condition, in one's particular case, in virtue of age, genotype, or whatever else, drug Y is superior.

However, this data by and large does not exist: much of medicine is still at the stage of working out whether something works generally, rather than delving into differential response and efficacy. It is not clear it ever will - humans might be sufficiently similar to one another that for almost all of them one treatment will be the best. The general success of increasing protocolization in medicine is some further weak evidence of this point.


I generally adduce MetaMed as an example of rationalist overconfidence: the idea that insurgent Bayesians can simply trounce the relevant professionals at what those professionals purport to do, because the professionals are mired in signalling and the like. But again, given the expectation was for it to fail (as most startups do), this doesn't provide much evidence. If it had succeeded, I'd have updated much more strongly towards the magic of rationalism meaning you can win, and the world being generally dysfunctional.

Comment author: Thrasymachus 26 December 2015 12:05:22AM 4 points [-]

Congratulations on doing this sort of careful self-analysis. I'd like to recommend a further improvement.

Pre-register/Publish your intentions for these trials and analysis in advance

The file-drawer problem is well known, as are the risks of post-hoc changes in analysis. Publishing what data you are gathering and the analyses you will perform on it reassures more skeptical people against both of these worries, and seems pretty easy to do.

Log-normal Lamentations

12 Thrasymachus 19 May 2015 09:12PM

[Morose. Also very roughly drafted.]

Normally, things are distributed normally. Human talents may turn out to be one of these things. Some people are lucky enough to find themselves on the right side of these distributions – smarter than average, better at school, more conscientious, whatever. To them go many spoils – probably more so now than at any time before, thanks to the information economy.

There’s a common story told about a hotshot student at school whose ego crashes to earth when they go to university and find themselves among a group all as special as they thought they were. The reality might be worse: many of the groups the smart or studious segregate into (physics professors, Harvard undergraduates, doctors) have threshold-like (or near-threshold) effects: only those with straight A’s, only those with IQs > X, etc. need apply. This introduces a positive skew to the population: most members (and the median) fall below the group average, brought up by a long tail of the (even more) exceptional. Instead of comforting ourselves by looking at the entire population, to which we compare favorably, most of us will look around our peer group, find ourselves in the middle, and have to look a long way up to the best. 1
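A toy simulation of this filtering effect (the numbers are arbitrary IQ-style units, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A normally distributed 'talent' in the general population, then a
# threshold-like filter, as with selective professions.
population = rng.normal(loc=100, scale=15, size=1_000_000)
admitted = population[population > 130]    # e.g. a two-sigma admissions bar

print(f"mean of admitted group:   {admitted.mean():.1f}")
print(f"median of admitted group: {np.median(admitted):.1f}")
print(f"share below their own group mean: {(admitted < admitted.mean()).mean():.1%}")
```

Truncating a normal distribution at a threshold leaves a positively skewed group: the mean sits above the median, so most members fall below their own group's average.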

[Figure: normal distribution]

Yet part of growing up is recognizing there will inevitably be people better than you are – the more able may be able to buy their egos time, but no more. But that needn’t be so bad: in several fields (such as medicine) it can be genuinely hard to judge ‘betterness’, and so harder to find exemplars to illuminate your relative mediocrity. Often there are a variety of dimensions to being ‘better’ at something: although I don’t need to try too hard to find doctors who are better at some aspect of medicine than I am (more knowledgeable, kinder, more skilled in communication, etc.), it is mercifully rare to find doctors who are better than me in all respects. And often the tails are thin: if you’re around 1 standard deviation above the mean, people many times further from the average than you will still be extraordinarily rare, even if you had a good yardstick by which to compare them to yourself.

Look at our thick-tailed works, ye average, and despair! 2

One nice thing about the EA community is that they tend to be an exceptionally able bunch: I remember being in an ‘intern house’ that housed the guy who came top in philosophy at Cambridge, the guy who came top in philosophy at Yale, and the guy who came top in philosophy at Princeton – and although that isn’t a standard sample, we seem to be drawn disproportionately not only from those who went to elite universities, but those who did extremely well at elite universities. 3 This sets the bar very high.

Many of the ‘high impact’ activities these high-achieving people go into (or aspire to go into) are more extreme than normal(ly distributed): log-normal commonly, but perhaps often Pareto. The distribution of income or of outcomes from entrepreneurial ventures (and therefore of upper bounds on what can be ‘earned to give’), the distribution of papers or citations in academia, the impact of direct projects, and (more tenuously) degree of connectivity or importance in social networks or movements are all examples: a few superstars and ‘big winners’, and orders-of-magnitude smaller returns for the rest.
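As a small illustration of how lopsided such fields can be (the log-normal sigma is an arbitrary choice, not an estimate of any real career distribution):

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 'careers' with log-normally distributed outcomes.
outcomes = np.sort(rng.lognormal(mean=0.0, sigma=2.0, size=10_000))

print(f"median outcome: {np.median(outcomes):.2f}")
print(f"mean outcome:   {outcomes.mean():.2f}")
print(f"share of the total from the top 1%: {outcomes[-100:].sum() / outcomes.sum():.1%}")
```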

Insofar as I have an ‘EA career path’, mine is earning to give: if I were trying to feel good about the good I was doing, my first port of call would be my donations. In sum, I’ve given quite a lot to charity – ~£15,000 and counting – which I’m proud of. Yet I’m no banker (or algo-trader) – those who are really good (or lucky, or both) can end up out of university with higher starting salaries than my peak expected salary, and so can give away more than ten times what I will be able to. I know several of these people, and the running tally of each of their donations is often around ten times my own. If they or others become even more successful in finance, or very rich starting a company, there might be several more orders of magnitude between their giving and mine. My contributions may be little more than a rounding error to their work.

A shattered visage

Earning to give is kinder to the relatively minor players than other ‘fields’ of EA activity, as even though Bob’s or Ellie’s donations are far larger, they do not overdetermine my own: that their donations dewormed 1000x children does not make the 1x I dewormed any less valuable. It is unclear whether this applies to other ‘fields': Suppose I became a researcher working on a malaria vaccine, but this vaccine is discovered by Sally the super scientist and her research group across the world. Suppose also that Sally’s discovery was independent of my own work. Although it might have been ex ante extremely valuable for me to work on malaria, its value is vitiated when Sally makes her breakthrough, in the same way a lottery ticket loses value after the draw.

So there are a few ways an Effective Altruist mindset can depress our egos:

  1. It is generally a very able and high achieving group of people, setting the ‘average’ pretty high.
  2. ‘Effective Altruist’ fields tend to be heavy-tailed, so that being merely ‘average’ (for EAs!) in something like earning to give means having a much smaller impact compared to one of the (relatively common) superstars.
  3. (Our keenness for quantification makes us particularly inclined towards and able to make these sorts of comparative judgements, ditto the penchant for taking things to be commensurate).
  4. Many of these fields have ‘lottery-like’ characteristics where ex ante and ex post value diverge greatly. ‘Taking a shot’ at being an academic or entrepreneur or politician or leading journalist may be a good bet ex ante for an EA because the upside is so high even if their chances of success remain low (albeit better than the standard reference class). But if the median outcome is failure, the majority who will fail might find the fact it was a good idea ex ante of scant consolation – rewards (and most of the world generally) run ex post facto.

What remains besides

I haven’t found a ready ‘solution’ for these problems, and I’d guess there isn’t one to be found. We should be sceptical of ideological panaceas that can do no wrong and everything right, and EA is no exception: we should expect it to have some costs, and perhaps this is one of them. If so, better to accept it rather than defend the implausibly defensible.

In the same way I could console myself, on confronting a generally better doctor: “Sure, they are better at A, and B, and C, … and Y, but I’m better at Z!”, one could do the same with regard to the axes of one’s ‘EA work’. “Sure, Ellie the entrepreneur has given hundreds of times more money to charity, but what’s she like at self-flagellating blog posts, huh?” There’s an incentive to diversify, as (combinatorically) it will be rarer to find someone who strictly dominates you, and although we want to compare across diverse fields, doing so remains difficult. Pablo Stafforini has asked elsewhere whether EAs should be ‘specialising’ more instead of spreading their energies over disparate fields: perhaps this makes that less surprising. 4

Insofar as people’s self-esteem is tied up with their work as EAs (and, hey, shouldn’t it be, in part?), there is perhaps a balance to be struck between soberly and frankly discussing the outcomes and merits of our actions, and being gentle to avoid hurting our peers by talking down their work. Yes, we would all want to know if what we were doing was near useless (or even net negative), but this should be broken with care. 5

‘Suck it up’ may be the best strategy. These problems become more acute the more we care about our ‘status’ in the EA community; the pleasure we derive from not only doing good, but doing more good than our peers; and our desire to be seen as successful. Good though it is for these desires to be sublimated to better ends (far preferable all else equal that rivals choose charitable donations rather than Veblen goods to be the arena of their competition), it would be even better to guard against these desires in the first place. Primarily, worry about how to do the most good. 6

Notes:

  1. As further bad news, there may be a progression of ‘tiers’ which are progressively more selective, somewhat akin to stacked band-pass filters: even if you were the best maths student at your school, then the best at university, you may still find yourself plonked around the median in a positively-skewed population of maths professors – and if you were an exceptional maths professor, you might find yourself plonked around the median in the population of Fields medalists. And so on (especially – see infra – if the underlying distribution is something scale-free).
  2. I wonder how much this post is a monument to the grasping vaingloriousness of my character…
  3. Pace: academic performance is not the only (nor the best) measure of ability. But it is a measure, and a fairly germane one for the fairly young population ‘in’ EA.
  4. Although there are other more benign possibilities, given diminishing marginal returns and the lack of people available. As a further aside, I’m wary of arguments/discussions that note bias or self-serving explanations that lie parallel to an opposing point of view (“We should expect people to be more opposed to my controversial idea than they should be due to status quo and social desirability biases”, etc.) First because there are generally so many candidate biases available they end up pointing in most directions; second because it is unclear whether knowing about or noting biases makes one less biased; and third because generally more progress can be made on object level disagreement than on trying to evaluate the strength and relevance of particular biases.
  5. Another thing I am wary of is Crocker’s rules: the idea that you unilaterally declare: ‘don’t worry about being polite with me, just tell it to me straight! I won’t be offended’. Naturally, one should try and separate one’s sense of offense from whatever information was there – it would be a shame to reject a correct diagnosis of our problems because of how it was said. Yet that is very different from trying to eschew this ‘social formatting’ altogether: people (myself included) generally find it easier to respond well when people are polite, and I suspect this even applies to those eager to make Crocker’s Rules-esque declarations. We might (especially if we’re involved in the ‘rationality’ movement) want to overcome petty irrationalities like incorrectly updating on feedback because of an affront to our status or self esteem. Yet although petty, they are surprisingly difficult to budge (if I cloned you 1000 times and ‘told it straight’ to half, yet made an effort to be polite with the other half, do you think one group would update better?) and part of acknowledging our biases should be an acknowledgement that it is sometimes better to placate them rather than overcome them.
  6. Max Ehrmann put it well:

    … If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself.

    Enjoy your achievements as well as your plans. Keep interested in your own career, however humble…

Comment author: estimator 03 April 2015 06:57:08PM *  12 points [-]

I was kinda surprised to see IQ as an external factor; my impression is that internal vs. external locus of control is actually personality traits vs. circumstances and environment, and IQ obviously falls into the first category.

If you consider IQ and mental health external factors, what are the internal factors, then? Willpower? But willpower is determined by brain structure, just as IQ, mental health, and other personality traits are.

Basically, if you assign everything to the "external" category, so that the "internal" is an empty set (or almost empty), then one's success is determined by "external" factors. No surprise here.

Comment author: Thrasymachus 05 April 2015 12:58:59AM 2 points [-]

[I've seen your follow-up post on discussion. I thought it would be best to reply to both here.]

It may be that everything is determined by prior events all the way to the big bang. So there's no 'internal willer' isolated from previous events that can steer us one way or another. But we can keep talking about 'internal' and 'external' loci of control on a compatibilist view of free will (which I'd guess is the common view, including amongst those affirming an internal locus of control).

On this sort of view, internal factors are just those our choices can change - external factors, those which our choices cannot. If I want to run faster, how much time I spend training is an internal factor: it influences how fast I can run, and I can choose (in the compatibilist sense) how much time I spend training. If I have a dense hemiparesis secondary to a birth injury, that's an external factor - it also influences how fast I can run (indeed, whether I can run at all), and I can't choose whether or not to have a hemiparesis.

So I take those with an internal locus of control to think that - in the main - the outcomes that matter are mainly sensitive to factors that in turn are sensitive to our choices (how hard I work, how long I practice, etc.), whilst those with an external locus of control say that these things are primarily determined by factors outside of that person's control.

It seems clear to me that IQ should be in the 'external factors' camp: IQ seems to be set early in life, has a large heritable component, and the non-heritable bit is likely due to environmental factors that I also can't change for myself, either at the time or retroactively. The failure of brain training programs suggests that you can't improve your IQ by any feat of effort. And we know it has all sorts of influences on how our lives turn out. If I have (due to factors outside my control) an IQ more than one standard deviation below the mean, I won't be able to become a doctor, or a physicist (or, indeed, join the US armed services) - no matter what else I do. Mutatis mutandis for cases where it might not serve as a strict bar but as a variable handicap (c.f. evidence that the beneficial effects of IQ have no clear ceiling).
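
For scale, a back-of-envelope line (assuming the usual convention that IQ is normally distributed with mean 100 and SD 15, so 'more than one standard deviation below the mean' means below 85):

    from math import erf, sqrt

    # Share of a normal population more than one SD below the mean:
    # Phi(-1), via the standard normal CDF Phi(z) = (1 + erf(z/sqrt(2))) / 2.
    share_below = (1 + erf(-1.0 / sqrt(2.0))) / 2
    print(f"{share_below:.1%}")  # ~15.9%

So the bar in question falls on roughly one person in six.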

The alternative account you propose for demarcating 'external' versus 'internal' factors - internal factors are those causally downstream of your brain's neural output - looks too broad: all internal factors need to be downstream of our neural output, but that isn't sufficient. The hemiparesis case I allude to above is one example: that I can't move one side of my body is due to my neural output, but that output is itself the result of an insult which wasn't due to my neural output. I think the same applies to other cases of brain damage and particular types of mental illness: indeed, this is implicitly recognised by the criminal justice system.

(I've added remarks to this effect in the body of the post - thanks for this comment!)

Against the internal locus of control

6 Thrasymachus 03 April 2015 05:48PM

What do you think about these pairs of statements?

  1. People's misfortunes result from the mistakes they make.
  2. Many of the unhappy things in people's lives are partly due to bad luck.
  1. In the long run, people get the respect they deserve in this world.
  2. Unfortunately, an individual's worth often passes unrecognized no matter how hard he tries.
  1. Becoming a success is a matter of hard work; luck has little or nothing to do with it.
  2. Getting a good job mainly depends on being in the right place at the right time.

They have a similar theme: the first statement suggests that an outcome (misfortune, respect, or a good job) for a person is the result of their own action or volition. The second assigns the outcome to some external factor like bad luck.(1)

People who tend to think their own attitudes or efforts can control what happens to them are said to have an internal locus of control, those who don't, an external locus of control. (Call them 'internals' and 'externals' for short).

Internals seem to do better at life, pace obvious confounding: maybe instead of internals doing better by virtue of their internal locus of control, being successful inclines you to attribute success to internal factors and so become more internal, and vice versa if you fail.(2) If you don't think the relationship is wholly confounded, then there is some prudential benefit to becoming more internal.

Yet internal versus external is not just a matter of taste, but a factual claim about the world. Do people, in general, get what their actions deserve, or are outcomes generally owed to matters outside their control?

Why the external view is right

Here are some reasons in favour of an external view:(3)

  1. Global income inequality is marked (e.g. someone in the bottom 10% of the US population by income is still richer than two thirds of the world's population - more here). The main predictor of your income is your country of birth, which is thought to explain around 60% of the variance: not only more important than any other factor, but more important than all other factors put together (a toy sketch after this list unpacks what 'explaining 60% of the variance' means).
  2. Of course, the 'remaining' 40% might not be solely internal factors either. Another external factor we could put in would be parental class. Include that, and the two factors explain 80% of variance in income.
  3. Even conditional on being born in the right country (and to the right class), success may still not be a matter of personal volition. One robust predictor of success (grades in school, job performance, income, and so on) is IQ. The precise determinants of IQ remain controversial: it is known to be highly heritable, and the proposed 'non-genetic' factors (early childhood environment, intra-uterine environment, etc.) are similarly outside one's locus of control.
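
For those unused to 'variance explained' talk, here is a minimal sketch in Python - a toy generative model with made-up component sizes chosen to reproduce the 60% figure, not the actual cross-country data:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: log-income = country effect + everything else, with the
    # component variances *assumed* so country contributes 60% of the total.
    n = 100_000
    country = rng.normal(0.0, np.sqrt(0.6), size=n)   # fixed at birth
    residual = rng.normal(0.0, np.sqrt(0.4), size=n)  # all other factors
    log_income = country + residual

    share = np.var(country) / np.var(log_income)
    print(f"variance explained by country of birth: {share:.0%}")  # ~60%

'Explains 60% of the variance' is a population-level claim: knowing someone's country of birth removes about 60% of your (squared-error) uncertainty about their income before you learn anything about their choices.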

On cursory examination, the contours of how our lives turn out are set by factors outside our control, merely by where we are born and who our parents are. Even after this we know of various predictors, similarly outside (or mostly outside) our control, that exert their effects on how our lives turn out: IQ is one, but we could throw in personality traits, mental health, height, attractiveness, etc.

So to the question 'What determined how I turned out, compared to everyone else on the planet?', the answer surely has to be primarily about external factors, and our internal drive or will is relegated a long way down the list. Even if we look at narrower questions, like 'What has made me turn out the way I am, versus all the other people who were likewise born in rich countries in comfortable circumstances?', it is still unclear whether the locus of control resides within our will: perhaps a combination of our IQ, height, gender, race, risk of mental illness and so on will still do the bulk of the explanatory work.(4)

Bringing the true and the prudentially rational together again

If folks with an internal locus of control succeed more, yet the external view is generally closer to the truth of the matter, this is unfortunate. What is true and what is prudentially rational seem to be diverging, such that it might be in your interests not to know about the evidence supporting an external locus of control, as deluding yourself into the internal view would lead to your greater success.

Yet it is generally better not to believe falsehoods. Further, the internal view may have some costs. One possibility is fuelling a just-world fallacy: if one thinks that outcomes are generally internally controlled, then a corollary is that when bad things happen to someone, or they fail at something, it is primarily their fault rather than them being a victim of circumstance.

So what next? Perhaps the right view is to say that: although most important things are outside our control, not everything is. Insofar as we do the best with what things we can control, we make our lives go better. And the scope of internal factors - albeit conditional on being a rich westerner etc. - may be quite large: it might determine whether you get through medical school, publish a paper, or put in enough work to do justice to your talents. All are worth doing.

Acknowledgements

Inspired by Amanda MacAskill's remarks, and in partial response to Peter McIntyre. Neither is responsible for what I've written, and neither the former's agreement nor the latter's disagreement with this post should be assumed.

 

1) Some ground-clearing: free will can begin to loom large here - after all, maybe my actions are just a result of my brain's particular physical state, and my brain's particular physical state at t depends on its state at t-1, and so on and so forth all the way to the big bang. If so, there is no 'internal willer' in which my internal locus of control can reside.

However, even if that is so, we can parse things in a compatibilist way: 'internal' factors are those which my choices can affect; 'external' factors are those which my choices cannot affect. "Time spent training" is an internal factor as to how fast I can run, as (borrowing from Hume) if I wanted to spend more time training, I could spend more time training, and vice versa. In contrast, "hemiparesis secondary to birth injury" is an external factor, as I had no control over whether it happened to me, and have no means of reversing it now. So the first set of answers implies that the results of our choices matter more, whilst the second set assigns more weight to things 'outside our control'.

2) In fairness, there's a pretty good story as to why there should be 'forward action': in the cases where outcome is a mix of 'luck' factors (which are a given to anyone), and 'volitional ones' (which are malleable), people inclined to think the internal ones matter a lot will work hard at them, and so will do better when this is mixed in with the external determinants.

3) This ignores edge cases where we can clearly see the external factors dominate - e.g. getting childhood leukaemia, getting struck by lightning etc. - I guess sensible proponents of an internal locus of control would say that there will be cases like this, but for most people, in most cases, their destiny is in their hands. Hence I focus on population level factors.

4) Ironically, one may wonder to what extent having an internal versus external view is itself an external factor.
