Compartmentalizing: Effective Altruism and Abortion
Cross-posted on my blog and the effective altruism forum with some minor tweaks; apologies if some of the formatting hasn't copied across. The article was written with an EA audience in mind but it is essentially one about rationality and consequentialism.
Summary: People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important – many standard arguments on both sides of moral issues like the permissibility of abortion are significantly undermined or otherwise affected by EA considerations, especially moral uncertainty.
A long time ago, Will wrote an article about how a key part of rationality was taking ideas seriously: fully exploring ideas, seeing all their consequences, and then acting upon them. This is something most of us do not do! I for one certainly have trouble. He later partially redacted it, and Anna has an excellent article on the subject, but at the very least decompartmentalizing is a very standard part of effective altruism.
Similarly, I think people selectively apply Effective Altruist (EA) principles. People are very willing to apply them in some cases, but when those principles would cut at a core part of the person’s identity – like requiring them to dress appropriately so they seem less weird – people are much less willing to take those EA ideas to their logical conclusion.
Consider your personal views. I’ve certainly changed some of my opinions as a result of thinking about EA ideas. For example, my opinion of bednet distribution is now much higher than it once was. And I’ve learned a lot about how to think about some technical issues, like regression to the mean. Yet I realized that I had rarely done a full 180 – and I think this is true of many people:
- Many think EA ideas argue for more foreign aid – but did anyone come to this conclusion who had previously been passionately anti-aid?
- Many think EA ideas argue for vegetarianism – but did anyone come to this conclusion who had previously been passionately carnivorous?
- Many think EA ideas argue against domestic causes – but did anyone come to this conclusion who had previously been a passionate nationalist?
Yet this is quite worrying. Given the power and scope of many EA ideas, it seems that they should lead to people changing their minds on issues where they had previously been very certain, and indeed emotionally involved.
Obviously we don't need to apply EA principles to everything – we can probably continue to brush our teeth without much reflection. But we probably should apply them to issues which are seen as being very important: given the importance of the issues, any implications of EA ideas would probably be important implications.
Moral Uncertainty
In his PhD thesis, Will MacAskill argues that we should treat normative uncertainty in much the same way as ordinary positive uncertainty; we should assign credences (probabilities) to each theory, and then try to maximise the expected morality of our actions. He calls this idea ‘maximise expected choice-worthiness’, and if you’re into philosophy, I recommend reading the paper. As such, when deciding how to act we should give greater weight to the theories we consider more likely to be true, and also give more weight to theories that consider the issue to be of greater importance.
This is important because it means that a novel view does not have to be totally persuasive to demand our observance. Consider, for example, vegetarianism. Maybe you think there's only a 10% chance that animal welfare is morally significant – you're pretty sure they're tasty for a reason. Yet if the consequences of eating meat are very bad in those 10% of cases (murder or torture, if the animal rights activists are correct), and the advantages are not very great in the other 90% (tasty, some nutritional advantages), we should not eat meat regardless. Taking into account the size of the issue at stake, as well as the probability of its being correct, means paying more respect to 'minority' theories.
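For concreteness, here's a minimal sketch of the expected choice-worthiness calculation for this example; the payoff numbers are illustrative assumptions, not figures from the thesis.

```python
# A minimal sketch of 'maximise expected choice-worthiness' for the
# vegetarianism example. All payoff numbers are illustrative assumptions.
credence_animals_count = 0.10  # P(animal welfare is morally significant)

# Choice-worthiness of each action under each hypothesis (made-up units).
eat_meat = {"animals_count": -100.0,    # comparable to murder/torture
            "animals_dont_count": 1.0}  # tasty, some nutritional advantages
abstain = {"animals_count": 0.0, "animals_dont_count": 0.0}

def expected_choiceworthiness(action, p=credence_animals_count):
    return p * action["animals_count"] + (1 - p) * action["animals_dont_count"]

print(expected_choiceworthiness(eat_meat))  # -9.1: the 10% case dominates
print(expected_choiceworthiness(abstain))   #  0.0: abstaining wins
```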
And this is more of an issue for EAs than for most people. Effective Altruism involves a group of novel moral premisses, like cosmopolitanism, the moral imperative for cost-effectiveness and the importance of the far future. Each of these implies that our decisions are in some way very important, so even if we assign them only a small credence, their plausibility implies radical revisions to our actions.
One issue that Will touches on in his thesis is the issue of whether fetuses morally count. In the same way that we have moral uncertainty as to whether animals, or people in the far future, count, so too we have moral uncertainty as to whether unborn children are morally significant. Yes, many people are confident they know the correct answer – but there are many such people on each side of the issue. Given the degree of disagreement on the issue, among philosophers, politicians and the general public, it seems like the perfect example of an issue where moral uncertainty should be taken into account – indeed Will uses it as a canonical example.
Consider the case of a pregnant woman, Sarah, wondering whether it is morally permissible to abort her child.[1] The alternative course of action she is considering is putting the child up for adoption. In accordance with the level of social and philosophical debate on the issue, she is uncertain as to whether aborting the fetus is morally permissible. If it's morally permissible, it's merely permissible – it's not obligatory. She follows the example from Normative Uncertainty and constructs the following table:

| | Abortion is permissible | Abortion is impermissible |
|---|---|---|
| Abort | Permissible | Impermissible |
| Adopt | Permissible | Permissible |
In the best case scenario, abortion has nothing to recommend it, as adoption is also permissible. In the worst case, abortion is actually impermissible, whereas adoption is permissible. As such, adoption dominates abortion.
However, Sarah might not consider this representation adequate. In particular, she thinks that now is not the best time to have a child, and would prefer to avoid it.[2] She has made plans which are inconsistent with being pregnant, and prefers not to give birth at the current time. So she amends the table to take these preferences into account:

| | Abortion is permissible | Abortion is impermissible |
|---|---|---|
| Abort | Permissible, avoids an unwanted pregnancy | Impermissible |
| Adopt | Permissible, but an unwanted pregnancy | Permissible, but an unwanted pregnancy |
Now adoption no longer strictly dominates abortion, because she prefers abortion to adoption in the scenario where it is morally permissible. As such, she considers her credence: she considers the pro-choice arguments slightly more persuasive than the pro-life ones: she assigns a 70% credence to abortion being morally permissible, but only a 30% chance to its being morally impermissible.
Looking at the table with these numbers in mind, intuitively it seems that again it’s not worth the risk of abortion: a 70% chance of saving oneself inconvenience and temporary discomfort is not sufficient to justify a 30% chance of committing murder. But Sarah’s unsatisfied with this unscientific comparison: it doesn’t seem to have much of a theoretical basis, and she distrusts appeals to intuitions in cases like this. What is more, Sarah is something of a utilitarian; she doesn’t really believe in something being impermissible.
Fortunately, there's a standard tool for making interpersonal welfare comparisons: QALYs. We can convert the previous table into QALYs, with the moral uncertainty now being expressed as uncertainty as to whether saving fetuses generates QALYs. If it does, then it generates a lot: supposing she's at the end of her first trimester, if she doesn't abort the baby it has a 98% chance of surviving to birth, at which point its life expectancy is 78.7 years in the US, for 0.98 x 78.7 = 77.126 expected QALYs. This calculation assigns no QALYs to the fetus's 6 months of existence between now and birth. If fetuses are not worthy of ethical consideration, then it accounts for 0 QALYs.
We also need to assign QALYs to Sarah. For an upper bound, being pregnant is probably not much worse than having both your legs amputated without medication, which costs 0.494 QALYs per year, so let's conservatively use 0.494. She has an expected 6 months of pregnancy remaining, so we divide by 2 to get 0.247 QALYs. Women's Health Magazine gives the odds of maternal death during childbirth as 0.03% for 2013; we'll round up to 0.05% to take into account the risk of non-fatal injury. Women at 25 have a remaining life expectancy of around 58 years, so that's 0.05% x 58 = 0.029 QALYs. In total that gives us an estimate of 0.247 + 0.029 = 0.276 QALYs. If the baby doesn't survive to birth, however, some of these costs will not be incurred, so the truth is probably slightly lower than this. All in all, 0.276 QALYs seems like a reasonably conservative figure.
Obviously you could refine these numbers a lot (for example, years of old age are likely to be at lower quality of life, there are some medical risks to the mother from aborting a fetus, etc.) but they’re plausibly in the right ballpark. They would also change if we used inherent temporal discounting, but probably we shouldn’t.
We can then take into account her moral uncertainty directly, and calculate the expected QALYs of each action:
- If she aborts the fetus, our expected QALYs are 70% x 0 + 30% x (-77.126) = -23.138
- If she carries the baby to term and puts it up for adoption, our expected QALYs are 70% x (-0.276) + 30% x (-0.276) = -0.276
Which again suggests that the moral thing to do is to not abort the baby. Indeed, the life expectancy is so long at birth that it quite easily dominates the calculation: Sarah would have to be extremely confident in rejecting the value of the fetus to justify aborting it. So, mindful of overconfidence bias, she decides to carry the child to term.
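For anyone who wants to check the arithmetic, here is a minimal sketch of Sarah's calculation, using the figures derived above.

```python
# Sarah's expected-QALY comparison, using the figures from the text.
p_fetus_counts = 0.30          # credence that fetal QALYs are morally real
qalys_if_counts = 0.98 * 78.7  # ~77.126: chance of surviving to birth times
                               # US life expectancy at birth
pregnancy_cost = 0.276         # QALY cost to Sarah of carrying to term

ev_abortion = p_fetus_counts * -qalys_if_counts  # ~ -23.138
ev_adoption = -pregnancy_cost                    # = -0.276
print(ev_abortion, ev_adoption)  # adoption wins by a wide margin
```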
Indeed, we can show just how confident in the fetus's lack of moral significance one would have to be to justify aborting one. Here is a sensitivity table, showing credence in the moral significance of fetuses on the y-axis, and the direct QALY cost of pregnancy on the x-axis, for a wide range of possible values. (The direct QALY cost of pregnancy is obviously bounded above by its limited duration.) As is immediately apparent, one has to be very confident that fetuses lack moral significance, and pregnancy has to be very bad, before aborting a fetus becomes even slightly QALY-positive. For moderate values, it is extremely QALY-negative.
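Since the table itself may not have copied across, here is a sketch that generates such a sensitivity grid; the particular credence and cost values are illustrative choices.

```python
# Net expected QALYs of abortion relative to adoption: positive entries
# favour abortion. The grids of credences and costs are illustrative.
qalys_if_counts = 0.98 * 78.7  # ~77.126

credences = [0.001, 0.01, 0.05, 0.10, 0.30]  # P(fetal QALYs count), y-axis
costs = [0.1, 0.276, 0.5, 1.0]               # QALY cost of pregnancy, x-axis

print("cost grid:", costs)
for p in credences:
    row = [cost - p * qalys_if_counts for cost in costs]
    print(p, ["%+.3f" % v for v in row])
# Abortion only comes out ahead when the credence is tiny relative to the
# pregnancy cost; for moderate credences every entry is strongly negative.
```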
Other EA concepts and their applications to this issue
Of course, moral uncertainty is not the only EA principle that could have bearing on the issue, and given that the theme of this blogging carnival, and this post, is things we're overlooking, it would be remiss not to give at least a broad overview of some of the others. Here I don't intend to judge how persuasive any given argument is – as we discussed above, this debate has gone on without settlement for thousands of years – but merely to show the ways that common EA arguments affect the plausibility of the different positions. This is a section about the directionality of EA concerns, not their overall magnitudes.
Not really people
One of the most important arguments for the permissibility of abortion is that fetuses are in some important sense 'not really people'. In many ways this argument resembles the anti-animal-rights argument that animals are also 'not really people'. We already covered above the way that considerations of moral uncertainty undermine both these arguments, but it's also noteworthy that in general the two views seem mutually supporting (or mutually undermining, if both are false). Animal-rights advocates often appeal to the idea of an 'expanding circle' of moral concern. I'm skeptical of such an argument, but it seems clear that the larger your circle, the more likely fetuses are to end up on the inside. The fact that, in the US at least, animal activists tend to be pro-abortion seems to be more of a historical accident than anything else. We could imagine alternative-universe political coalitions, where a "Defend the Weak; they're morally valuable too" party faced off against an "Exploit the Weak; they just don't count" party. In general, to the extent that EAs care about animal suffering (even insect suffering), EAs should tend to be concerned about the welfare of the unborn.
Not people yet
A slightly different common argument is that while fetuses will eventually be people, they're not people yet. Since they're not people right now, we don't have to pay any attention to their rights or welfare right now. Indeed, many people make short-sighted decisions that implicitly assign very little value to the futures of people currently alive, or even to their own futures – through self-destructive drug habits, or simply failing to save for retirement. If we don't assign much value to our own futures, it seems very sensible to disregard the futures of those not even born. And even if people who disregarded their own futures were simply negligent, we might still be concerned about things like the non-identity problem.
Yet it seems that EAs are almost uniquely unsuited to this response. EAs do tend to care explicitly about future generations. We put considerable resources into investigating how to help them, whether through addressing climate change or existential risks. And yet these people have far less of a claim to current personhood than fetuses, who at least have current physical form, even if it is diminutive. So again to the extent that EAs care about future welfare, EAs should tend to be concerned about the welfare of the unborn.
Replaceability
Another important EA idea is that of replaceability. Typically this arises in contexts of career choice, but there is a different application here. The QALYs associated with aborted children might not be so bad if the mother will go on to have another child instead. If she does, the net QALY loss is much lower than the gross QALY loss. Of course, the benefits of aborting the fetus are equivalently much smaller – if she has a child later on instead, she will have to bear the costs of pregnancy eventually anyway. This resembles concerns that maybe saving children in Africa doesn’t make much difference, because their parents adjust their subsequent fertility.
The plausibility behind this idea comes from the idea that, at least in the US, most families have a certain ideal number of children in mind, and basically achieve this goal. As such, missing an opportunity to have an early child simply results in having another later on.
If this were fully true, utilitarians might decide that abortion actually has no QALY impact at all – all it does is change the timing of events. On the other hand, fertility declines with age, so many couples planning to have a replacement child later may be unable to do so. Also, some people do not have ideal family size plans.
Additionally, this does not really seem to hold when the alternative is adoption; presumably a woman putting a child up for adoption does not consider it as part of her family, so her future childbearing would be unaffected. This argument might hold if raising the child yourself was the only alternative, but given that adoption services are available, it does not seem to go through.
Autonomy
Sometimes people argue for the permissibility of abortion through autonomy arguments. "It is my body", such an argument would go, "therefore I may do whatever I want with it." To a certain extent this argument is addressed by pointing out that one's bodily rights presumably do not extend to killing others, so if the anti-abortion side are correct, or even have a non-trivial probability of being correct, autonomy would be insufficient. It seems that if the autonomy argument is to work, it must be because a different argument has established the non-personhood of fetuses – in which case the autonomy argument is redundant. Yet even putting this aside, this argument is less appealing to EAs than to non-EAs, because EAs often hold a distinctly non-libertarian account of personal ethics. We believe it is actually good to help people (and avoid hurting them), and perhaps that it is bad to avoid doing so. And many EAs are utilitarians, for whom helping/not-hurting is not merely praiseworthy but actually compulsory. EAs are generally not very impressed with Ayn Rand style autonomy arguments for rejecting charity, so again EAs should tend to be unsympathetic to autonomy arguments for the permissibility of abortion.
Indeed, some EAs even think we should be legally obliged to act in good ways, whether through laws against factory farming or tax-funded foreign aid.
Deontology
An argument often used on the opposite side – that is, to oppose abortion – is that abortion is murder, and murder is simply always wrong. Whether because God commanded it or Kant derived it, we should place the utmost importance on never murdering. I'm not sure that any EA principle directly pulls against this, but nonetheless most EAs are consequentialists, who believe that all values can be compared. If aborting one child would save a million others, most EAs would probably endorse the abortion. So I think this is one case where a common EA view pulls in favor of the permissibility of abortion.
I didn’t ask for this
Another argument often used for the permissibility of abortion is that the situation is in some sense unfair. If one did not intend to become pregnant – perhaps even took precautions to avoid becoming so – but nonetheless ends up pregnant, one is in some way not responsible for the pregnancy. And since one is not responsible for it, one has no obligations concerning it – so one may permissibly abort the fetus.
However, once again this runs counter to a major strand of EA thought. Most of us did not ask to be born in rich countries, or to be intelligent, or hardworking. Perhaps it was simply luck. Yet being in such a position nonetheless means we have certain opportunities and obligations. Specifically, we have the opportunity to use our wealth to significantly aid those less fortunate than ourselves in the developing world, and many EAs would say we have the obligation as well. So EAs seem to reject the general idea that not intending a situation relieves one of the responsibilities of that situation.
Infanticide is okay too
A frequent argument against the permissibility of aborting fetuses is by analogy to infanticide. In general it is hard to produce a coherent criterion that permits the killing of babies before birth but forbids it after birth. For most people, this is a reasonably compelling objection: murdering innocent babies is clearly evil! Yet some EAs actually endorse infanticide. If you are one of those people, this particular argument will have little sway over you.
Moral Universalism
A common implicit premise in many moral discussions is that the same moral principles apply to everyone. When Sarah did her QALY calculation, she counted the baby's QALYs as equally important to her own in the scenario where they counted at all. Similarly, both sides of the debate assume that whatever the answer is, it will apply fairly broadly. Perhaps permissibility varies by the age of the fetus – maybe ending when viability is reached – but the same answer will apply to rich and poor, Christian and Jew, etc.
This is something some EAs might reject. Yes, saving the baby produces many more QALYs than Sarah loses through the pregnancy, and that would be the end of the story if Sarah were simply an ordinary person. But Sarah is an EA, and so has a much higher opportunity cost for her time. Becoming pregnant will undermine her career as an investment banker, the argument would go, which in turn prevents her from donating to AMF and saving a great many lives. Because of this, Sarah is in a special position – it is permissible for her, but it would not be permissible for someone who wasn’t saving many lives a year.
I think this is a pretty repugnant attitude in general, and a particularly objectionable instance of it, but I include it here for completeness.
May we discuss this?
Now that we've considered these arguments, it appears that applying general EA principles to the issue tends to make abortion look less morally permissible, though there were one or two exceptions. But there is also a second-order issue that we should perhaps address – is it permissible to discuss this issue at all?
Nothing to do with you
A frequently seen argument on this issue is to claim that the speaker has no right to opine on it: if it doesn't personally affect you, you cannot discuss it – especially if you're privileged. As many (a majority?) of EAs are male, and many of the women are not pregnant, this would dramatically curtail the ability of EAs to discuss abortion. This is not so much an argument for one side or the other of the issue as an argument for silence.
Leaving aside the inherent virtues and vices of this argument, it is not very suitable for EAs, because EAs have many opinions on topics that don't directly affect them:
- EAs have opinions on disease in Africa, yet most have never been to Africa, and never will
- EAs have opinions on (non-human) animal suffering, yet most are not non-human animals
- EAs have opinions on the far future, yet live in the present
Indeed, EAs seem more qualified than most to comment on abortion – we all were once fetuses, and many of us will become pregnant. If taken seriously, this argument would call foul on virtually every EA activity! And this is no idle fantasy – there are certainly some people who think that Westerners cannot usefully contribute to solving African poverty.
Too controversial
We can safely say this is a somewhat controversial issue. Perhaps it is too controversial – maybe it is bad for the movement to discuss. One might accept the arguments above – that EA principles generally undermine the traditional reasons for thinking abortion is morally permissible – yet think we should not talk about it. The controversy might divide the community and undermine trust. Perhaps it might deter newcomers. I’m somewhat sympathetic to this argument – I take the virtue of silence seriously, though eventually my boyfriend persuaded me it was worth publishing.
Note that the controversial nature of the issue is itself evidence against abortion's moral permissibility, due to moral uncertainty.
However, the EA movement is no stranger to controversy.
- There is a semi-official EA position on immigration, which is about as controversial as abortion in the US at the moment, and the EA position is such an extreme position that essentially no mainstream politicians hold it.
- There is a semi-official EA position on vegetarianism, which is pretty controversial too, as it involves implying that the majority of Americans are complicit in murder every day.
Not worthy of discussion
Finally, another objection to discussing this is that it simply isn't an EA issue. There are many disagreements in the world, yet there is no need for an EA view on each. Conflict between the Lilliputians and Blefuscudians notwithstanding, there is no need for an EA perspective on which end of the egg to break first. And we should be especially careful of heated, emotional topics with less avenue to pull the rope sideways. As such, even if the object-level arguments given above are correct, we should simply decline to discuss the issue.
However, it seems that if abortion is a moral issue, it is a very large one. In the same way that the sheer number of QALYs lost makes abortion worse than adoption even if our credence in fetuses having moral significance is very low, the large number of abortions occurring each year makes the issue as a whole highly significant. In 2011, over 1 million babies were aborted in the US. I've seen a wide range of global estimates, from around 10 million to over 40 million. By contrast, the WHO estimates there are fewer than 1 million malaria deaths worldwide each year. Abortion deaths also involve a higher loss of QALYs, due to the young age at which they occur; on the other hand, we should discount them for the uncertainty that they are morally significant. And perhaps there is an even larger, closely related moral issue. The size of the issue is not the only factor in estimating the cost-effectiveness of interventions, but it is the most easily estimated. On the other hand, I have little idea how many dollars of donations it takes to save a fetus – it seems like an excellent example of low-hanging research fruit.
Conclusion
People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important. In this post we considered the implications of common EA beliefs for the permissibility of abortion. Taking moral uncertainty into account makes aborting a fetus seem far less permissible, as the high counterfactual life expectancy of the baby tends to dominate other factors. Many other EA views are also relevant to the issue, making various standard arguments on each side less plausible.
1. There doesn't seem to be any neutral language one can use here, so I'm just going to switch back and forth between 'fetus' and 'child' or 'baby' in a vain attempt at terminological neutrality.
2. I chose this reason because it is the most frequently cited main motivation for aborting a fetus, according to the Guttmacher Institute.
Steelmanning Inefficiency
When considering writing a hypothetical apostasy or steelmanning an opinion I disagreed with, I looked around for something worthwhile, both for me to write and others to read. Yvain/Scott has already steelmanned Time Cube, which cannot be beaten as an intellectual challenge, but probably didn't teach us much of general use (except in interesting dinner parties). I wanted something hard, but potentially instructive.
So I decided to steelman one of the anti-sacred cows (sacred anti-cows?) of this community, namely inefficiency. It was interesting to find that it was a little easier than I thought; there are a lot of arguments already out there (though they generally don't come out explicitly in favour of "inefficiency"), it was a question of collecting them, stretching them beyond their domains of validity, and adding a few rhetorical tricks.
The strongest argument
Let's start strong: efficiency is the single most dangerous thing in the entire universe. Then we can work down from that:
A superintelligent AI could go out of control and optimise the universe in ways that are contrary to human survival. Some people are very worried about this; you may have encountered them at some point. One big problem seems to be that there is no such thing as a "reduced impact AI": if we give a superintelligent AI a seemingly innocuous goal such as "create more paperclips", then it would turn the entire universe into paperclips. Even if it had a more limited goal such as "create X paperclips", then it would turn the entire universe into redundant paperclips, methods for counting the paperclips it has, or methods for defending the paperclips it has - all because these massive transformations allow it to squeeze just a little bit more expected utility from the universe.
The problem is one of efficiency: of always choosing the maximal outcome. The problem would go away if the AI could be content with almost accomplishing its goal, or with being almost certain that its goal was accomplished. Under those circumstances, "create more paperclips" could be a viable goal. It's only because a self-modifying AI drives towards efficiency that we have the problem in the first place. If the AI accepted being inefficient in its actions, even a little bit, the world would be much safer.
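As a toy sketch of the difference (purely illustrative, not anyone's actual AI design):

```python
# Toy contrast between a maximiser and a satisficer. 'actions' and
# 'utility' are stand-ins for an AI's options and its goal function.
def maximiser_choice(actions, utility):
    return max(actions, key=utility)  # always take the extreme option

def satisficer_choice(actions, utility, good_enough):
    for action in actions:
        if utility(action) >= good_enough:
            return action             # any acceptable option will do
    return max(actions, key=utility)  # maximise only as a last resort

actions = ["make 10 paperclips", "make 100 paperclips",
           "tile the universe with paperclips"]
utility = lambda a: 1e9 if "universe" in a else a.count("0") * 10
print(maximiser_choice(actions, utility))       # the universe-tiler
print(satisficer_choice(actions, utility, 10))  # 'make 10 paperclips'
```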
So the first strike against efficiency is that it's the most likely thing to destroy the world, humanity, and everything of worth and value in the universe. This could possibly give us some pause.
Prediction is hard, especially of medicine
Summary: medical progress has been much slower than even recently predicted.
In the February and March 1988 issues of Cryonics, Mike Darwin (Wikipedia/LessWrong) and Steve Harris published a two-part article “The Future of Medicine” attempting to forecast the medical state of the art for 2008. Darwin has republished it on the New_Cryonet email list.
Darwin is a pretty savvy forecaster (you will remember him correctly predicting ALCOR's recent troubles with grandfathering back in 1981, in “The High Cost of Cryonics”/part 2), so given my standing interest in tracking predictions, I read it with great interest; but they still blew most of their predictions, and not the ones we would have preferred them to blow.
The full essay is ~10k words, so I will excerpt roughly half of it below; feel free to skip to the reactions section and other links.
Prediction market sequence requested
Related to: Eliezer's Sequences and Mainstream Academia, Intellectual insularity and productivity, I Stand by the Sequences, Why don't people like markets?
Looking at some of the more recent arguments against them showing up in discussions, I've been quite disappointed; they seem to betray a lack of background knowledge, or opinions built up from a bottom line of "markets are baaad, therefore prediction markets are baaad". The casual arguments for them are lacking as well. I will say the same of other discussions of economics, since it is apparently suddenly too mind-killing or too political to talk about markets and similar things at all. We didn't use to have tribal alerts flying up in our brains when discussing such matters.
The Overcoming Bias community started with an assumption of certain kinds of background knowledge; this included economics and things like game theory. In the early days of LessWrong/Overcoming Bias, Eliezer did a whole sequence filling people in on quantum mechanics, which despite his claims to the contrary doesn't seem that vital (if still important).
We now have a different demographic than we used to. Not only that, we now have young people basically using the sequences as their primary source of education on matters of human rationality, quite different from the autodidacts exploring the literature on their own terms who were common in previous years. We've recognized this to a certain extent: we wrote a series of introductory sequences and articles to fill in such background knowledge explicitly, such as Yvain's recent one on game theory. Part of the reason we now have a norm of more citations than EY originally did is also to give people study and research aids. Indeed, I think adding comments with more citations to old articles, or editing citations in, would be wise so as to avoid misconceptions.
I think we need several sequences on economics, and a good one to start would be one systematically investigating prediction markets. To a certain extent just reading Robin Hanson's relevant posts on this topic would do much the same, but unfortunately we don't have an organized series of sequences by him (beyond the tags he uses on his articles). I still hope Karmakaiser or someone else will one day undertake a project of writing up summary articles that organize links to RH's posts into sequences so new members will read them as well.
I'd write these myself, but I just don't have a good background in which works and studies influenced the positions of early key LW authors on economics and its relevance to rationality. I'm also only beginning my studies in that area, since my background is in the hard sciences, with only some half-serious opinions formed from Moldbuggian insights and 20th-century social science.
Credence Calibration Icebreaker Game
The Aussie mega-meetup took place this past weekend. For it, a new kind of icebreaker was needed: one which was not merely fun and sociable, but also instilled with the Way. Thus was the Credence Calibration Icebreaker forged.
A marriage of the credence game and the classic icebreaker, ‘Say three things about yourself, one of them a lie’, the game allows players to learn about each other, test their ability to deceive and detect deception, and discover just how calibrated they are.
How to play
Playing instructions here: docx pdf. Scoring spreadsheet.
Each turn, a player makes three statements about themselves. One and only one of the statements must be intentionally untrue. All the other players assign to each statement a probability that it is the lie; these probabilities sum to 1: P(A') + P(B') + P(C') = 1. The game is scored in the same manner as the credence game, but with reference to 33% rather than 50%.
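For concreteness, here is a minimal sketch of the scoring, assuming the credence game's logarithmic rule relative to the 1/3 chance baseline; the constants in the actual scoring spreadsheet may differ.

```python
import math

# Assumed logarithmic scoring against the 1/3 chance baseline; the actual
# spreadsheet's constants may differ. p_on_lie is the probability a guesser
# assigned to the statement that turned out to be the lie.
def score(p_on_lie, baseline=1 / 3):
    return 100 * math.log2(p_on_lie / baseline)

print(score(1 / 3))  #    0.0: no better than chance
print(score(0.8))    # +126.3: confident and correct
print(score(0.1))    # -173.7: confident in the wrong statements
```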
The way we played it, a player would reveal which was the lie immediately after everyone else had assigned probabilities. The immediate feedback is more fun and allows players to recalibrate as they learn about their performance. Revealing which statements were lies at the end would require reminding everyone what the other statements were.
Many meetup groups have played the Aumann agreement game, where groups collectively assign credences to a collection of statements; however, that game requires the statements to be collected in advance, and once played, new statements must be collected for a new game. The credence calibration icebreaker has the advantage that players generate the statements themselves, allowing for easy replay.
Improvements
Restrictions should be placed on the nature of the lies in order to control which skills are tested. We played without restrictions, and most players generated a lie by altering a minor detail of a true statement in a way which didn't affect its plausibility, e.g. 'My father's brain is frozen'[1] vs. 'My uncle's brain is frozen'. This resulted in the game being less about appraising the plausibility of statements and more about detecting deception by tells and other clues.
Following the original icebreaker game, three statements were used. Reducing the number of statements to two would have the following benefits:
- The game is currently data entry intensive, requiring two numbers per question per player to be entered. Two statements would halve this number.
- Assigning probabilities of falsehood is counter-intuitive to many; with two statements, players could instead make the typical direct assignment of probability of truth.
- People find generating three statements difficult, two statements would reduce the effort.
Statistics
Various statistics are computed in the scoring spreadsheet. Results from our game showed a high correlation (0.72) between the number of lies correctly identified and score, and that players improved over the course of the game thanks to diminishing overconfidence.
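For anyone replicating the statistics, here is a minimal sketch; the player data below is hypothetical, standing in for the spreadsheet's columns.

```python
# Hypothetical per-player data standing in for the spreadsheet's columns.
import statistics

lies_identified = [5, 8, 3, 7, 6]        # correct guesses per player
scores = [12.0, 30.5, -4.0, 22.1, 15.3]  # credence-game score per player

# Pearson correlation; the post reports 0.72 for the actual game.
print(statistics.correlation(lies_identified, scores))
```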
1. True statement. As was 'I have three kidneys'.
Moving on from Cognito Mentoring
Back in December 2013, Jonah Sinick and I launched Cognito Mentoring, an advising service for intellectually curious students. Our goal was to improve the quality of learning, productivity, and life choices of the student population at large, and we chose to focus on intellectually curious students because of their greater potential as well as our greater ability to relate with that population. We began by offering free personalized advising. Jonah announced the launch in a LessWrong post, hoping to attract the attention of LessWrong's intellectually curious readership.
Since then, we feel we've done a fair amount, with a lot of help from LessWrong. We've published a few dozen blog posts and have an information wiki. Slightly under a hundred people contacted us asking us for advice (many from LessWrong), and we had substantive interactions with over 50 of them. As our reviews from students and parents suggest, we've made a good impression and have had a positive impact on many of the people we've advised. We're proud of what we've accomplished and grateful for the support and constructive criticism we've received on LessWrong.
However, what we've learned in the last few months has led us to the conclusion that Cognito Mentoring is not ripe for being a full-time work opportunity for the two of us.
For the last few months, we've eschewed regular jobs and instead done contract work that provides us the flexibility to work on Cognito Mentoring, eating into our savings somewhat to cover the cost of living differences. This is a temporary arrangement and is not sustainable. We therefore intend to scale back our work on Cognito Mentoring to "maintenance mode" so that people can continue to benefit from the resources we've already collected, with minimal additional effort on our part, freeing us up to take regular jobs with more demanding time requirements.
We might revive Cognito Mentoring as a part-time or full-time endeavor in the future if there are significant changes to our beliefs about the traction, impact, and long-run financial viability of Cognito Mentoring. Part of the purpose of "maintenance mode" will be to leave open the possibility of such a revival if the idea does indeed have potential.
In this post, I discuss some of the factors that led us to change our view, the conditions under which we might revive Cognito Mentoring, and more details about how "maintenance mode" for Cognito Mentoring will look.
Reason #1: Downward update on social value
We do think that the work we've done on Cognito Mentoring so far has generated social value, and the continued presence of the website will add more value over time. However, our view has shifted in the direction of lower marginal social value from working on Cognito Mentoring full-time, relative to simply keeping the website live and doing occasional work to improve it. Specifically:
- It's quite possible that the lowest-hanging fruit with respect to the advisees who would be most receptive to our advice has already been plucked. We received the bulk of our advisees through LessWrong within the month after our initial posting. Other places where we've posted about our service have led to fewer advisees (more here).
- Of our website content, only a small fraction of the content gets significant traction (see our list of popular pages), so honing and promoting our best content might be a better strategy for improving social value than trying to create a comprehensive resource. This can be done while in maintenance mode, and does not require full-time effort on our part.
What might lead us to change our minds: If we continue to be contacted by large numbers of potentially high-impact people, or we get evidence that the advising we've already done has had significantly greater impact than we think it did, we'll update our social value upward.
Reason #2: Downward update on long-run financial viability
We have enough cash to go on for a few more months. But for Cognito Mentoring to be something that we work full time on, we need an eventual steady source of income from it. Around mid-March 2014, we came to the realization that charging advisees is not a viable revenue source, as Jonah described at the end of his post about how Cognito Mentoring can do the most good (see also this comment by Luke Muehlhauser and Jonah's response to it below the comment). At that point, we decided to focus more on our informational content and on looking for philanthropic funding.
Our effort at looking into philanthropic funding did give us a few leads, and some of them could plausibly result in us getting small grants. However, none of the leads we got pointed to potential steady long-term income sources. In other words, we don't think philanthropic funding is a viable long-term revenue model for Cognito Mentoring.
Our (anticipated) difficulty in getting philanthropic funding arises from two somewhat different reasons.
- What we're doing is somewhat new and does not fit the standard mold of educational grants. Educational foundations tend to give grants for fairly specific activities, and what we're doing does not seem to fit those.
- We haven't demonstrated significant traction or impact yet (even though we've had a reasonable amount of per capita impact, the total number of people we've influenced so far is relatively small). This circles back to Reason #1: funders' reluctance to fund us may in part stem from their belief that we won't have much social value, given our lack of traction so far. Insofar as funders' judgment carries some information value, this should also strengthen Reason #1.
What might lead us to change our minds: If we are contacted by a funder who is willing to bankroll us for over a year and also offer a convincing reason for why he/she thinks bankrolling us is a good idea (so that we're convinced that our funding can be sustained beyond a year) we'll change our minds.
Reason #3: Acquisition of knowledge and skills
One of the reasons we've been able to have an impact through Cognito Mentoring so far is that both Jonah and I have knowledge of many diverse topics related to the questions that our advisees have posed to us. But our knowledge is still woefully inadequate in a number of areas. In particular, many advisees have asked us questions in the realms of technology, entrepreneurship, and the job environment, and while we have pointed them to resources on these, firsthand experience, or close secondhand experience, would help us more effectively guide advisees. We intend to take jobs related to computer technology (in fields such as programming or data science), and these jobs might be at startups or put us in close contact with startups. This will better position us to return to mentoring later if we choose to resume it part-time or full-time.
Knowledge and skills we acquire working in the technology sector could also help us design better interfaces or websites that can more directly address the needs of our audience. So far, we've thought of ourselves as content-oriented people, so we've used standard off-the-shelf software such as WordPress (for our main website and blog) and MediaWiki (for our information wiki). Part of the reason is that we wanted to focus on content creation rather than interface design, but part of the reason we've stuck to these is that we didn't think we could design interfaces. Once we've acquired more programming and design experience, we might be more open to the idea of designing interfaces and software that can meet particular needs of our target audience. We might design an interface that helps people study more effectively, make better life decisions, or share reviews of courses and colleges, in a manner similar to software or websites such as Anki, Beeminder, or Goodreads. There might also be potential for a more effective online resource for teaching programming than those in existence (e.g. Codecademy). It's not clear right now whether there exists a useful opportunity of this sort that we are particularly well-suited to, but with more coding experience, we'll at least be able to implement an idea of this sort if we decide it has promise.
Reason #4: Letting it brew in the background can give us a better idea of the potential
If we continue to gradually add content to the wiki, and continue to get links and traffic to it from other sources, it's likely that the traffic will grow slowly and steadily. The extent of organic growth will help us figure out how much promise Cognito Mentoring has. If our wiki gets to the point of steadily receiving thousands of pageviews a day, we will reconsider reviving Cognito Mentoring as a part-time or full-time endeavor. If, on the other hand, traffic remains at approximately the current level (about a hundred pageviews a day, once we exclude spikes arising from links from LessWrong and Marginal Revolution) then the idea is probably not worth revisiting, and we'll leave it in maintenance mode.
In addition, by maintaining contact with the people we've advised, we can get more insight into the sort of impact we've had, whether it is significant over the long term, and how it can be improved. This again can tell us whether our impact is sufficiently large as to make Cognito Mentoring worth reviving.
What "maintenance mode" entails
- We'll continue to have contact information available, but will scale back on personalized advising: People are welcome to contact us with questions and suggestions about content, but we will not generally offer detailed personalized responses or do research specific to individuals who contact us. We'll attempt to point people to relevant content we've already written, or to other resources we're already aware of that can address their concerns.
- The information wiki will remain live, and we will continue to make occasional improvements, but we won't have a time schedule of when particular improvements have to be implemented by.
- Existing blog posts will remain, but we probably won't be making many new blog posts. New blog posts will happen only if one of us has an idea that really seems worth sharing and for which the Cognito Mentoring blog is an ideal forum.
- We'll continue our administrative roles in the communities of existing Cognito Mentoring advisees.
- We'll continue periodically reviewing the progress of people we've advised so far: This will help us get a better sense of how valuable our work has been, and can be useful should we choose to revive Cognito Mentoring.
- We'll continue to correspond with advisees we have so far (time permitting), though we'll give more priority to advisees who continue to maintain contact of their own accord and those whose activities seem to have higher impact potential.
- We'll try to get our best content linked from other sources, such as about.com: Sources like about.com are targeted at the general population. We can try to get linked to from there as an additional resource for the more intellectually curious population that's outside the core focus of about.com.
- We'll link more extensively to other sources that people can use: For instance, we can more emphatically point to 80,000 Hours for people who are interested in career advising in relation to effective altruist pursuits. We can point to about.com and College Confidential for more general information about mainstream institutions. We already make a number of recommendations on our website, but as we stop working actively, it becomes all the more important that people who come to us are appropriately redirected to other sources that can help them.
Conclusion and summary (TL;DR)
We (qua Cognito Mentoring) are grateful to LessWrong for being welcoming of our posts, offering constructive criticism, and providing us with some advisees we've enjoyed working with. We think that the work we've done has value, but don't think that there's enough marginal value from full-time work on Cognito Mentoring. We think we can do more good for ourselves and the world by switching Cognito Mentoring to maintenance mode and freeing our time currently spent on Cognito Mentoring for other pursuits. The material that we have already produced will continue to remain in the public domain and we hope that people will benefit from it. We may revisit our "maintenance mode" decision if new evidence changes our view regarding traction, impact, and long-run financial viability.
Truth: It's Not That Great
Rationality is pretty great. Just not quite as great as everyone here seems to think it is.
The folks most vocal about loving "truth" are usually selling something. For preachers, demagogues, and salesmen of all sorts, the wilder their story, the more they go on about how they love truth...
The people who just want to know things because they need to make important decisions, in contrast, usually say little about their love of truth; they are too busy trying to figure stuff out.
-Robin Hanson, "Who Loves Truth Most?"
A couple weeks ago, Brienne made a post on Facebook that included this remark: "I've also gained a lot of reverence for the truth, in virtue of the centrality of truth-seeking to the fate of the galaxy." But then she edited to add a footnote to this sentence: "That was the justification my brain originally threw at me, but it doesn't actually quite feel true. There's something more directly responsible for the motivation that I haven't yet identified."
I saw this, and commented:
<puts rubber Robin Hanson mask on>
What we have here is a case of subcultural in-group signaling masquerading as something else. In this case, proclaiming how vitally important truth-seeking is is a mark of your subculture. In reality, the truth is sometimes really important, but sometimes it isn't.
</rubber Robin Hanson mask>
In spite of the distancing pseudo-HTML tags, I actually believe this. When I read some of the more extreme proclamations of the value of truth that float around the rationalist community, I suspect people are doing in-group signaling—or perhaps conflating their own idiosyncratic preferences with rationality. As a mild antidote to this, when you hear someone talking about the value of the truth, try seeing if the statement still makes sense if you replace "truth" with "information."
This standard gives its stamp of approval to many statements about the value of truth. After all, information is pretty damn valuable. But statements like "truth-seeking is central to the fate of the galaxy" look a bit suspicious. Is information-gathering central to the fate of the galaxy? You could argue that statement is kinda true if you squint at it right, but really it's too general. Surely it's not just any information that's central to shaping the fate of the galaxy, but information about specific subjects, and even then there are tradeoffs to make.
This is an example of why I suspect "effective altruism" may be better branding for a movement than "rationalism." The "rationalism" branding encourages the meme that truth-seeking is great, so we should do lots and lots of it, because truth is so great. The effective altruism movement, on the other hand, recognizes that while gathering information about the effectiveness of various interventions is important, there are tradeoffs to be made between spending time and money on gathering information vs. just doing whatever currently seems likely to have the greatest direct impact. Recognize information is valuable, but avoid analysis paralysis.
Or, consider statements like:
- Some truths don't matter much.
- People often have legitimate reasons for not wanting others to have certain truths.
- The value of truth often has to be weighed against other goals.
Do these statements sound heretical to you? But what about:
- Information can be perfectly accurate and also worthless.
- People often have legitimate reasons for not wanting other people to gain access to their private information.
- A desire for more information often has to be weighed against other goals.
I struggled to write the first set of statements, though I think they're right on reflection. Why do they sound so much worse than the second set? Because the word "truth" carries powerful emotional connotations that go beyond its literal meaning. This isn't just true for rationalists—there's a reason religions have sayings like, "God is Truth" or "I am the way, the truth, and the life." "God is Facts" or "God is Information" don't work so well.
There's something about "truth"—how it readily acts as an applause light, a sacred value which must not be traded off against anything else. As I type that, a little voice in me protests "but truth really is sacred"... but if we can't say there's some limit to how great truth is, hello affective death spiral.
Consider another quote, from Steven Kaas, that I see frequently referenced on LessWrong: "Promoting less than maximally accurate beliefs is an act of sabotage. Don't do it to anyone unless you'd also slash their tires, because they're Nazis or whatever." Interestingly, the original blog post included a caveat—"we may have to count everyday social interactions as a partial exception"—which I never see quoted. That aside, the quote has always bugged me. I've never had my tires slashed, but I imagine it ruins your whole day. On the other hand, having less than maximally accurate beliefs about something could ruin your whole day, but it could very easily not, depending on the topic.
Furthermore, sometimes sharing certain information doesn't just have little benefit, it can have substantial costs, or at least substantial risks. It would seriously trivialize Nazi Germany's crimes to compare it to the current US government, but I don't think that means we have to promote maximally accurate beliefs about ourselves to the folks at the NSA. Or, when negotiating over the price of something, are you required to promote maximally accurate beliefs about the highest price you'd be willing to pay, even if the other party isn't willing to reciprocate and may respond by demanding that price?
Private information is usually considered private precisely because it has limited benefit to most people, but sharing it could significantly harm the person whose private information it is. A sensible ethic around information needs to be able to deal with issues like that. It needs to be able to deal with questions like: is this information that is in the public interest to know? And is there a power imbalance involved? My rule of thumb is: secrets kept by the powerful deserve extra scrutiny, but so conversely do their attempts to gather other people's private information.
"Corrupted hardware"-type arguments can suggest you should doubt your own justifications for deceiving others. But parallel arguments suggest you should doubt your own justifications for feeling entitled to information others might have legitimate reasons for keeping private. Arguments like, "well truth is supremely valuable," "it's extremely important for me to have accurate beliefs," or "I'm highly rational so people should trust me" just don't cut it.
Finally, being rational in the sense of being well-calibrated doesn't necessarily require making truth-seeking a major priority. Using the evidence you have well doesn't necessarily mean gathering lots of new evidence. Often, the alternative to knowing the truth is not believing falsehood, but admitting you don't know and living with the uncertainty.
European Community Weekend 2014 retrospective
So finally – two weeks after the first European LessWrong Community Weekend – we want to share the organizers' perception of the event, including a short overview of what went well, what did not, and what exceeded our expectations.
First and foremost we thank all the participants and speakers for helping us in making this such a great weekend. We had an incredible time and are very happy everything worked out as well as it did. In our opinion the event was a great success! Meeting everyone was excellent and we look forward to running a similar yet improved event in the future.
Questions to ask theist philosophers? I will soon be speaking with several
I am about to graduate from one of the only universities in the world that has a high concentration of high-caliber analytic philosophers who are theists. (Specifically, the University of Notre Dame, IN) So as not to miss this once-in-a-lifetime opportunity, I have sent out emails asking many of them if they would like to meet and discuss their theism with me. Several of them have responded already in the affirmative; fingers crossed for the rest. I'm really looking forward to this because these people are really smart, and have spent a lot of time thinking about this, so I expect them to have interesting and insightful things to say.
Do you have suggestions for questions I could ask them? My main question will of course be "Why do you believe in God?" and variants thereof, but it would be nice if I could say e.g. "How do you avoid the problem of X which is a major argument against theism?"
Questions I've already thought of:
1-Why do you believe in God?
2-What are the main arguments in favor of theism, in your opinion?
3-What about the problem of evil? What about objective morality: how do you make sense of it, and if you don't, then how do you justify God?
4-What about divine hiddenness? Why doesn't God make himself more easily known to us? For example, he could regularly send angels to deliver philosophical proofs on stone tablets to doubters.
5-How do you explain God's necessary existence? What about the "problem of many Gods," i.e. why can't people say the same thing about a slightly different version of God?
6-In what sense is God the fundamental entity, the uncaused cause, etc.? How do you square this with God's seeming complexity? (he is intelligent, after all) If minds are in fact simple, then how is that supposed to work?
I welcome more articulate reformulations of the above, as well as completely new ideas.
Siren worlds and the perils of over-optimised search
tl;dr An unconstrained search through possible future worlds is a dangerous way of choosing positive outcomes. Constrained, imperfect or under-optimised searches work better.
Some suggested methods for designing AI goals, or controlling AIs, involve unconstrained searches through possible future worlds. This post argues that this is a very dangerous thing to do, because of the risk of being tricked by "siren worlds" or "marketing worlds". The thought experiment starts with an AI designing a siren world to fool us, but that AI is not crucial to the argument: it's simply an intuition pump to show that siren worlds can exist. Once they exist, there is a non-zero chance of us being seduced by them during an unconstrained search, whatever the search criteria are. This is a feature of optimisation: satisficing and similar approaches don't have the same problems.
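As a toy sketch of why optimisation specifically is the problem (made-up numbers, purely illustrative): a single siren candidate with a deceptively high apparent score is found by argmax with certainty, while a satisficing search that picks any acceptable-looking candidate almost always avoids it.

```python
import random

# Toy model: each candidate world has an apparent score (what inspection
# shows) and a hidden true value. One siren world games the inspection.
worlds = [{"apparent": random.uniform(0, 100), "true": random.uniform(0, 100)}
          for _ in range(10000)]
siren = {"apparent": 1000.0, "true": -1000.0}  # irresistible but hideous
worlds.append(siren)

# Unconstrained optimisation: always seduced by the siren.
best = max(worlds, key=lambda w: w["apparent"])
print(best is siren)  # True

# Satisficing: pick at random among the acceptable-looking worlds.
acceptable = [w for w in worlds if w["apparent"] >= 90]
print(random.choice(acceptable) is siren)  # almost always False
```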
The AI builds the siren worlds
Imagine that you have a superintelligent AI that's not just badly programmed, or lethally indifferent, but actually evil. Of course, it has successfully concealed this fact, as "don't let humans think I'm evil" is a convergent instrumental goal for all AIs.
We've successfully constrained this evil AI in an Oracle-like fashion. We ask the AI to design future worlds and present them to human inspection, along with an implementation pathway to create those worlds. Then if we approve of those future worlds, the implementation pathway will cause them to exist (assume perfect deterministic implementation for the moment). The constraints we've programmed mean that the AI will do all these steps honestly. Its opportunity to do evil is limited exclusively to its choice of worlds to present to us.
The AI will attempt to design a siren world: a world that seems irresistibly attractive while concealing hideous negative features. If the human mind is hackable in the crude sense - maybe through a series of coloured flashes - then the AI would design the siren world to be subtly full of these hacks. It might be that there is some standard of "irresistibly attractive" that is actually irresistibly attractive: the siren world would be full of genuine sirens.
Even without those types of approaches, there's so much manipulation the AI could indulge in. I could imagine myself (and many people on Less Wrong) falling for the following approach: