Discussion of concrete near-to-middle term trends in AI

13 Punoxysm 08 February 2015 10:05PM

Instead of prognosticating on AGI/Strong AI/Singularities, I'd like to discuss more concrete advancements to expect in the near-term in AI. I invite those who have an interest in AI to discuss predictions or interesting trends they've observed.

This discussion should be useful for anyone looking to research or work in companies involved in AI, and might guide longer-term predictions.

With that, here are my predictions for the next 5-10 years in AI. This is mostly straightforward extrapolation, so it won't excite those who know about these areas but may interest those who don't:

  • Speech Processing, the task of turning spoken words into text, will continue to improve until it is essentially a solved problem. Smartphones and even weaker devices will be capable of quite accurately transcribing heavily-accented speech in many languages and noisy environments. This is the simple continuation of the rapid improvements in speech processing that have brought us from Dragon NaturallySpeaking to Google Now and Siri.
  • Assistant and intent-based systems like Siri, which try to figure out the "intent" of your input and map a sentence onto one of the particular commands they are capable of, will become substantially more accurate and varied, and will take cues like tone and emphasis into account. So, for example, if you're looking for directions you won't have to repeat yourself in an increasingly loud, slow and annoyed voice; you'll be able to phrase your requests naturally and conversationally. New tasks like "Should I get this rash checked out?" will be available. A substantial degree of personalization and use of your personal history might also allow "show me something funny/sad/stimulating [from the internet]".
  • Natural language processing, the task of parsing the syntax and semantics of language, will improve substantially. Look at the list of traditional tasks with standard benchmarks on Wikipedia: every one of these tasks will see a several-percentage-point improvement, particularly in the understudied area of informal text (chat logs, tweets, anywhere grammar and vocabulary are less rigorous). It won't get so good that it can be confused with solving the AI-complete aspects of NLP, but it will allow vast improvements in text mining and information extraction. For instance, search queries like "What papers are critical of VerHoeven and Michaels '08" or "Summarize what Twitter thinks of the 2018 Super Bowl" will be answerable. Open source libraries (NLTK, CoreNLP) will continue to improve from their current just-above-boutique state. Medical diagnosis based on analysis of medical texts will be a major area of research; large-scale analysis of scientific literature in areas where it is difficult for researchers to read all relevant texts will be another. Machine translation will not be ready for most diplomatic business, but it will be very good across a wide variety of languages.
  • Computer Vision, interpreting the geometry and contents of images and video, will undergo tremendous advances. In fact, it already has in the past 5 years, but now it makes sense for major efforts, academic, military and industrial, to try to integrate different modules that have been developed for subtasks like object recognition, motion/gesture recognition, segmentation, etc. I think the single biggest impact this will have will be as the foundation for robotics development, since much of the arduous work of interpreting sensor input will be partly taken care of by excellent vision libraries. Those general foundations will make it easy to program specialist tasks (like differentiating weeds from crops in an image, or identifying activity associated with crime in a video). This will be complemented by a general proliferation of cheap, high-quality cameras and other sensors. Augmented reality also rests on computer vision, and the promise of the most fanciful tech demo videos will be realized in practice.
  • Robotics will advance rapidly. The foundational factors of computer vision, growing availability of cheap platforms, and fast progress on tasks like motion planning and grasping have the potential to fuel an explosion of smarter industrial and consumer robots that can perform more complex and unpredictable tasks than most current robots. Prototype ideas like search-and-rescue robots, more complex drones, and autonomous vehicles will come to fruition (though 10 years may be too short a time frame for ubiquity). Simpler robots with exotic chemical sensors will have important applications in medical and environmental research.

 

Attempted Telekinesis

82 AnnaSalamon 07 February 2015 06:53PM

Related to: Compartmentalization in epistemic and instrumental rationality; That other kind of status.

Summary:  I’d like to share some techniques that made a large difference for me, and for several other folks I shared them with.  They are techniques for reducing stress, social shame, and certain other kinds of “wasted effort”.  These techniques are less developed and rigorous than the techniques that CFAR teaches in our workshops -- for example, they currently only work for perhaps 1/3rd of the dozen or so people I’ve shared them with -- but they’ve made a large enough impact for that 1/3rd that I wanted to share them with the larger group.  I’ll share them through a sequence of stories and metaphors, because, for now, that is what I have.

continue reading »

Some recent evidence against the Big Bang

6 JStewart 07 January 2015 05:06AM

I am submitting this on behalf of MazeHatter, who originally posted it here in the most recent open thread. Go there to upvote if you like this submission.

Begin MazeHatter:

I grew up thinking that the Big Bang was the beginning of it all. In 2013 and 2014 a good number of observations have thrown some of our basic assumptions about the theory into question. There were anomalies observed in the CMB, previously ignored, now confirmed by Planck:

Another is an asymmetry in the average temperatures on opposite hemispheres of the sky. This runs counter to the prediction made by the standard model that the Universe should be broadly similar in any direction we look.

Furthermore, a cold spot extends over a patch of sky that is much larger than expected.

The asymmetry and the cold spot had already been hinted at with Planck’s predecessor, NASA’s WMAP mission, but were largely ignored because of lingering doubts about their cosmic origin.

“The fact that Planck has made such a significant detection of these anomalies erases any doubts about their reality; it can no longer be said that they are artefacts of the measurements. They are real and we have to look for a credible explanation,” says Paolo Natoli of the University of Ferrara, Italy.

... One way to explain the anomalies is to propose that the Universe is in fact not the same in all directions on a larger scale than we can observe. ...

“Our ultimate goal would be to construct a new model that predicts the anomalies and links them together. But these are early days; so far, we don’t know whether this is possible and what type of new physics might be needed. And that’s exciting,” says Professor Efstathiou.

http://www.esa.int/Our_Activities/Space_Science/Planck/Planck_reveals_an_almost_perfect_Universe

We are also getting a better look at galaxies at greater distances; we expected they would all be young galaxies, and are finding they are not:

The finding raises new questions about how these galaxies formed so rapidly and why they stopped forming stars so early. It is an enigma that these galaxies seem to come out of nowhere.

http://carnegiescience.edu/news/some_galaxies_early_universe_grew_quickly

http://mq.edu.au/newsroom/2014/03/11/granny-galaxies-discovered-in-the-early-universe/

The newly classified galaxies are striking in that they look a lot like those in today's universe, with disks, bars and spiral arms. But theorists predict that these should have taken another 2 billion years to begin to form, so things seem to have been settling down a lot earlier than expected.

B. D. Simmons et al. Galaxy Zoo: CANDELS Barred Disks and Bar Fractions. Monthly Notices of the Royal Astronomical Society, 2014 DOI: 10.1093/mnras/stu1817

http://www.sciencedaily.com/releases/2014/10/141030101241.htm

The findings cast doubt on current models of galaxy formation, which struggle to explain how these remote and young galaxies grew so big so fast.

http://www.nasa.gov/jpl/spitzer/splash-project-dives-deep-for-galaxies/#.VBxS4o938jg

Though it seems we don't have to look so far away to find evidence that galaxy formation is inconsistent with the Big Bang timeline:

If the modern galaxy formation theory were right, these dwarf galaxies simply wouldn't exist.

Merrick and study lead Marcel Pawlowski consider themselves part of a small-but-growing group of experts questioning the wisdom of current astronomical models.

"When you have a clear contradiction like this, you ought to focus on it," Merritt said. "This is how progress in science is made."

http://www.natureworldnews.com/articles/7528/20140611/galaxy-formation-theories-undermined-dwarf-galaxies.htm

http://arxiv.org/abs/1406.1799

Another observation is that lithium abundances are way too low for the theory in other places, not just here:

A star cluster some 80,000 light-years from Earth looks mysteriously deficient in the element lithium, just like nearby stars, astronomers reported on Wednesday.

That curious deficiency suggests that astrophysicists either don't fully understand the big bang, they suggest, or else don't fully understand the way that stars work.

http://news.nationalgeographic.com/news/2014/09/140910-space-lithium-m54-star-cluster-science/

It also seems there is larger scale structure continually being discovered larger than the Big Bang is thought to account for:

"The first odd thing we noticed was that some of the quasars' rotation axes were aligned with each other -- despite the fact that these quasars are separated by billions of light-years," said Hutsemékers. The team then went further and looked to see if the rotation axes were linked, not just to each other, but also to the structure of the Universe on large scales at that time.

"The alignments in the new data, on scales even bigger than current predictions from simulations, may be a hint that there is a missing ingredient in our current models of the cosmos," concludes Dominique Sluse.

http://www.sciencedaily.com/releases/2014/11/141119084506.htm

D. Hutsemékers, L. Braibant, V. Pelgrims, D. Sluse. Alignment of quasar polarizations with large-scale structures. Astronomy & Astrophysics, 2014

Dr Clowes said: "While it is difficult to fathom the scale of this LQG, we can say quite definitely it is the largest structure ever seen in the entire universe. This is hugely exciting -- not least because it runs counter to our current understanding of the scale of the universe.

http://www.sciencedaily.com/releases/2013/01/130111092539.htm

These observations have been made just recently. It seems that in the 1980's, when I was first introduced to the Big Bang as a child, the experts in the field knew then there were problems with it, and devised inflation as a solution. And today, the validity of that solution is being called into question by those same experts:

In light of these arguments, the oft-cited claim that cosmological data have verified the central predictions of inflationary theory is misleading, at best. What one can say is that data have confirmed predictions of the naive inflationary theory as we understood it before 1983, but this theory is not inflationary cosmology as understood today. The naive theory supposes that inflation leads to a predictable outcome governed by the laws of classical physics. The truth is that quantum physics rules inflation, and anything that can happen will happen. And if inflationary theory makes no firm predictions, what is its point?

http://www.physics.princeton.edu/~steinh/0411036.pdf

What are the odds 2015 will be more like 2014 where we (again) found larger and older galaxies at greater distances, or will it be more like 1983?

Compartmentalizing: Effective Altruism and Abortion

23 Dias 04 January 2015 11:48PM

Cross-posted on my blog and the effective altruism forum with some minor tweaks; apologies if some of the formatting hasn't copied across. The article was written with an EA audience in mind but it is essentially one about rationality and consequentialism.

Summary: People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily, this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important – many standard arguments on both sides of moral issues like the permissibility of abortion are significantly undermined or otherwise affected by EA considerations, especially moral uncertainty.

A long time ago, Will wrote an article about how a key part of rationality was taking ideas seriously: fully exploring ideas, seeing all their consequences, and then acting upon them. This is something most of us do not do! I for one certainly have trouble. He later partially redacted it, and Anna has an excellent article on the subject, but at the very least decompartmentalizing is a very standard part of effective altruism.

Similarly, I think people selectively apply Effective Altruist (EA) principles. People are very willing to apply them in some cases, but when those principles would cut at a core part of the person’s identity – like requiring them to dress appropriately so they seem less weird – people are much less willing to take those EA ideas to their logical conclusion.

Consider your personal views. I’ve certainly changed some of my opinions as a result of thinking about EA ideas. For example, my opinion of bednet distribution is now much higher than it once was. And I’ve learned a lot about how to think about some technical issues, like regression to the mean. Yet I realized that I had rarely done a full 180  – and I think this is true of many people:

  • Many think EA ideas argue for more foreign aid – but did anyone come to this conclusion who had previously been passionately anti-aid?
  • Many think EA ideas argue for vegetarianism – but did anyone come to this conclusion who had previously been passionately carnivorous?
  • Many think EA ideas argue against domestic causes – but did anyone come to this conclusion who had previously been a passionate nationalist?

Yet this is quite worrying. Given the power and scope of many EA ideas, it seems that they should lead people to change their minds on issues where they had previously been very certain, and indeed emotionally involved.

Obviously we don't need to apply EA principles to everything – we can probably continue to brush our teeth without much reflection. But we probably should apply them to issues which are seen as very important: given the importance of the issues, any implications of EA ideas would probably be important implications.

Moral Uncertainty

In his PhD thesis, Will MacAskill argues that we should treat normative uncertainty in much the same way as ordinary positive uncertainty; we should assign credences (probabilities) to each theory, and then try to maximise the expected morality of our actions. He calls this idea ‘maximise expected choice-worthiness’, and if you’re into philosophy, I recommend reading the paper. As such, when deciding how to act we should give greater weight to the theories we consider more likely to be true, and also give more weight to theories that consider the issue to be of greater importance.

This is important because it means that a novel view does not have to be totally persuasive to demand our observance. Consider, for example, vegetarianism. Maybe you think there’s only a 10% chance that animal welfare is morally significant – you’re pretty sure they’re tasty for a reason. Yet if the consequences of eating meat are very bad in those 10% of cases (murder or torture, if the animal rights activists are correct), and the advantages are not very great in the other 90% (tasty, some nutritional advantages), we should not eat meat regardless. Taking into account the size of the issue at stake as well as probability of its being correct means paying more respect to ‘minority’ theories.
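The 'maximise expected choice-worthiness' rule is just a credence-weighted sum. Here is a minimal sketch in Python; the utility numbers are hypothetical, chosen only to illustrate the structure of the vegetarianism example (the essay gives no actual figures):

```python
# Maximise expected choice-worthiness: weight each moral theory's verdict
# by your credence in that theory, then compare options by the weighted sum.
def expected_choiceworthiness(credences, values):
    """credences: {theory: probability}; values: {theory: choice-worthiness of the act}."""
    return sum(credences[t] * values[t] for t in credences)

credences = {"animals_count": 0.10, "animals_dont_count": 0.90}

# Hypothetical choice-worthiness values for eating meat under each theory.
eat_meat = {"animals_count": -1000.0,   # comparable to murder/torture, if animals count
            "animals_dont_count": 1.0}  # modest taste and nutritional benefit otherwise
abstain = {"animals_count": 0.0, "animals_dont_count": 0.0}

print(round(expected_choiceworthiness(credences, eat_meat), 1))  # -99.1
print(expected_choiceworthiness(credences, abstain))             # 0.0
```

Even at only 10% credence in animals mattering, the large downside dominates the comparison, which is exactly the essay's point about paying respect to 'minority' theories.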

And this is more of an issue for EAs than for most people. Effective Altruism involves a group of novel moral premises, like cosmopolitanism, the moral imperative for cost-effectiveness and the importance of the far future. Each of these implies that our decisions are in some way very important, so even if we assign them only a small credence, their plausibility implies radical revisions to our actions.

One issue that Will touches on in his thesis is whether fetuses morally count. In the same way that we have moral uncertainty as to whether animals, or people in the far future, count, so too we have moral uncertainty as to whether unborn children are morally significant. Yes, many people are confident they know the correct answer – but there are many such people on each side of the issue. Given the degree of disagreement on the issue among philosophers, politicians and the general public, it seems like the perfect example of an issue where moral uncertainty should be taken into account – indeed Will uses it as a canonical example.

Consider the case of a pregnant woman, Sarah, wondering whether it is morally permissible to abort her child1. The alternative course of action she is considering is putting the child up for adoption. In accordance with the level of social and philosophical debate on the issue, she is uncertain as to whether aborting the fetus is morally permissible. If it's morally permissible, it's merely permissible – it's not obligatory. She follows the example from Normative Uncertainty and constructs the following table:

abortion table 1

In the best case scenario, abortion has nothing to recommend it, as adoption is also permissible. In the worst case, abortion is actually impermissible, whereas adoption is permissible. As such, adoption dominates abortion.

However, Sarah might not consider this representation as adequate. In particular, she thinks that now is not the best time to have a child, and would prefer to avoid it.2 She has made plans which are inconsistent with being pregnant, and prefers not to give birth at the current time. So she amends the table to take into account these preferences.

abortion table 2

Now adoption no longer strictly dominates abortion, because she prefers abortion to adoption in the scenario where it is morally permissible. As such, she turns to her credences: she finds the pro-choice arguments slightly more persuasive than the pro-life ones, so she assigns a 70% credence to abortion being morally permissible and a 30% credence to its being morally impermissible.

Looking at the table with these numbers in mind, intuitively it seems that again it's not worth the risk of abortion: a 70% chance of saving oneself inconvenience and temporary discomfort is not sufficient to justify a 30% chance of committing murder. But Sarah is unsatisfied with this unscientific comparison: it doesn't seem to have much of a theoretical basis, and she distrusts appeals to intuition in cases like this. What is more, Sarah is something of a utilitarian; she doesn't really believe in something being impermissible.

Fortunately, there's a standard tool for making inter-personal welfare comparisons: QALYs. We can convert the previous table into QALYs, with the moral uncertainty now being expressed as uncertainty as to whether saving fetuses generates QALYs. If it does, then it generates a lot: supposing she's at the end of her first trimester, if she doesn't abort the baby it has a 98% chance of surviving to birth, at which point its life expectancy is 78.7 in the US, for 0.98*78.7 = 77.126 QALYs. This calculation assigns no QALYs to the fetus's 6 months of existence between now and birth. If fetuses are not worthy of ethical consideration, then it accounts for 0 QALYs.

We also need to assign QALYs to Sarah. For an upper bound, being pregnant is probably not much worse than having both your legs amputated without medication, which carries a loss of 0.494 QALYs per year, so let's conservatively say 0.494 QALYs per year. She has an expected 6 months of pregnancy remaining, so we divide by 2 to get 0.247 QALYs. Women's Health Magazine gives the odds of maternal death during childbirth at 0.03% for 2013; we'll round up to 0.05% to take into account the risk of non-death injury. Women at 25 have a remaining life expectancy of around 58 years, so that's 0.05%*58 = 0.029 QALYs. In total that gives us an estimate of 0.276 QALYs. If the baby doesn't survive to birth, however, some of these costs will not be incurred, so the truth is probably slightly lower than this. All in all, 0.276 QALYs seems like a reasonably conservative figure.
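As a check on the arithmetic, the cost estimate above can be reproduced directly, using exactly the figures given in the text:

```python
# Sarah's QALY cost of carrying to term, using the figures from the text.
amputation_weight = 0.494          # assumed annual QALY loss: upper bound for pregnancy
months_remaining = 6
pregnancy_cost = amputation_weight * (months_remaining / 12)  # 0.247 QALYs

death_risk = 0.0005                # 0.03% maternal mortality, rounded up to 0.05% for injury
remaining_life_expectancy = 58     # years, for a 25-year-old woman
risk_cost = death_risk * remaining_life_expectancy            # 0.029 QALYs

total_cost = pregnancy_cost + risk_cost
print(round(total_cost, 3))  # 0.276
```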

Obviously you could refine these numbers a lot (for example, years of old age are likely to be at lower quality of life, there are some medical risks to the mother from aborting a fetus, etc.) but they’re plausibly in the right ballpark. They would also change if we used inherent temporal discounting, but probably we shouldn’t.

abortion table 3

We can then take into account her moral uncertainty directly, and calculate the expected QALYs of each action:

  • If she aborts the fetus, our expected QALYs are 70%x0 + 30%x(-77.126) = -23.138
  • If she carries the baby to term and puts it up for adoption, our expected QALYs are 70%(-0.247) + 30%(-0.247) = -0.247

Which again suggests that the moral thing to do is not to abort the baby. Indeed, the life expectancy at birth is so long that it quite easily dominates the calculation: Sarah would have to be extremely confident in rejecting the value of the fetus to justify aborting it. So, mindful of overconfidence bias, she decides to carry the child to term.
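The expected-QALY comparison can be sketched in a few lines of Python, deriving the fetal figure from the stated 98% survival-to-birth chance and 78.7-year US life expectancy at birth:

```python
# Expected QALYs of each option under moral uncertainty.
p_fetus_counts = 0.30            # credence that the fetus is morally significant
fetal_qalys = 0.98 * 78.7        # 77.126 expected QALYs of the fetus's life, if it counts
pregnancy_cost = 0.247           # direct QALY cost of carrying to term (from the table)

e_abortion = 0.70 * 0 + p_fetus_counts * (-fetal_qalys)
e_adoption = 0.70 * (-pregnancy_cost) + p_fetus_counts * (-pregnancy_cost)

print(round(e_abortion, 3))  # -23.138
print(round(e_adoption, 3))  # -0.247
```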

Indeed, we can show just how confident one would have to be in the fetus's lack of moral significance to justify aborting it. Here is a sensitivity table, showing credence in the moral significance of fetuses on the y axis, and the direct QALY cost of pregnancy on the x axis, for a wide range of possible values. The direct QALY cost of pregnancy is obviously bounded above by its limited duration. As is immediately apparent, one has to be very confident in fetuses lacking moral significance, and pregnancy has to be very bad, before aborting a fetus becomes even slightly QALY-positive. For moderate values, it is extremely QALY-negative.

abortion table 4
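A rough version of such a sensitivity table can be generated programmatically; the break-even credence is just the pregnancy cost divided by the fetal QALY figure. This is a sketch using the essay's numbers, not a reproduction of the original table:

```python
# Net expected QALYs of abortion relative to adoption, as a function of
# credence p in fetal moral significance and direct QALY cost c of pregnancy.
# Positive values favour abortion; negative values favour carrying to term.
fetal_qalys = 0.98 * 78.7   # 77.126

def net_benefit_of_abortion(p, c):
    # pregnancy cost avoided, minus the expected loss of the fetus's life
    return c - p * fetal_qalys

for p in (0.005, 0.01, 0.05, 0.30):            # credence (y axis)
    row = [round(net_benefit_of_abortion(p, c), 3) for c in (0.1, 0.3, 0.75)]
    print(p, row)

# The break-even credence even for a severe pregnancy (c = 0.75) is about
# 0.75 / 77.126, i.e. roughly 1% -- hence the lopsided table.
```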

Other EA concepts and their applications to this issue

Of course, moral uncertainty is not the only EA principle that could have bearing on the issue, and given that the theme of this blogging carnival, and this post, is things we’re overlooking, it would be remiss not to give at least a broad overview of some of the others. Here, I don’t intend to judge how persuasive any given argument is – as we discussed above, this is a debate that has been going without settlement for thousands of years – but merely to show the ways that common EA arguments affect the plausibility of the different arguments. This is a section about the directionality of EA concerns, not on the overall magnitudes.

Not really people

One of the most important arguments for the permissibility of abortion is that fetuses are in some important sense ‘not really people’. In many ways this argument resembles the anti-animal-rights argument that animals are also ‘not really people’. We already covered above the way that considerations of moral uncertainty undermine both these arguments, but it’s also noteworthy that in general the two views are mutually supporting (or mutually undermining, if both are false). Animal-rights advocates often appeal to the idea of an ‘expanding circle’ of moral concern. I’m skeptical of such an argument, but it seems clear that the larger your sphere, the more likely fetuses are to end up on the inside. The fact that, in the US at least, animal activists tend to be pro-abortion seems to be more of a historical accident than anything else. We could imagine alternative-universe political coalitions, where a “Defend the Weak; They’re morally valuable too” party faced off against an “Exploit the Weak; They just don’t count” party. In general, to the extent that EAs care about animal suffering (even insect suffering), EAs should tend to be concerned about the welfare of the unborn.

Not people yet

A slightly different common argument is that while fetuses will eventually be people, they’re not people yet. Since they’re not people right now, we don’t have to pay any attention to their rights or welfare right now. Indeed, many people make short sighted decisions that implicitly assign very little value to the futures of people currently alive, or even to their own futures – through self-destructive drug habits, or simply failing to save for retirement. If we don’t assign much value to our own futures, it seems very sensible to disregard the futures of those not even born. And even if people who disregarded their own futures were simply negligent, we might still be concerned about things like the non-identity problem.

Yet it seems that EAs are almost uniquely unsuited to this response. EAs do tend to care explicitly about future generations. We put considerable resources into investigating how to help them, whether through addressing climate change or existential risks. And yet these people have far less of a claim to current personhood than fetuses, who at least have current physical form, even if it is diminutive. So again to the extent that EAs care about future welfare, EAs should tend to be concerned about the welfare of the unborn.

Replaceability

Another important EA idea is that of replaceability. Typically this arises in contexts of career choice, but there is a different application here. The QALYs associated with aborted children might not be so bad if the mother will go on to have another child instead. If she does, the net QALY loss is much lower than the gross QALY loss. Of course, the benefits of aborting the fetus are equivalently much smaller – if she has a child later on instead, she will have to bear the costs of pregnancy eventually anyway. This resembles concerns that maybe saving children in Africa doesn’t make much difference, because their parents adjust their subsequent fertility.

The plausibility behind this idea comes from the idea that, at least in the US, most families have a certain ideal number of children in mind, and basically achieve this goal. As such, missing an opportunity to have an early child simply results in having another later on.

If this were fully true, utilitarians might decide that abortion actually has no QALY impact at all – all it does is change the timing of events. On the other hand, fertility declines with age, so many couples planning to have a replacement child later may be unable to do so. Also, some people do not have ideal family size plans.

Additionally, this does not really seem to hold when the alternative is adoption; presumably a woman putting a child up for adoption does not consider it as part of her family, so her future childbearing would be unaffected. This argument might hold if raising the child yourself was the only alternative, but given that adoption services are available, it does not seem to go through.

Autonomy

Sometimes people argue for the permissibility of abortion through autonomy arguments. “It is my body”, such an argument goes, “therefore I may do whatever I want with it.” To a certain extent this argument is addressed by pointing out that one’s bodily rights presumably do not extend to killing others, so if the anti-abortion side is correct, or even has a non-trivial probability of being correct, autonomy would be insufficient. It seems that if the autonomy argument is to work, it must be because a different argument has established the non-personhood of fetuses – in which case the autonomy argument is redundant. Yet even putting this aside, this argument is less appealing to EAs than to non-EAs, because EAs often hold a distinctly non-libertarian account of personal ethics. We believe it is actually good to help people (and to avoid hurting them), and perhaps that it is bad to avoid doing so. And many EAs are utilitarians, for whom helping/not-hurting is not merely praiseworthy but actually compulsory. EAs are generally not very impressed with Ayn Rand-style autonomy arguments for rejecting charity, so again EAs should tend to be unsympathetic to autonomy arguments for the permissibility of abortion.

Indeed, some EAs even think we should be legally obliged to act in good ways, whether through laws against factory farming or tax-funded foreign aid.

Deontology

An argument often used on the opposite side – that is, to oppose abortion – is that abortion is murder, and murder is simply always wrong. Whether because God commanded it or Kant derived it, we should place the utmost importance on never murdering. I’m not sure that any EA principle directly pulls against this, but nonetheless most EAs are consequentialists, who believe that all values can be compared. If aborting one child would save a million others, most EAs would probably endorse the abortion. So I think this is one case where a common EA view pulls in favor of the permissibility of abortion.

I didn’t ask for this

Another argument often used for the permissibility of abortion is that the situation is in some sense unfair. If you did not intend to become pregnant – perhaps you even took precautions to avoid becoming so – but nonetheless end up pregnant, you are in some way not responsible for the pregnancy. And since you are not responsible for it, you have no obligations concerning it – so you may permissibly abort the fetus.

However, once again this runs counter to a major strand of EA thought. Most of us did not ask to be born in rich countries, or to be intelligent, or hardworking. Perhaps it was simply luck. Yet being in such a position nonetheless means we have certain opportunities and obligations. Specifically, we have the opportunity to use our wealth to significantly aid those less fortunate than ourselves in the developing world, and many EAs would say we have the obligation as well. So EAs seem to reject the general idea that not intending a situation relieves one of the responsibilities of that situation.

Infanticide is okay too

A frequent argument against the permissibility of aborting fetuses is by analogy to infanticide. In general it is hard to produce a coherent criterion that permits the killing of babies before birth but forbids it after birth. For most people, this is a reasonably compelling objection: murdering innocent babies is clearly evil! Yet some EAs actually endorse infanticide. If you are one of those people, this particular argument will have little sway over you.

Moral Universalism

A common implicit premise in many moral discussions is that the same moral principles apply to everyone. When Sarah did her QALY calculation, she counted the baby’s QALYs as equally important to her own in the scenario where they counted at all. Similarly, both sides of the debate assume that whatever the answer is, it will apply fairly broadly. Perhaps permissibility varies by age of the fetus – maybe ending when viability hits – but the same answer will apply to rich and poor, Christian and Jew, etc.

This is something some EAs might reject. Yes, saving the baby produces many more QALYs than Sarah loses through the pregnancy, and that would be the end of the story if Sarah were simply an ordinary person. But Sarah is an EA, and so has a much higher opportunity cost for her time. Becoming pregnant will undermine her career as an investment banker, the argument would go, which in turn prevents her from donating to AMF and saving a great many lives. Because of this, Sarah is in a special position – it is permissible for her, but it would not be permissible for someone who wasn’t saving many lives a year.

I think this is a pretty repugnant attitude in general, and a particularly objectionable instance of it, but I include it here for completeness.

May we discuss this?

Now that we’ve considered these arguments, it appears that applying general EA principles to the issue tends to make abortion look less morally permissible, though there were one or two exceptions. But there is also a second-order issue that we should perhaps address – is it permissible to discuss this issue at all?

Nothing to do with you

A frequently seen argument on this issue is to claim that the speaker has no right to opine on it. If it doesn’t personally affect you, you cannot discuss it – especially if you’re privileged. As many (a majority?) of EAs are male, and many of the women are not pregnant, this would dramatically curtail the ability of EAs to discuss abortion. This is not so much an argument on one side or the other of the issue as an argument for silence.

Leaving aside the inherent virtues and vices of this argument, it is not very suitable for EAs, because EAs have many opinions on topics that don’t directly affect them:

  • EAs have opinions on disease in Africa, yet most have never been to Africa, and never will
  • EAs have opinions on (non-human) animal suffering, yet most are not non-human animals
  • EAs have opinions on the far future, yet live in the present

Indeed, EAs seem more qualified to comment on abortion – as we all were once fetuses, and many of us will become pregnant. If taken seriously, this argument would call foul on virtually every EA activity! And this is no idle fantasy – there are certainly some people who think that Westerners cannot usefully contribute to solving African poverty.

Too controversial

We can safely say this is a somewhat controversial issue. Perhaps it is too controversial – maybe it is bad for the movement to discuss. One might accept the arguments above – that EA principles generally undermine the traditional reasons for thinking abortion is morally permissible – yet think we should not talk about it. The controversy might divide the community and undermine trust. Perhaps it might deter newcomers. I’m somewhat sympathetic to this argument – I take the virtue of silence seriously, though eventually my boyfriend persuaded me it was worth publishing.

Note that the controversial nature is evidence against abortion’s moral permissibility, due to moral uncertainty.

However, the EA movement is no stranger to controversy.

  • There is a semi-official EA position on immigration, which is about as controversial as abortion in the US at the moment, and the EA position is such an extreme position that essentially no mainstream politicians hold it.
  • There is a semi-official EA position on vegetarianism, which is pretty controversial too, as it involves implying that the majority of Americans are complicit in murder every day.

Not worthy of discussion

Finally, another objection to discussing this is that it simply isn’t an EA issue. There are many disagreements in the world, yet there is no need for an EA view on each. Conflict between the Lilliputians and Blefuscudians notwithstanding, there is no need for an EA perspective on which end of the egg to break first. And we should be especially careful of heated, emotional topics with less avenue to pull the rope sideways. As such, even if the object-level arguments given above are correct, we should simply decline to discuss the issue.

However, it seems that if abortion is a moral issue, it is a very large one. In the same way that the sheer number of QALYs lost makes abortion worse than adoption even if our credence in fetuses having moral significance is very low, the large number of abortions occurring each year makes the issue as a whole highly significant. In 2011, over 1 million babies were aborted in the US. I’ve seen a wide range of global estimates, from around 10 million to over 40 million. By contrast, the WHO estimates there are fewer than 1 million malaria deaths worldwide each year. Abortion deaths also cause a higher loss of QALYs due to the young age at which they occur. On the other hand, we should discount them for the uncertainty that they are morally significant. And perhaps there is an even larger closely related moral issue. The size of the issue is not the only factor in estimating the cost-effectiveness of interventions, but it is the most easily estimable. On the other hand, I have little idea how many dollars of donations it takes to save a fetus – it seems like an excellent example of some low-hanging fruit for research.
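To make the scale comparison above concrete, here is a back-of-envelope sketch. The annual figures come from the text; the per-death QALY losses and the credence that fetuses are morally significant are purely illustrative assumptions, not claims:

```python
# Back-of-envelope scale comparison: abortion (discounted by moral
# uncertainty) vs. malaria. All inputs are rough, illustrative guesses.

# Rough annual figures from the text
global_abortions_low = 10_000_000   # low end of the global estimates cited
malaria_deaths = 1_000_000          # WHO: fewer than 1 million deaths/year

# Illustrative QALY losses per death (assumed values)
qalys_per_abortion = 70       # near a full life expectancy lost
qalys_per_malaria_death = 60  # malaria deaths also skew young

# Illustrative credence that fetuses are morally significant
credence = 0.1

expected_qalys_abortion = credence * global_abortions_low * qalys_per_abortion
qalys_malaria = malaria_deaths * qalys_per_malaria_death

print(expected_qalys_abortion)  # 70,000,000 expected QALYs lost per year
print(qalys_malaria)            # 60,000,000 QALYs lost per year
```

Even with a credence as low as 10% and the low-end global estimate, the expected QALY loss is comparable to that from malaria; the conclusion is of course very sensitive to the credence one plugs in.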

Conclusion

People frequently compartmentalize their beliefs and avoid addressing the implications between them. Ordinarily this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important. In this post we examined the implications of common EA beliefs for the permissibility of abortion. Taking into account moral uncertainty makes aborting a fetus seem far less permissible, as the high counterfactual life expectancy of the baby tends to dominate other factors. Many other EA views are also significant to the issue, making various standard arguments on each side less plausible.

 


  1. There doesn’t seem to be any neutral language one can use here, so I’m just going to switch back and forth between ‘fetus’ and ‘child’ or ‘baby’ in a vain attempt at terminological neutrality. 
  2. I chose this reason because it is the most frequently cited main motivation for aborting a fetus according to the Guttmacher Institute.