Is it immoral to have children?

15 jkaufman 22 October 2013 12:13PM

In "The Immorality of Having Children" (2013, pdf) Rachels presents the "Famine Relief Argument against Having Children":

Conceiving and raising a child costs hundreds of thousands of dollars; that money would be far better spent on famine relief; therefore, conceiving and raising children is immoral.

They present this as a special case of Peter Singer's argument from Famine, Affluence, and Morality (1972), which is why they haven't called it something more reasonable like the "Opportunity Cost Argument".

[Note: the use of "Famine Relief" here is in reference to Peter Singer's 1972 example, but famine relief is not where your money does the most good.  Treat the argument as "that money would be far better spent on GiveWell's top charities" or whatever organization you think is most effective.]

It's true that having and raising a child is very expensive. They use an estimate of $227k for the direct expenditure through age 18 while noting that college [1] and time costs could make this much higher. Let's use a higher estimate of $500k to account for these. Spread over twenty years, that's $25k/year or about $2k/month. This puts it at the top of the range of household expenses, next to housing. It's also true that this money can do a lot of good when spent on effective charities. At GiveWell's current best estimate of $2.3k per life saved, this is enough money to save nearly one life per month. [2]
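The arithmetic here is simple enough to sketch directly. All the dollar figures below are the estimates from this post, not authoritative data:

```python
# Rough arithmetic behind the numbers above (the post's estimates,
# not authoritative figures).
total_cost = 500_000   # high-end estimate for raising a child, dollars
years = 20
per_year = total_cost / years         # 25,000
per_month = per_year / 12             # about 2,083
cost_per_life = 2_300                 # GiveWell estimate cited above
lives_per_month = per_month / cost_per_life   # just over 0.9

print(per_year, round(per_month), round(lives_per_month, 2))
```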

But perhaps we shouldn't be thinking of this money as an expense at all, and instead more as an investment? Could having kids be a contender for the most effective charity? That is, could having and raising kids be one of the most effective things you could do with your time and money?

For example, you could convince your kid to be unusually generous, donating far more than they cost to raise. Except that it's much cheaper to convince other people's kids to be generous, and our influence on the adult behavior of our children is not that big. Alternatively, if you're unusually smart, by having kids you could increase the number of smart people in the future. But how many more generations will pass before we learn enough about the genetics of intelligence to make this aspect of parental genetics irrelevant? Rachels considers the idea that your having children might greatly benefit the world, and rightly finds it insufficient: while your child may do a lot of good, for the expense there are much better options. Having kids is not a contender for the most effective charity, or even very close.

Having kids is a special case of spending your time and money in ways that make you happy. A moral system for human beings needs to allow some amount of this. It's like working for $56k at a job you enjoy instead of getting $72k at a job you like less. [3] Or spending your free time reading instead of working extra hours building up a consulting business. Keeping in mind both the cost and that on average people don't seem to be happier parenting, if having kids is what would make you most happy for the expense in time and money then it seems justified.

(This is how Julia and I thought of it when deciding whether we should have kids.)

 

I also posted this on my blog.


[1] College is currently in a huge state of flux. Advertised costs are rising far faster than inflation as colleges realize they can get away with near perfect price discrimination in the form of "either pay the extremely high sticker price or give us all your financial data so we can determine exactly how much you can afford." At the same time online courses and mixed models are getting to where they can provide much of the value of traditional lecture courses, and in some ways do better. I have very little idea what to budget for college for a kid born now; likely costs range from "free" to "all you have".

[2] Rachels uses a much lower number:

Givewell.org, which assesses charities, estimates that a life is saved for every $205 spent on expanding immunization coverage for children in Sub-Saharan Africa—apparently one of the most cost-effective projects. See L. Brenzel et al. 2006, p. 401

Their Brenzel citation is to the Vaccine-Preventable Diseases section of the DCP2. The $205 number is "Estimated cost per death averted for the Traditional Immunization Program in Sub-Saharan Africa and South Asia" in table 20.5.

[3] This is a $16k difference, which comes from taking $500k over 20 years and dividing by two for the two parents, and then adding some for taxes.  Though the earnings difference is likely to last more like 40 years.

Does Checkers have simpler rules than Go?

14 jkaufman 13 August 2013 02:09AM

I've seen various contenders for the title of simplest abstract game that's interesting enough that a professional community could reasonably play it full time. While Go probably has the best ratio of interest to complexity, Checkers and Dots and Boxes might be simpler while remaining sufficiently interesting. [1] But is Checkers actually simpler than Go? If so, how much? How would we decide this?

Initially you might approach this by writing out rules. There's an elegant set for Go and I wrote some for Checkers, but English is a very flexible language. Perhaps my rules are underspecified? Perhaps they're overly verbose? It's hard to say.

A more objective test is to write a computer program that implements the rules. It needs to determine whether moves are valid, and identify a winner. The shorter the computer program, the simpler the rules of the game. This only gives you an upper bound on the complexity, because someone could come along and write a shorter one, but in general a shorter program is evidence that the shortest possible program is shorter too.

To investigate this, I wrote one for each of the three games. I wrote them quickly, and they're kind of terse, but they represent the rules as efficiently as I could manage. The one for Go is based on Tromp's definition of the rules while the other two implement the rules as they are in my head. This probably gives an advantage to Go, because those rules had a lot of care go into them, but I'm not sure how much of one.

The programs as written have some excess information, such as comments, vaguely friendly error messages, whitespace, and meaningful variable names. I took a jscompiler-like pass over them to remove as much of this as possible, making them nearly unreadable in the process. Then I ran them through a lossless compressor, gzip, and computed their sizes:

  • Checkers: 648 bytes
  • Dots and Boxes: 505 bytes
  • Go: 596 bytes

(The programs are on github. If you have suggestions for simplifying them further, send me a pull request.)
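The measurement itself can be sketched in a few lines. The minification pass below is a crude stand-in (it only handles Python-style comments, and would break on `#` inside strings), and the example source is a toy placeholder rather than one of the real game programs:

```python
# Sketch of the size measurement: strip comments and blank lines
# (a crude "jscompiler-like pass"), then gzip and count bytes.
import gzip
import re

def minified_gzip_size(source: str) -> int:
    lines = []
    for line in source.splitlines():
        # Naive comment stripping: breaks on '#' inside string literals.
        line = re.sub(r'#.*', '', line).strip()
        if line:
            lines.append(line)
    minified = '\n'.join(lines).encode('utf-8')
    return len(gzip.compress(minified))

example_src = "x = 1  # set x\n\ny = 2\n"  # toy stand-in for real game code
print(minified_gzip_size(example_src))
```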


[1] Go is the most interesting of the three, and has stood up to centuries of analysis and play, but Dots and Boxes is surprisingly complex (pdf) and there used to be professional Checkers players. (I'm having a remarkably hard time determining if there are still Checkers professionals.)

I also posted this on my blog.

Valuing Sentience: Can They Suffer?

6 jkaufman 29 July 2013 12:39PM

In the recent discussions here about the value of animals several people have argued that what matters is "sentience", or the ability to feel. This goes back to at least Bentham with "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?"

Is "can they feel pain" or "can they feel pleasure" really the right question, though? Let's say we research the biological correlates of pleasure until we understand how to make a compact and efficient network of neurons that constantly experiences maximum pleasure. Because we've thrown out nearly everything else a brain does, this has the potential for orders of magnitude more sentience per gram of neurons than anything currently existing. A group of altruists intend to create a "happy neuron farm" of these: is this valuable?  How valuable?

(Or say a supervillain is creating a "sad neuron farm". How important is it that we stop them?  Does it matter at all?)

The Argument From Marginal Cases

15 jkaufman 26 July 2013 01:30PM

The argument from marginal cases claims that you can't both think that humans matter morally and that animals don't, because no reasonable set of criteria for moral worth cleanly separates all humans from all animals. For example, perhaps someone says that suffering only matters when it happens to something that has some bundle of capabilities like linguistic ability, compassion, and/or abstract reasoning. If livestock don't have these capabilities, however, then some people such as very young children probably don't either.

This is a strong argument, and it avoids the noncentral fallacy. Any set of qualities you value is going to vary across people and animals, and if you arrange them on a continuum there's not going to be a place you can draw a line that falls above all animals and below all people. So why do I treat humans as the only entities that count morally?

If you asked me how many chickens I would be willing to kill to save your life, the answer is effectively "all of them". [1] This pins down two points on the continuum that I'm clear on: you and chickens. While I'm uncertain where along that continuum things start getting up to significant levels, I think it's probably somewhere that includes no or almost no animals but nearly all humans. Making this distinction among humans, however, would be incredibly socially destructive, especially given how unsure I am about where the line should go, so I think we end up with a much better society if we treat all humans as morally equal. This means I end up saying things like "value all humans equally; don't value animals" when that's not my real distinction, just the closest Schelling point.

 

[1] Chicken extinction would make life worse for many other people, so I wouldn't actually do that, but not because of the effect on the chickens.

I also posted this on my blog.

Consumption Smoothing and Hedonic Adaptation

5 jkaufman 19 July 2013 02:41PM

Because earning capacity increases with age while each additional dollar of spending brings less enjoyment, standard economic theory predicts people will smooth their spending by borrowing to live beyond their means when young and paying it back when they're older. The idea is that you have some lifetime spending to split among all your future selves, so instead of having you-at-50 enjoy $45K/year of self-spending while you-at-25 struggles with just $20K/year, you should each spend $30K/year. [1] In practice, however, we mostly don't see people doing this, and I think that's actually very reasonable.

We can approximate your earnings over the course of your life by looking at how much everyone makes now, broken down by age:


(source)

We see income rising sharply with age through the 20s, more slowly in the 30s, plateauing through the 40s and 50s, and then declining with retirement. Add to this a small amount of growth in real wages over time, about 8% over the last 40 years, and we can see that people in their 20s earn significantly less than they expect to earn over most of their lives.

The standard model of people is that spending money makes you happier, and the first dollar goes farther than the last (diminishing marginal utility).

Add temporal discounting and the fact that you can enjoy durable goods for longer if you buy them earlier, and we should expect to see people borrowing heavily in their 20s and paying it back as they get older, but we mostly don't. We do see some of this with buying houses, but in most other ways the typical 50-year-old is much less frugal than the typical 25-year-old. When we see young people living on borrowed money to support a lifestyle they expect to enjoy later in life, we generally mock them. [2]

One response is to say that people are behaving foolishly and should borrow more. Why be thrifty in your 20s but not in your 40s? Either you should continue your thriftiness into your 40s, spend more in your 20s, or some combination of the two. But this misses something important about human psychology: decreases in our standard of living are much more painful than increases are pleasant.

If you're earning a relatively small amount and living cheaply, and then earn more money and start living less frugally, this probably makes you happier. But if something then happens and you need to go back to living on less, you'll probably be much less happy than you were the first time around. Because individual incomes are much less predictable than cohort incomes, if you borrow a lot while young to consume at a higher level, you may be anticipating a future income that you never reach, or don't sustain once you do.

(This is why I try to be careful with luxuries.)

I also posted this on my blog.


[1] Why is that $30K instead of $32.5K, the average of $45K and $20K? To spend money you don't have yet you need to pay interest, which decreases the total amount you get to spend. But as long as the difference in enjoyment between $20K and $30K is larger than between $45K and $30K you still come out ahead.
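The footnote's condition can be checked with any sufficiently concave utility function. Here's a sketch using CRRA utility with risk-aversion coefficient 2, i.e. u(c) = -1/c; the choice of utility function is an illustrative assumption, not something from the post:

```python
# Compare lifetime utility with and without consumption smoothing,
# using CRRA utility with coefficient 2: u(c) = -1/c. The utility
# function is an illustrative assumption, not the post's.
def u(c):
    return -1.0 / c

unsmoothed = u(20) + u(45)  # $20K/year when young, $45K/year when old
smoothed = u(30) + u(30)    # borrow when young: $30K in both periods
                            # ($30K rather than $32.5K because interest
                            # eats part of the total)

print(smoothed > unsmoothed)  # prints True: smoothing comes out ahead
```

With a less concave utility function (say, square root) this particular comparison flips, which is part of why the argument is sensitive to exactly how fast marginal enjoyment declines.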

[2] Though in this case I think the author has other reasons to dislike their subject than consumption habits.

Prioritizing Happiness

1 jkaufman 06 July 2013 04:01PM

When the limiting resource is money it's quite clear that we should prioritize the uses where it goes the farthest. If there are three organizations that can distribute antimalarial nets for $5, $50, and $500 each, we should just give to the first one. Similarly, if I have $5 I could use it to have my electricity generated by wind or I could use it to fund distribution of an additional antimalarial net. I can't spend that $5 on both, so I have to choose, and I choose based on which I think will do more good with the money.

When the limiting resource is happiness, however, prioritization comes less naturally. I could stop taking warm showers, take the bus instead of driving, spend less to donate more, go vegan, donate a kidney, not run fans in summer, or do any of a very large number of things to make the world better at some cost to me. The more I do, the better, but the less happy I am. If I chose options without looking at how they trade off my happiness against benefit to others it would be like choosing what clothes to buy based on how much I would enjoy wearing them and not considering how much they cost.

I also posted this on my blog.

Is our continued existence evidence that Mutually Assured Destruction worked?

7 jkaufman 18 June 2013 02:40PM

The standard view of Mutually Assured Destruction (MAD) is something like:

During the cold war the US and USSR had weapons capable of immense destruction, but no matter how tense things got they never used them because they knew how bad that would be. While MAD is a terrifying thing, it did work, this time.

Occasionally people will reply with an argument like:

If any of several near-miss incidents had gone even slightly differently, both sides would have launched their missiles and we wouldn't be here today looking back. In a sense this was an experiment where the only outcome we could observe was success: nukes would have meant no observers, no nukes and we're still here. So we don't actually know how useful MAD was.

This is an anthropic argument, an attempt to handle the bias that comes from a link between outcomes and the number of people who can observe them. Imagine we were trying to figure out whether flipping "heads" was more likely than flipping "tails", but there was a coin demon that killed everyone if "tails" came up. Either we would see "heads" flipped, or we would see nothing at all. We're not able to sample from the "tails: everyone-dies" worlds. Even if the demon responds to tails by killing everyone only 40% of the time, we're still going to over-sample the happy-heads outcome.
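The coin-demon example can be simulated directly. The parameters below follow the paragraph above (a fair coin, with tails killing all observers 40% of the time):

```python
# Simulate the coin demon: tails kills all observers 40% of the time,
# so worlds with surviving observers over-sample heads.
import random

random.seed(0)
observed = []
for _ in range(100_000):
    flip = random.choice(['heads', 'tails'])
    if flip == 'tails' and random.random() < 0.4:
        continue  # everyone dies; no one is around to record this world
    observed.append(flip)

heads_fraction = observed.count('heads') / len(observed)
print(heads_fraction)  # close to 0.625 (= 0.5 / 0.8), though the coin is fair
```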

Applying the anthropic principle here, however, requires that a failure of MAD really would have killed everyone. While it would have killed billions, and made major parts of the world uninhabitable, many people would still have survived. [1] How much would we have rebuilt? What would the population be now? If the cold war had gone hot and the US and USSR had wiped each other out, what would 2013 be like? Roughly, we're oversampling the no-nukes outcome by the ratio of our current population to the population there would have been in a yes-nukes outcome, and the less lopsided that ratio is, the more evidence that MAD did work after all.


[1] For this wikipedia cites: The global health effects of nuclear war (1982), Long-term worldwide effects of multiple nuclear-weapons detonations (1975). Some looking online also turns up an Accelerating Future blog post. I haven't read them thoroughly, and I don't know much about the research here.

I also posted this on my blog.

All-pay auction for charity?

5 jkaufman 12 June 2013 12:46PM

While in a standard auction you have to pay your bid only if you win, in an all-pay auction you pay whether or not you win. The standard example is a dollar auction where you're selling a dollar. Bidding a penny to get a dollar seems reasonable, but someone else then might bid two cents. The bidding can keep going even past a dollar, and the more people fighting for the dollar the more the person selling it makes. Bidding-fee auctions are similar, where each bid you make costs money. You might remember Swoopo? They used to put up ads like "An iPad just sold for $21.32!" not mentioning that the participants overall had spent more than the retail cost of the iPad on bidding fees. Eventually people caught on and they went bankrupt.

In a less scammy vein, however, this is also how competitive prizes work. In competing for the $10M X-Prize, teams spent over $100M. I can't find an estimate of how much people spent trying to win the $1M Netflix Prize, but given the number of people and teams involved it was probably well above $1M.

Could we use this for charity? Imagine a donor thought two charities were both excellent and had very similar returns, but they knew lots of other people strongly disagreed and preferred one or the other. By offering to donate $X to the charity that received the most in donations, could they move more than $X to the charity of their choice? It might be even better to make the criterion be the most independent donations of at least $Y, because getting more people to donate has value in terms of expected future donations.

(I suggested something similar a few months ago in a comment on my post on donation matching, but hadn't thought about prizes at the time.)

Weak evidence that eating vegetables makes you live longer

8 jkaufman 10 June 2013 01:09PM

People vary in how much they can taste bitter things. If you go around giving people the chemical phenylthiocarbamide (PTC), which you shouldn't do because it's toxic, you'll find that some people taste it as strongly bitter while others can't taste it at all. The same goes for 6-n-propylthiouracil (PROP). While these particular chemicals aren't common in food, they're very similar to chemicals that are in a lot of foods, so you might expect that how much you can taste PTC or PROP influences which foods you like.

We can test this: give people various foods whose bitterness is disputed, then compare their preferences to their sensitivity to PTC or PROP. Several studies have done this.

Vegetables like broccoli are often thought to be good for you, so shouldn't we expect people who can taste PROP and PTC not to live as long, because they're eating fewer of those vegetables? Ideally we would test how sensitive people are to bitterness at some youngish age, and then watch them for the next 80 years to see how long they live. But 80 years is a long time to wait, and you'd need a large sample because we don't expect the effect to be that big. Another option would be to measure sensitivity to bitterness across ages, and see whether it decreases with age in the way we would expect if people with higher bitterness sensitivity were dying earlier. But that would be impractical to separate from the hypothesis that individual people simply lose some of their sense of taste as they age.

Luckily, it turns out that this tasting ability is very strongly genetic. People with one variant of the gene TAS2R38 can nearly always taste PTC while people with another variant almost never can. So we can sample people at any age and get an estimate of how likely they were to have avoided vegetables for taste reasons. Are older people less likely to have the gene variant for tasting bitterness? It turns out they are. In Bitter Taste Receptor Polymorphisms and Human Aging (2012, n=941) they tested Calabrians for their bitterness gene variant, and did find that older people were less likely to have the variant for detecting bitterness.

So can we say that (a) eating vegetables will help you live longer and (b) if vegetables taste bitter to you, you should eat them anyway to get benefit (a)? Unfortunately it's not that clear. Vegetables aren't the only common food with these bitter compounds, so it might be something else. Other bitter-to-some foods that these non-bitter-tasters might have been eating more of include coffee, tea, grapefruit juice, soy, cigarettes (maybe), and probably other things we haven't tested. There's also the possibility that the older and younger participants in the longevity study aren't the same group of people genetically, and what they're actually capturing is population change in Calabria. One way to test that would be to repeat the study in several different places, as we would expect population drift to be independent of sensitivity to bitterness.

The impact of whole brain emulation

3 jkaufman 14 May 2013 07:59PM

At some point in the future we may be able to scan someone's brain at very high resolution and "run" them on a computer. [1] When I first heard this as a teenager I thought it was interesting but not hugely important. Running people faster or slower and keeping backups came immediately to mind, and Wikipedia adds space travel, but those three by themselves don't seem like they change that much. Thinking speed doesn't seem to be a major limiting factor in coming up with good ideas, we generally only restore from backups in cases of rare failure, and while space travel would dramatically affect the ability of humans to spread [2] it doesn't sound like it changes the conditions of life.

This actually undersells emulation by quite a lot. For example, "backups" let you repeatedly run the same copy of a person on different information. You can identify a saved state where a person is at their intellectual or creative best, and give that copy an hour to think about each new situation. Add in potentially increased simulation speed and parallelism, and you could run lots of these copies, each looking into a different candidate approach to a problem.

With emulations you can get around the mental overhead of keeping all your assumptions about a direction of thought in your mind at once. I might not know whether X is true, and spend a while thinking about what should happen if it's true and another while about what if it's not, but it's hard for me to get past the problem that I'm still uncertain about X. With an emulation that you can reset to a saved state, however, you could do multiple runs, giving some emulations a strong assurance that X is true and others a strong assurance that X is false.

You can also run randomized controlled trials where the experimental group and the control group are the same person. This should hugely bring down experimental cost and noise, allowing us to make major and rapid progress in discovering what works in education, motivation, and productivity.

(Backups stop being about error recovery and fundamentally change the way an emulation is useful.)

These ideas aren't new here [3] but I don't see them often in discussions of the impact of emulating people. I also suspect there are many more creative ways of using emulation; what else could you do with it?


[1] I think this is a long way off but don't see any reasons why it wouldn't be possible.

[2] Which has a big effect on estimates of the number of future people.

[3] I think most of these ideas go back to Carl Shulman's 2010 Whole Brain Emulation and the Evolution of Superorganisms.

I also posted this on my blog.
