
Comments

TobyC · 1y

That's a nice way of looking at it. It's still not very clear to me why the SIA approach of apportioning among possible observers is something you should want to do. But it definitely feels useful to know that that's one way of interpreting what SIA is saying.

TobyC · 1y

That's a fair point! I have probably undersold the idea here. I've edited the post to add a comment about this.

TobyC · 1y

You raise lots of good objections there, but I think most of them are addressed quite well in the book. You don't need any money, because it seems to be online for free (https://www.stafforini.com/docs/Parfit%20-%20Reasons%20and%20persons.pdf), and if you're short of time it's probably only the last chapter you need to read. I really disagree with the suggestion that there's nothing to learn from ethical philosophy books.

For point 1: Yes, you can value other things, but even if people's quality of life is only a part of what you value, the mere-addition paradox still raises problems for that part.

For point 2: That's not really an objection to the argument.

For point 3: I don't think the argument depends on the ability to precisely aggregate happiness. The graphs are a helpful way of conveying the idea with pictures, but the ability to quantify a population's happiness and plot it on a graph is not essential (and is obviously impossible in practice, whatever your stance on ethics). For the thought experiment, it's enough to imagine a large population at roughly the same quality of life, then adding new people at a lower quality of life, then increasing the new people's quality of life by a lot while lowering the quality of life of the original people by less, then repeating, and so on (a rough numerical sketch is below). The references to what happens to the 'total' and the 'average' along the way are, I think, aimed particularly at people who claim to value total or average happiness. For the key idea you can keep things vaguer, and the argument still carries force.
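
To spell that out with concrete numbers (these are purely illustrative, mine rather than Parfit's), here is a minimal sketch of the repeated step: each round of "mere addition, then levelling at just above the new average" raises both the total and the average, yet individual welfare keeps falling.

```python
# Purely illustrative numbers, not Parfit's own.
# Each round: (A+) add as many people again at half the current welfare,
# harming nobody who already exists; then (B) level everyone at just above
# the A+ average, which raises both total and average welfare relative to A+,
# and lowers the original people's welfare by less than it raises the newcomers'.
size, welfare = 100, 10.0

for step in range(6):
    print(f"step {step}: {size} people at welfare {welfare:.2f}, "
          f"total {size * welfare:.0f}")
    plus_average = (size * welfare + size * (welfare / 2)) / (2 * size)
    size, welfare = 2 * size, plus_average * 1.01
```

After a few rounds you have a much larger population at a much lower (though still positive) welfare level, even though every individual step looked like an improvement on total, average, and equality grounds.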

For point 4: You can try to value things about the distribution of happiness as a way out. I remember that's discussed in the book as well, along with a number of other approaches you could try to take to population ethics, though I don't remember the details. Ultimately, I'm not sure which step in the chain of argument that would help you to reject.

On non-transitive preferences being OK: that's a fair take, and something like this is ultimately what Parfit himself tried to do, I think. He didn't like the repugnant conclusion, which is why he gave it that name. He didn't want to just say non-transitive preferences were fine, but he did try to argue that certain populations are incomparable, so as to break the chain of the argument. There's a paper about it here, which I haven't looked at too closely but which you might agree with: https://www.stafforini.com/docs/Parfit%20-%20Can%20we%20avoid%20the%20repugnant%20conclusion.pdf

TobyC · 1y

Is that definitely right? I need to give it an in-depth read, which I won't have time to do for a few days, but from a skim it sounds like they admit that FNC also leads to the same conclusions as SIA for the presumptuous philosopher, and then argue that this isn't as problematic as it seems?

TobyC · 1y

Thanks for the comment! That's definitely an important philosophical problem that I very much glossed over in the concluding section.

It's sort of orthogonal to the main point of the post, but I will briefly say this: 10 years ago I would have agreed with your point of view completely. I believed in the slogan you sometimes hear people say: "we're in favour of making people happy, and neutral about making happy people." But now I don't agree with this. The main thing that changed my mind was reading Reasons and Persons, and in particular the "mere-addition paradox". That convinced me that if you try to be neutral about making new happy people, you end up with non-transitive preferences, and that seems worse to me than just accepting that maybe I do care about making happy people after all.

Maybe you're already well aware of these arguments and haven't been convinced, which is fair enough (I'd be interested to hear more about why), but I thought I'd share them in case you're not.

TobyC · 1y

I agree that skepticism is appropriate, but I don't think ignoring anthropic reasoning completely is an answer. If we want to make decisions on an issue where anthropics is relevant, then we need some way of coming up with probabilistic estimates about these questions. Whatever framework you use to do that, you will be taking some stance on anthropic reasoning. Once you're dealing with an anthropic question, there is no such thing as a non-anthropic framework you can fall back on instead (I tried to make that clear in the boy-girl example discussed in the post).

The answer could just be extreme pessimism: maybe there is no good way of making decisions about these questions. But that seems to go too far. If you needed to estimate the probability that your DNA contained a certain genetic mutation, one that affects about 30% of the population, then 30% really would be a good estimate to go for (absent any other information), and I think all of us would be perfectly happy doing so. But you're technically invoking the self-sampling assumption there. Strictly speaking, that's an anthropic question, because it concerns indexical information ("*I* have this mutation"). If you like, you're assuming that someone without the mutation would still be in your observer reference class.

Once you've allowed a conclusion like that, you have to let someone use Bayes' rule on it: if they learn that they do have a particular mutation, then hypotheses under which that mutation is more prevalent should be considered more likely. Now you're doing anthropics proper, and there is nothing that conceptually distinguishes this from the chain of reasoning used in the Doomsday Argument.
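
To make that Bayes step concrete, here is a minimal sketch with made-up numbers (the two hypotheses and their prevalences are purely illustrative): learning "*I* have the mutation" shifts credence towards whichever hypothesis makes the mutation more common.

```python
# Made-up numbers, purely for illustration.
# Two hypotheses about the world, equally likely a priori, that imply
# different prevalences of the mutation.
prior = {"H_low": 0.5, "H_high": 0.5}
prevalence = {"H_low": 0.10, "H_high": 0.30}

# Evidence: "I have the mutation." Treating myself as a random sample from
# the population (the self-sampling step), the likelihood of that evidence
# under each hypothesis is just the prevalence.
unnormalised = {h: prior[h] * prevalence[h] for h in prior}
total = sum(unnormalised.values())
posterior = {h: p / total for h, p in unnormalised.items()}

print(posterior)  # {'H_low': 0.25, 'H_high': 0.75}
```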

TobyC · 5y

I am struggling to follow this anthropic shadow argument. Perhaps someone can help me see what I am getting wrong.

Suppose that every million years on the dot, some catastrophic event happens with probability P (or fails to happen with probability 1-P). Suppose that if the event happens at one of these times, it destroys all life, permanently, with probability 0.1. Suppose that P is unknown, and we initially adopt a prior for it which is uniform between 0 and 1.

Now suppose that by examining the historical record we can discover exactly how many times the event has occurred in Earth's history. Naively, we can then update our prior based on this evidence, and we get a posterior distribution sharply peaked at (# of times event has occurred) / (# of times event could have occurred). I will call this the 'naive' approach.
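
For concreteness, a quick sketch of that naive update with made-up counts (with a uniform prior, the posterior after k occurrences in n opportunities is Beta(k + 1, n − k + 1), whose mode is k / n):

```python
from scipy.stats import beta

# Made-up counts for illustration: the event could have occurred at
# n = 40 past opportunities, and the record shows it occurred k = 8 times.
n, k = 40, 8

# With a uniform prior on P, the naive posterior is Beta(k + 1, n - k + 1).
posterior = beta(k + 1, n - k + 1)
print(posterior.mean())   # ~0.214, the posterior mean
print(k / n)              # 0.2, the posterior mode
```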

My understanding of the paper is that they are claiming this 'naive' approach is wrong, and it is wrong because of observer selection effects. In particular they claim it gives an underestimate of P. Their argument for this appears to be the following: if you pick a fixed value of P, and simulate history a large number of times, then in the cases where an observer like us evolves, the observer's calculation of (# of times event has occurred) / (# of times event could have occurred) will on average be significantly below the true value of P. This is because observers are more likely to evolve after periods of unusually low catastrophic activity.

What I am currently not happy with is the following: shouldn't you run the simulation a large number of times, not with a fixed value of P, but with P drawn from the prior? If you do that, I find their claim less obvious. Suppose for simplicity that, instead of having a uniform prior, P is equally likely to take the value 0.1 or 0.9. Simulate history some large number of times: half the worlds will be 0.1-worlds and half will be 0.9-worlds. Under the naive approach, more 0.9-world observers will think they are in a 0.1-world than under the paper's approach, so they are more wrong, but there are also very few 0.9-world observers anyway (there is roughly a 10% chance of extinction per million years in that world). The vast majority of observers are 0.1-world observers, confident that they are 0.1-world observers (overconfident, according to the paper), and they are right. If you just look at fixed values of P, you seem to be ignoring the fact that observers are more likely to arise in worlds where P is smaller. When you take this fact into account, maybe it can justify the 'naive' underestimate?

This is a bit vague, but I'm just trying to explain my feeling that simulating the world many times at fixed P is not obviously the right thing to do (I may also be misunderstanding the argument of the paper, and this isn't really what they are doing). A rough sketch of the kind of simulation I have in mind is below.
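
Here is that sketch, with P drawn from the two-point prior rather than held fixed (the number of epochs and the trial count are made up):

```python
import random

# Toy version of the comparison above, with P drawn from the prior rather
# than held fixed. The number of epochs and the trial count are made up.

EPOCHS = 10          # million-year intervals in the historical record
EXTINCTION = 0.1     # chance an occurrence permanently destroys all life
TRIALS = 200_000

naive_estimates = {0.1: [], 0.9: []}   # estimates made by surviving observers

for _ in range(TRIALS):
    p = random.choice([0.1, 0.9])      # prior: P is 0.1 or 0.9, equally likely
    occurrences, alive = 0, True
    for _ in range(EPOCHS):
        if random.random() < p:        # the event happens this epoch
            occurrences += 1
            if random.random() < EXTINCTION:
                alive = False          # no observer ever evolves in this world
                break
    if alive:
        naive_estimates[p].append(occurrences / EPOCHS)

for p, estimates in naive_estimates.items():
    if estimates:
        share = len(estimates) / TRIALS
        mean_estimate = sum(estimates) / len(estimates)
        print(f"true P = {p}: observers in {share:.1%} of all trials, "
              f"mean naive estimate {mean_estimate:.3f}")
```

In this toy version the surviving observers' naive estimates do sit slightly below the true P (the shadow effect the paper describes), but observers are also considerably more likely to find themselves in a 0.1-world than in a 0.9-world, which is the effect I think a fixed-P simulation leaves out.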

To state my issue another way: although their argument seems plausible from one point of view, I am struggling to understand *why* the 'naive' argument is wrong. All you are doing is applying Bayes' theorem and conditioning on the evidence, which is the historical record of when the event did or did not occur. What could be wrong with that? I can only see it being wrong if there is some additional evidence you should be conditioning on as well but are leaving out, and I can't see what that additional evidence could be in this context. It cannot be your own existence, because once the number of past occurrences of the event is given, the probability of your existence no longer depends on P.