All of Jonah Wilberg's Comments + Replies

Thanks for the comment - the reason I focus on cosmic unfairness here is that I addressed local unfairness in a previous post in the sequence - apologies this wasn't clear; I've now added a hyperlink to clarify.

I don't agree that the challenges of Dostoevsky etc. are only about local unfairness though: as I say, I think it's typically a mixture of local and cosmic unfairness, with the two not clearly distinguished.

The 'problem from the lack of intervention' that you mention is much discussed by people in this context, so presumably they think it is...

Yes, very much agree with those points. Virtue ethics is another angle from which to approach the same point: that there's a process whereby you internalise system 2 beliefs into system 1. Virtues need to be practised and learned, not just appreciated theoretically. That's why Stoicism has been thought of (e.g. by Pierre Hadot) as promoting 'spiritual exercises' rather than systematic philosophy - I draw some further connections to Stoicism in the next post in the sequence.

Thanks, yes, good to see people independently arriving at a similar conclusion.

OK, 'impossible' is too strong; I should have said 'extremely difficult'. That was my point in footnote 3 of the post. Most people would take the fact that it has implications like needing to "maximize splits of good experiences" (I assume you mean maximise the number of splits) as a reductio ad absurdum, because this is massively different from our normal intuitions about what we should do. But some people have tried to take that approach, like in the article I mentioned in the footnote. If you or someone else can come up with a consistent and convincing decision approach that involves branch counting I would genuinely love to see it!
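
To illustrate the difficulty with a toy sketch (the numbers and branch structure here are made up for illustration, not taken from the post or from Wallace): under naive branch counting, the "number of good branches" changes whenever a branch is redescribed as a set of finer-grained sub-branches, while the total weight stays fixed.

```python
# Toy illustration (all numbers made up): naive branch counting is sensitive
# to how finely branches are individuated, while total weight is not.
from fractions import Fraction

# Represent a branch as (weight, is_good); weights are the Born measures.
coarse = [(Fraction(9, 10), True), (Fraction(1, 10), False)]

# Redescribe the single good branch as three microscopically distinct
# sub-branches. Physically nothing has changed, only the description.
fine = [(Fraction(3, 10), True)] * 3 + [(Fraction(1, 10), False)]

def count_good(branches):
    """Branch-counting metric: how many branches are good."""
    return sum(1 for _, good in branches if good)

def weight_good(branches):
    """Measure-based metric: total weight of the good branches."""
    return sum(w for w, good in branches if good)

print(count_good(coarse), count_good(fine))    # 1 vs 3: the count is unstable
print(weight_good(coarse), weight_good(fine))  # 9/10 vs 9/10: weight is stable
```

Any branch-counting decision rule therefore has to privilege one level of description, which is exactly the arbitrariness the footnote points to.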

I'm not at all saying the experiences of a person in a low-weight world are less valuable than those of a person in a high-weight world. Just that when you are considering possible futures in a decision-theoretic framework you need to apply the weights (because weight is equivalent to probability).
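
To put the same point in miniature (a sketch with illustrative numbers of my own, not anything from Wallace's formalism): the weight slots into the expected-utility calculation exactly where a probability would in a classical decision problem.

```python
# Minimal sketch: branch weight plays the role that probability plays in
# classical expected-utility reasoning. All numbers are illustrative only.

def expected_utility(outcomes):
    """outcomes: list of (weight, utility) pairs, with weights summing to 1."""
    return sum(w * u for w, u in outcomes)

# Two candidate actions, each leading to a set of weighted future branches.
act_a = [(0.99, 10.0), (0.01, -100.0)]  # high-weight good branch, rare disaster
act_b = [(0.50, 5.0), (0.50, 4.0)]      # middling either way

print(expected_utility(act_a))  # 8.9
print(expected_utility(act_b))  # 4.5
```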

Wallace's useful achievement in this context is to show that there exists a set of axioms that makes this work, one of which is branch indifference.

This is useful because it makes clear the way in which the branch-counting approach you're suggesting is in conflict wi...

Signer
It doesn't matter whether you call your multiplier "probability" or "value" if it results in your decision not to care about low-measure branches. The only difference is that probability is supposed to be about knowledge, and the fact that Wallace's argument involves an arbitrary assumption, not only physics, means it's not probability but value - there is no reason to value knowledge of your low-measure instances less.

It doesn't? Nothing stops you from making decisions in a world where you are constantly splitting. You can try to maximize splits of good experiences or something. It just wouldn't be the same decisions you would make without knowledge of splits - but why shouldn't new physical knowledge change your decisions?

First of all, macroscopic indistinguishability is not a fundamental physical property - branching indifference is an additional assumption, so I don't see how it's not as arbitrary as branch counting.


You're right that it's not a fundamental physical property - the overall philosophical framework here is that things can be real - as emergent entities - without being fundamental physical properties. Lions and chairs are other examples.

But more importantly, the branching indifference assumption is not the same as the informal "not caring about macroscopica...
Signer
And counted branches. His definition leads to a contradiction with the informal intuition that motivates considering macroscopic indistinguishability in the first place.

Why? Wallace's argument is just "you don't care about some irrelevant microscopic differences, so let me write this assumption that is superficially related to that preference, and here - it implies the Born rule". Given MWI, there is nothing wrong physically or rationally in valuing your instances equally whatever their measure is. Their thoughts and experiences don't depend on measure, the same way they don't depend on the thickness or mass of a computer implementing them. You can rationally not care about irrelevant microscopic differences and still care about the number of your thin instances.

OK, but your original comment reads like you're offering 'things not mattering cosmically' as a reason for thinking MWI doesn't change anything (if that's not a reason, then you haven't given any reason; you've just stated your view). And I think that's a good argument - if you have general reasons, independent of specific physics, to think nothing matters (cosmically), then it will follow that nothing matters in MWI as well. I was responding to that argument.

I don't get why you would say that the preferences are fine-grained; it kinda seems obvious to me that they are not fine-grained. You don't care about whether worlds that are macroscopically indistinguishable are distinguishable at the quantum level, because you are yourself macroscopic. That's why branching indifference is not arbitrary. Quantum immortality is a whole other controversial story.

Signer
Because scale doesn't matter - it doesn't matter if you are implemented on a thick or a narrow computer.

First of all, macroscopic indistinguishability is not a fundamental physical property - branching indifference is an additional assumption, so I don't see how it's not as arbitrary as branch counting.

But more importantly, the branching indifference assumption is not the same as the informal "not caring about macroscopically indistinguishable differences"! As Wallace showed, branching indifference implies the Born rule, which implies you almost shouldn't care about the you in a branch with a measure of 0.000001, even though it may involve a drastic macroscopic difference for you in that branch. You being macroscopic doesn't imply you shouldn't care about your low-measure instances.

You're right that you can just take whatever approximation you make at the macroscopic level ('sunny') and convert that into a metric for counting worlds. But the point is that everyone will acknowledge the counting part is arbitrary from the perspective of fundamental physics, whereas you can remove the arbitrariness that derives from fine-graining by focusing on the weight. (That is kind of the whole point of a mathematical measure.)
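
To make the measure-theoretic point explicit (a standard fact about measures, stated in notation of my own choosing rather than quoted from anywhere): a measure is additive under refinement, so redescribing a branch as finer sub-branches changes the count but not the weight.

```latex
% Refining a branch A into disjoint sub-branches A_1, ..., A_n:
\mu(A) = \sum_{i=1}^{n} \mu(A_i)
% The total weight is invariant, whereas the branch count goes from 1 to n,
% so any counting rule must privilege one particular level of description.
```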

TAG
The macroscopically different branches and their weights? Focussing on the weight isn't obviously correct, ethically. You can't assume that the answer to "what do I expect to see?" will work the same as the answer to "what should I do?". Is-ought gap and all that.

It's tempting to think that you can apply a standard decision theory in terms of expected value to Many Worlds, since it is a matter of multiplying subjective value by probability. It seems reasonable to assess the moral weight of someone else's experiences and existence from their point of view. (Edit: also, our experiences seem fully real to us, although we are unlikely to be in a high-measure world.) That is the intuition behind the common rationalist/utilitarian/EA view that human lives don't decline in moral worth with distance. So why should they decline with lower quantum mechanical measure?

There is a quandary here: sticking to the usual "adds up to normality" principle as an a priori axiom means discounting the ethical importance of low-measure worlds in order to keep your favourite decision theory operating in the usual single-universe way... even if you are in a multiverse. But sticking to the equally usual universalist axiom, that you don't get to discount someone's moral worth on the basis of factors that aren't intrinsic to them, means you should not discount... and that the usual decision theory does not apply.

Basically, there is a tension between four things Rationalists are inclined to believe in:

* Some kind of MWI is true.
* Some kind of utilitarian and universalist ethics is true.
* Subjective things like suffering are ethically relevant. It's not all about the number of kittens.
* It's all business as normal... it all adds up to normality... fundamental ontological differences should not affect your decision theory.
Signer
But why would you want to remove this arbitrariness? Your preferences are fine-grained anyway, so why retain classical counting but deny counting in the space of the wavefunction? It's like saying "dividing the world into people and their welfare is arbitrary - let's focus on measuring the mass of a region of space". The point is you can't remove all decision-theoretic arbitrariness from MWI - "branching indifference" is just an arbitrary ethical constraint that is equivalent to valuing measure for no reason, and without it, fundamental physics that works like MWI does not prevent you from making decisions as if quantum immortality works.

OK, I think I see where you're coming from - but I do think the unimaginable bigness of the universe has more 'irrelevance' implications for a consequentialist view, which tries to consider valuable states of the universe, than for a virtue approach, which considers valuable states of yourself. Also, if you think the implication of physics is that everything is irrelevant, that seems like an important implication in its own right, and different from 'normality' (the normal way most people think about ethics, which assumes that some things actually are relevant).

Dagon
Note that the argument over whether MWI changes anything is very different from the argument about what matters and why. I think it doesn't change anything, independently of which in-universe things matter and how much.

Separately, I tend to think "mattering is local". I don't argue as strongly for this, because it's (recursively) a more personal intuition, less supported by type-2 thinking.

Thanks for the interesting comments.

You're right, I didn't discuss the possibility of infinite numbers of branches, though as you suggest this leads to essentially the same conclusion as I reach in the case of finite branches, which is that it causes problems for consequentialist ethics (Joe Carlsmith's Infinite Ethics is good on this). If what you mean by 'normalize everything' is to only consider the quantum weights (which are finite as mathematical measures) and not the number of worlds, then that seems more a case of ignoring those problems rather than...
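
For what it's worth (this is standard quantum mechanics rather than anything specific to the post): even if the branches form a continuum, the Born weights still constitute a normalized measure, which is what makes 'only consider the quantum weights' mathematically well defined.

```latex
% Born weights form a probability measure even over a continuum of outcomes:
\int \lvert \psi(x) \rvert^{2} \, dx = 1,
\qquad
W(S) = \int_{S} \lvert \psi(x) \rvert^{2} \, dx \in [0, 1]
\quad \text{for any measurable set of outcomes } S.
```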

dr_s
I mean that however many universes get created will be created anyway, just as a consequence of time passing. So it doesn't matter anyway. If your actions e.g. cause misery in 20% of those worlds, then the fraction is all that matters; the worlds will exist anyway, and the total amount is not something you're affecting or controlling.

I honestly don't think decoherence means the worlds are indefinite. I think it means they are an infinite continuum with the cardinality of the reals. Decoherence is just something you observe when you divide system from environment; in reality the Universe should have only a single, always coherent, giant wavefunction.

Very useful post, thanks. While the 'talking past each other' is frustrating, the 'not necessarily disagreeing' suggests the possibility of establishing surprising areas of consensus. And it might be interesting to explore further what exactly that consensus is. For example:

Yann suggested that there was no existential risk because we will solve it

I'm sure the air of paradox here (because you can't solve a problem that doesn't exist) is intentional, but if we drill down, should we conclude that Yann actually agrees that there is an existential risk (ju...

Steven Byrnes
Yeah, like I said, I don’t think that one in particular was a major dynamic, just one thing I thought worth mentioning. I think one could rephrase what they said slightly to get a very similar disagreement minus the talking-past-each-other.

Like, for example, if every human jumped off a cliff simultaneously, that would cause extinction. Is that an “x-risk”? No, because it’s never going to happen. We don’t need any “let’s not all simultaneously jump off a cliff” activist movements, or any “let’s not all simultaneously jump off a cliff” laws, or any “let’s not all simultaneously jump off a cliff” fields of technical research, or anything like that. That’s obviously a parody, but Yann is kinda in that direction regarding AI. I think his perspective is: we don’t need activists, we don’t need laws, we don’t need research. Without any of those things, AI extinction is still not going to happen, just because that’s the natural consequence of normal human behavior and institutions doing normal stuff that they’ve always done.

I think “Yann thinks he knows in outline how to solve the problem” is maybe giving the wrong impression here. I think he thinks the alignment problem is just a really easy problem with a really obvious solution. I don’t think he’s giving himself any credit. I think he thinks anyone looking at the source code for a future human-level AI would be equally capable of making it subservient with just a moment’s thought. His paper didn’t really say “here’s my brilliant plan for how to solve the alignment problem”; the vibe was more like “oh and by the way you should obviously choose a cost function that makes your AI kind and subservient” as a side-comment sentence or two. (Details here)

I'd be interested to know more about the make-up of the audience, e.g. whether they were AI researchers or interested general public. Having followed recent mainstream coverage of the existential risk from AI, my sense is that the pro-X-risk arguments have been spelled out more clearly and in more detail (within the constraints of mainstream media) than the anti-X-risk ones (which makes sense for an audience who may not have previously been aware of detailed pro- arguments, and also makes sense as doomscroll clickbait). I've seen a lot of mainstream articles...

It may be better first to require the person to undergo a course with a psychologist and a psychiatrist, and only after that allow suicide...

Something along these lines seems essential. It may be better to talk of the right to informed suicide. Arguably being informed is what makes it truly voluntary.

Forcing X to live will be morally better than forcing Y to die in circumstances where X's desire for suicide is ill-informed (let's assume Y's desire to live is not). X's life could in fact be worth living - perhaps because of a potential for future happiness, rathe...