The article said the leverage penalty "[penalizes] hypotheses that let you affect a large number of people, in proportion to the number of people affected." If that is all the leverage penalty does, then a hypothesis involving 3^^^3 atoms or units of computation escapes the penalty entirely, because atoms and computations aren't people.
That said, the article never precisely defines the leverage penalty, so I could be missing something. What exactly is it? Does it penalize the number of units of computation, rather than people, you can affect? That version seems far less arbitrary than relying on a vague notion of "person", and far easier to formalize: divide each hypothesis's prior by the number of bits your actions flip under that hypothesis, then normalize.
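For concreteness, here's a minimal sketch of that rule; the function name and all numbers are my own, purely illustrative:

```python
def leverage_penalized_priors(priors, bits_flipped):
    """Divide each hypothesis's raw prior by its leverage (here, the
    number of bits your actions flip under that hypothesis), then
    renormalize so the penalized priors sum to 1."""
    penalized = [p / n for p, n in zip(priors, bits_flipped)]
    total = sum(penalized)
    return [p / total for p in penalized]

# Example: a mundane hypothesis vs. a mugging-style hypothesis whose
# payoff requires flipping astronomically many bits (stand-in value;
# 3^^^3 itself won't fit in a float).
print(leverage_penalized_priors([0.9, 0.1], [1e3, 1e30]))
# -> [~1.0, ~1.1e-28]: the high-leverage hypothesis is crushed.
```

On this formulation, the penalty scales with leverage over computation rather than over "people", so it applies even to muggers who only promise to rearrange atoms.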
Doesn't this mean you should deliberately avoid finding out whether cryonics can actually preserve your information in a retrievable way? If it can't, learning that would eliminate the vast majority of the worlds that would have brought you back, whereas if you don't know, it remains undetermined. Am I getting this right?
You're confusing subjective probability with objective quantum measure. If you flip a quantum coin, half your measure goes to worlds where it comes up heads and half to worlds where it comes up tails. This is an objective fact, and we know it solidly. If you don't know whether cryonics works, you're almost certainly already localized by your memories and sensory information to worlds where it works or to worlds where it doesn't; it's all or nothing, even if you're ignorant of which.
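To put the same point in arithmetic (numbers mine, purely illustrative): because the fact is already settled, your expected measure is identical whether or not you look, so refusing to find out preserves nothing.

```python
# Subjective credence that cryonics preserves information (illustrative).
p_works = 0.5
# Measure of worlds that revive you, conditional on each case (illustrative).
measure_if_works = 1.0
measure_if_not = 0.0

# Expected measure before looking equals the measure-weighted average;
# learning the answer only moves your credence to 0 or 1, it doesn't
# split or destroy any measure.
expected_measure = p_works * measure_if_works + (1 - p_works) * measure_if_not
print(expected_measure)  # 0.5 either way
```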