I would be very surprised to find that a universe whose particles are arranged to maximize objective good would also contain unpaired sadists and masochists.
The problem is that neither you nor BrianPansky has proposed a viable objective standard for goodness. BrianPansky said that good is that which satisfies desires, but proposed no objective method for mediating conflicting desires. And here you said “Do remember that your thoughts and preference on ethics are themselves an arrangement of particles to be solved” but proposed no way to resolve conflicts between different people’s ethical preferences. Even if satisfying desires were an otherwise reasonable standard for goodness, it is not an objective standard, since different people may have different desires. Similarly, different people may have different ethical preferences, so an individual’s ethical preference would not be an objective standard either, even if it were otherwise a reasonable standard.
You seem to be asking a question of the form, "But if we take all the evil out of the universe, what about evil?"
No, I am not asking that. I am pointing out that neither your standard nor BrianPansky’s standard is objective. Therefore neither can be used to determine what would constitute an objectively maximally good universe, to take all the evil out of the universe, or even to objectively identify evil.
On the other hand, maybe you should force them to endure the guilt, because maybe then they will be motivated to research why the agent who made the decision chose TORTURE, and so the end result will be some people learning some decision theory / critical thinking...
The argument that 50 years of torture of one person is preferable to 3^^^3 people suffering dust specks presumes utilitarianism. A non-utilitarian will not necessarily prefer torture to dust specks even if his/her critical thinking skills are up to par.
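To make the dependence on the aggregation rule explicit, here is a minimal sketch. It is my illustration, not anything from the original discussion: the disutility numbers are arbitrary assumptions, and `10**100` stands in for 3^^^3, which does not fit in any machine number.

```python
# Minimal sketch contrasting additive utilitarian aggregation with a
# lexical-threshold (non-utilitarian) view of Torture vs. Dust Specks.
# All numbers are illustrative assumptions; 10**100 stands in for 3^^^3.

SPECK_DISUTILITY = 1e-9      # assumed harm of one dust speck
TORTURE_DISUTILITY = 1e7     # assumed harm of 50 years of torture
N_PEOPLE = 10**100           # stand-in for 3^^^3

def additive_total(torture: bool) -> float:
    """Utilitarian total: harms simply sum across persons."""
    return TORTURE_DISUTILITY if torture else N_PEOPLE * SPECK_DISUTILITY

def lexical_total(torture: bool) -> tuple:
    """Lexical view: severe harms outrank any number of trivial harms.
    Tuples compare severe harm first, trivial harm second."""
    return (TORTURE_DISUTILITY, 0.0) if torture else (0.0, N_PEOPLE * SPECK_DISUTILITY)

# The additive utilitarian ranks torture as the lesser total harm...
assert additive_total(True) < additive_total(False)
# ...while the lexical non-utilitarian ranks the specks as lesser.
assert lexical_total(False) < lexical_total(True)
```

Both agents reason correctly from their premises; they disagree only about the aggregation rule, which is exactly the premise that the TORTURE answer presumes.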
There is no democracy in the US
No democracy, really? Or would it be more accurate to say that US democracy falls short of some sort of theoretical ideal?
Yep, I agree. The second sentence of this comment's grandparent was intended to support that conclusion, but my wording was sloppily ambiguous. I made a minor edit to it to (hopefully) remove the ambiguity.
Yep. This could be because Nick Bostrom's original simulation argument focuses on ancestor simulations, which pretty much implies that the simulating and simulated worlds are similar. However, here, in question 11, Bostrom explains why he focused on ancestor simulations and states that the argument could be generalized to include simulations of worlds that are very different from the simulating world.
Interesting paper. But, contrary to the popular summary in the first link, it really only shows that simulations of certain quantum phenomena are impossible using classical computers (specifically, using the Quantum Monte Carlo method). But this is not really surprising - one area where quantum computers show much promise is in simulating quantum systems that are too difficult to simulate classically.
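For intuition about why the Quantum Monte Carlo method in particular breaks down, here is a minimal sketch of the underlying "sign problem". It is my illustration, not the paper's construction, and `p_negative` is an assumed knob: QMC estimators divide by the average sign of the sampling weights, and when that average approaches zero the noise swamps the signal.

```python
import random

# Minimal sketch of the QMC "sign problem": estimators take the form
#   <O> = <O * sign> / <sign>,
# and when the weights' average sign is nearly zero, the division
# amplifies sampling noise until the estimate is useless. In real
# systems the average sign can shrink exponentially with system size.

random.seed(0)

def average_sign(n_samples: int, p_negative: float) -> float:
    """Average of +/-1 signs; p_negative near 0.5 mimics a severe sign problem."""
    signs = [-1 if random.random() < p_negative else 1 for _ in range(n_samples)]
    return sum(signs) / n_samples

print(average_sign(10_000, p_negative=0.10))   # mild: average sign ~ 0.8
print(average_sign(10_000, p_negative=0.499))  # severe: average sign ~ 0
```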
So, if the authors are right, we might still be living in a computer simulation, but it would have to be one running on a quantum computer.
Thanks - I enjoyed the story. It was short but prescient. The article that inspired it was interesting as well.
I'm a two-boxer. My rationale is:
As originally formulated by Nozick, Omega is not necessarily omniscient and does not necessarily have anything like divine foreknowledge. All that is said about this is that you have "enormous confidence" in Omega's power to predict your choices, that this being has "often correctly predicted your choices in the past (and has never, so far as you know, made an incorrect prediction about your choices)", and that the being has "often correctly predicted the choices of other people, many of whom are similar to you". So, all I really know about Omega is that it has a really good track record.
So, nothing in Nozick rules out the possibility of the outcome "b" or "c" listed above.
- At the time that you make your choice, Omega has already irrevocably either put $1M in box 2 or put nothing in box 2.
- If Omega has put $1M in box 2, your payoff will be $1M if you 1-box or $1.001M if you 2-box.
- If Omega has put nothing in box 2, your payoff will be $0 if you 1-box or $1K if you 2-box.
- So, whatever Omega has already done, you are better off 2-boxing. And, your choice now cannot change what Omega has already done.
- So, you are better off 2-boxing (see the sketch below).
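Here is the dominance argument written out as a minimal sketch. It is my illustration; the payoff table is just Nozick's setup made explicit.

```python
# Dominance sketch: hold Omega's already-made move fixed and compare payoffs.
PAYOFFS = {
    # (state of box 2, your choice) -> payoff in dollars
    ("$1M",   "1-box"): 1_000_000,
    ("$1M",   "2-box"): 1_001_000,
    ("empty", "1-box"): 0,
    ("empty", "2-box"): 1_000,
}

# In each fixed state of the world, 2-boxing pays strictly more.
for state in ("$1M", "empty"):
    assert PAYOFFS[(state, "2-box")] > PAYOFFS[(state, "1-box")]
```

The whole argument, of course, turns on treating the state of box 2 as fixed and causally independent of the choice; one-boxers deny exactly that independence.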
So, basically, I agree with your assessment that "two-boxers believe that all 4 are possible" (or at least I believe that all 4 are possible). Why do I believe that all 4 are possible? Because nothing in the problem statement says otherwise.
ETA:
Also, I agree with your assessment that "one-boxers do not believe b and c are possible because Omega is cheating or a perfect predictor (same thing)". But, in thinking this way, one-boxers are reading something into the problem beyond what is actually stated or implied by Nozick.
Yep.
And, in the Maps of Meaning lecture series, Peterson gives a shout-out to Rowling's Harry Potter series as an excellent example of a retelling of an archetypal myth. So, it was a good choice of material for Yudkowsky to use as he did.
Were there really a lot of people in whom the SpaceX launch and the landing of the boosters inspired confusion and terror? I have not seen any of that. The reactions that I have observed have ranged all the way from indifference to (as you put it) a palpable zest, but I have not observed anyone who felt terror or confusion.