DanielFilan comments on Stupid Questions December 2014 - Less Wrong
Here's the thing: you obviously think that you dying is a bad thing. You apparently like living. Even if the probability of you dying were only 20%, I imagine you still wouldn't take the bet (in the single-world case) if the reward were only a few dollars, even though you would likely survive. This indicates that you care about possible futures where you don't exist - not in the sense that you care about people in those futures, but that you count those futures in your decision algorithm, and weigh them negatively. By analogy, I think you should care about branches where you die - not in the sense that you care about the welfare of the people in them, but that you should take those branches into account in your decision algorithm, and weigh them negatively.
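That weighting can be sketched as a toy expected-utility calculation (all utility numbers below are invented for illustration, not taken from the discussion):

```python
# Toy model of the suicide bet: weigh the branch where you die
# negatively alongside the branch where you survive and get paid.
# All numbers are illustrative assumptions.

def expected_utility(p_die, u_death, u_reward):
    """Decision value of the bet when death branches count negatively."""
    return p_die * u_death + (1 - p_die) * u_reward

# Even with only a 20% chance of dying, a few dollars of reward
# cannot offset a strongly negative weight on the death branch.
eu_take_bet = expected_utility(p_die=0.2, u_death=-1_000_000, u_reward=5)
eu_decline = 0.0  # declining the bet changes nothing

print(eu_take_bet)               # -199996.0
print(eu_take_bet < eu_decline)  # True
```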
I'm not sure what you can mean by this comment, especially "the whole problem". My arguments against discontinuity still apply even if you only have a superposition of two worlds, one with amplitude sqrt(x) and another with amplitude sqrt(1-x).
... I promise that you aren't going to be able to perform a test on a qubit that you can expect to tell you with 100% certainty that [...], even if you have multiple identical qubits.
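The quoted claim can be illustrated numerically: measuring finitely many identically prepared qubits only yields a frequency estimate of the underlying probability, never certainty. A minimal simulation sketch, with invented parameters:

```python
import random

def measure(x, n, seed):
    """Simulate n Born-rule measurements of a qubit whose amplitude on
    |0> is sqrt(x): each outcome is |0> with probability x. Returns the
    number of |0> outcomes observed."""
    rng = random.Random(seed)
    return sum(rng.random() < x for _ in range(n))

# Two nearby values of x can produce exactly the same measurement
# record, so no finite test pins down x with 100% certainty.
n = 1000
zeros_a = measure(0.500, n, seed=42)
zeros_b = measure(0.501, n, seed=42)
print(zeros_a, zeros_b)  # nearly indistinguishable frequency counts
```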
This wasn't my point. My point was that your preferences make huge value distinctions between universes that are almost identical (and in fact arbitrarily close to identical). Even though your value function is technically a function of the physical state of the universe, it might as well not be, because arbitrarily precise knowledge of the physical state of the universe still can't distinguish between types of universes that you value very differently. This seems intuitively irrational and crazy to me in and of itself, but YMMV.
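The discontinuity being objected to can be made concrete with a hypothetical valuation function (the function and its threshold are made up for illustration): two universes whose physical states differ by an arbitrarily small amount receive completely different values.

```python
def branch_value(survival_amplitude):
    """Hypothetical discontinuous valuation: a branch counts fully as
    long as the amplitude on survival is nonzero, and not at all when
    it is exactly zero -- however close the two states are."""
    return 1.0 if survival_amplitude > 0.0 else 0.0

# Physically near-identical universes, maximally different values:
print(branch_value(1e-300))  # 1.0
print(branch_value(0.0))     # 0.0
```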
I find it highly implausible that this should make a difference for your decision algorithm. Imagine that you could extend your life in all branches by a few seconds in which you are totally blissful. I imagine that this would be a pleasant change, and therefore preferable. In that blissful state, you can then contemplate what will happen next, and if my arguments go through, they would imply that your original decision was bad. So we have a situation where you used to prefer taking the bet to not taking it, but once we made the bet sweeter, you now prefer not taking it. This seems irrational.
I think it is actually well-defined? Right now, even if I were told that no multiverse exists, I would be pretty sure that I would continue living, even though I wouldn't be having experiences if I were dead. I think the problem here is that you are confusing my invocation of subjective probabilities about what will objectively happen next (while you're pondering what will happen next in your branch) with a statement about subjective experiences later.
I would be interested in reading your viewpoints about anthropics, should you publish them. That being said, given that you don't take the suicide bet in the single-world case, I think that we probably don't.