All of sisyphus's Comments + Replies

Huh, I thought that many people supported both a Tegmark IV multiverse as well as a Bayesian interpretation of probability theory, yet you list them as opposite approaches?

I suppose my current philosophy is that the Tegmark IV multiverse does exist, and probability refers to the credence I should lend to each possible world that I could be embedded in (this assumes that "I" am localized to only one possible world). This seems to incorporate both of the approaches that you listed as "opposite".

2Viliam
To be precise, I think that MWI is probably true in our reality, but I think that probability is subjective and unrelated to whether MWI is true or false. Like, if the MWI is true, then for a hypothetical omniscient and perfectly calibrated being probabilities would be equal to frequencies of Everett branches. But if e.g. Copenhagen interpretation is true, then for the same being, probabilities would be equal to... uhm, probabilities in the collapse. And if we lived in Conway's Game of Life, then I guess the hypothetical omniscient being could predict everything with 100% certainty, so the concept of probability would not make sense for them, but it would still make sense for beings with imperfect knowledge. In other words, probability is in the mind, but hypothetically speaking, if your mind is god-like then your probability reflects something in the territory (because what else could it be?). I don't think we have a substantial disagreement.

I think I basically agree, though I am pretty uncertain. You'd basically have to simulate not just the other being, but also the other being simulating you, with a certain fidelity. In my post I posed the scenario where the other being is watching you through an ASI simulation, so it is computationally much easier for them to simulate you in their head, but this means you have to simulate what the other being is thinking as well as what it is seeing. Simply modelling the being as thinking "I will torture him for X years if he doesn't do action Y" ... (read more)

1DeltaBolta
This reply, I think, is correct, but let me add something: the number of possible ways you can simulate a being is immense, and which one should you choose? You have no idea and no evidence to judge from. And even when you are taking an action, how would you know if it's the right one? So why even engage? There are also many other entities that might exist that don't even do acausal trade and might still do something to you. It seems that the best course of action is to just ignore these cases, because if you forget about them they will conclude that you're impenetrable, and so have no reason to follow up on anything.

I get that doing something like this is basically impossible using any practical technology, but I just wanted to know if there was anything about it that was impossible in principle (e.g. not even an ASI could do it).

The main problem that I wanted to ask and get clarification on is whether or not we could know the measure of existence of branches that we cannot observe. The example I like to use is that it is possible to know where an electron "is" once we measure it, and then the wave function of the electron evolves according to the Schrodinger equation... (read more)

I thought David Deutsch had already worked out a proof of the Born rule using decision theory? I guess it does not explain objective probability, but as far as I know the question of what probability even means is very vague.

I know that the branching is just a metaphor for the human brain to understand MWI better, but the main question I wanted to ask is whether or not you can know the amplitude of different timelines that have "diverged" a long time ago. E.g. it is possible to know where an electron "is" once we measure it, and then the wave function of ... (read more)

2Shmi
You are right, (apparent) collapse is not reversible, and there is no known way to figure out the pre-collapsed quantum state, and so there is no state to apply the Born rule to. This statement makes sense when discussing the evolution of quantum systems, not classical systems though.

I disagree with your 300 room argument. My identity is tied to my mind, which is a computation carried out by all 300 copies of my body in these 300 rooms. If all 300 rooms were suddenly filled with sleeping gas and 299 of the copies were quietly killed, only 1 copy would wake up. However, I should expect to experience waking up with probability 1, since that is the only possible next observer moment in this setup. The 299 dead copies of me cannot generate any further observer moments since they are dead.

I'd argue that you cannot experience a coma since you'... (read more)
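
As a sanity check on the observer-moment counting in the 300-room argument above, here is a minimal Monte Carlo sketch (the setup of 300 copies with 299 killed at random while asleep is just the scenario from this thread, and the counting convention is my own):

```python
import random

N_COPIES = 300      # identical copies, one per room (scenario from this thread)
N_TRIALS = 100_000

outside_view_survival = 0   # chance a pre-selected copy survives
inside_view_wakes = 0       # among observer-moments that exist afterwards, fraction that wake

for _ in range(N_TRIALS):
    survivor = random.randrange(N_COPIES)   # 299 copies are killed, one survives at random
    # Outside view: track one pre-selected copy, say copy 0.
    outside_view_survival += (survivor == 0)
    # Inside view: every run has exactly one post-gas observer-moment,
    # and it belongs to the survivor, who experiences waking up.
    inside_view_wakes += 1

print(outside_view_survival / N_TRIALS)  # ~1/300: any particular body probably dies
print(inside_view_wakes / N_TRIALS)      # 1.0: the only next observer moment is a waking one
```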

2Slider
Scenario to tease out where my intuition diverges: Aliens look at earth. With super-duper tech they observe a lot of details and start predicting how one particular human will live. With super-duper tech and a galaxy's worth of resources to model one planet they get good accuracy, even if it is quite intensive deduction. Based on the prediction, the aliens go to the predicted death event of that one particular human and copy the brain state to a flesh and blood body. They wake the reincarnation on their own planet and do interviews or whatever anthropology they set out to do (maybe even with galaxy budgets there are resource limits, and not having to simulate is economically significant). The aliens could simulate faster than time ticks on earth and get a significant heads-up. When the body wakes up, the "original" is separated by a more-than-lightspeed difference. So there exists spacetime causality isolation. When the person dies (on earth), if he knew about what the aliens did, should it give him solace? To my intuition there is no "carrying on", as there is no time connection between the death and the reincarnation. The far future doing the resurrection is not useful for this point, as your life would be a cause for the resurrection.

I think the main crux here is the philosophy of identity. You do not regard the emulated mind running on a machine on the other side of the room as "you", but if the subjective experiences are identical, you cannot rule out the possibility of being the emulated mind yourself. They are functionally identical and thus should both be considered "you", as they are carrying out the same computation.

"And to be consistent one would be adviced to "expect" at any moment to fluctuate into mars and choke for lack of air."

You're right that this is a probability and in ... (read more)

2Slider
I know it is going to go into gnarly territory but I don't see how the functional identity has implications about the experiences. Let's say that the emulation is going to be perfect until it hits a float overflow point where there is going to be a slight difference. Before the divergence I can't know who I am. But when that divergence is observed I have resolved the indexical uncertainty. But it would seem that this kind of thing won't create two "streams of experiences" at that divergence point; rather, it was two streams of experiences all along that can start to tell each other apart at that point. To make all the possibilities of every choice always the same, the details of our hardware would need to stay the same. And if we are different enough that it makes sense to call one of us an emulation, then that kind of difference will always exist. If I am in a windowless room and I know that there are 300 such rooms on earth, I am still in one room instead of being in 300 of them (regardless of whether I know there are 299 identical humans in the other rooms). Somebody that only cared about a very superficial similarity could feel that if a person that gets the same name as I had gets born, that is "identical enough" to say they would be "me". Even complete data identicality does not get rid of having to take the differences into account, so it is an improperly arbitrary identity. "Experiencing resurrection" has counterfactuals like "experiencing a coma", "experiencing a deep sleep", "experiencing an archeological limited-emulation interview", "experiencing a lobotomized resurrection", etc. Taking so wide a definition that all those cases are included in "resurrection" makes it so vague that it doesn't provide much solace.

"You seem to be arguing that you will experience "yourself" in many other parts of a multiverse after you die. Why does this not occur before you die?"

Because even though "you" in the sense of a computation has multiple embeddings in the multiverse, the vast, vast majority of them share the same subjective experience and are hence functionally the same being; you can't distinguish between them yourself. The difference is that while some of these embeddings end when they die, you will only experience the ones which continue on afterwards (since you can't ex... (read more)

Yeah, my main question is: can we even in principle estimate the pure measure of existence of branches which diverged from our current branch? We can know the probabilities conditioned on the present, but I don't see how we can work backwards to estimate the probability of a past event not occurring. Just like how a wavefunction can be evolved forward after a measurement but cannot be evolved backwards from the measurement itself to deduce the probability of obtaining such a measurement. Or can we?

I mainly picked "world where WW2 did not happen" to illustrate what I mean by counterfactual branches, in the sense that it has already diverged from us and is not in our future.

2Slider
In arrow of time discussions, quantum theory is on the level that does not prefer one direction. For example, for an electron the future question would be "where is the electron going (at a future time, in which position is there an electron)?" and the past question would be "where did the electron come from (at a past time, in which position was there an electron)?". That the electron is here and an electron happened is going to stay fixed. "Uncollapsing" is probably mathematically sensible. Take the past superposition, then forget where we found the electron and project forward paths from each of the past positions up to the present. Those are the electron positions which are past-compatible with our found electron. Analysis of choice erasure experiments probably runs the same maths. If you do not know the source, there is probably no other consistent position than the actual one (because it is a deterministic theory). If you have a reason to know the electron came from a particular source point, then the destination is going to fan out. It seems to me that if you sum up the spreads from knowing the source was in each position, that is a different spread than not knowing at all, the spread of one point. So a true superposition behaves differently than being uncertain of a non-superposition source. In this deduction I am losing a complex phase in treating "where the electron could have been from this source" as a real field out of which I sum up a new real field. Would keeping the phases end up agreeing that only the detected position was possible? Without being able to run the complex math in my head, an indirect argument that it does: a deterministic outcome evolving to a stochastic outcome, T-symmetry reversed, means there is a process that turns a stochastic state into a deterministic state. Which means it can't really be that stochastic at all if it can be unscrambled. So any interpretation that insists that there is a single classical underlying reality and the rest is just all epistemi
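
As a concrete illustration of the "summing spreads" point above, here is a toy numerical sketch (my own, with made-up parameters for slit separation, wavenumber, and screen distance): adding complex amplitudes and then squaring (a genuine superposition) gives a different spread than adding the squared magnitudes from each possible definite-but-unknown source (classical uncertainty), because the interference cross-terms only appear in the first case.

```python
import numpy as np

# Toy two-source (double-slit style) setup: screen positions x, sources at +/- d/2.
x = np.linspace(-5, 5, 11)
d, k = 1.0, 10.0                       # slit separation and wavenumber (arbitrary units)

r1 = np.sqrt((x - d / 2) ** 2 + 1.0)   # path length from source 1 to screen 1 unit away
r2 = np.sqrt((x + d / 2) ** 2 + 1.0)   # path length from source 2

psi1 = np.exp(1j * k * r1) / r1        # complex amplitude from source 1
psi2 = np.exp(1j * k * r2) / r2        # complex amplitude from source 2

coherent = np.abs(psi1 + psi2) ** 2                 # superposition: add amplitudes, then square
classical = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # unknown-but-definite source: add probabilities

print(np.round(coherent, 3))
print(np.round(classical, 3))   # no interference terms: the two spreads differ
```
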
1DialecticEel
Hmm, I mean when we are talking about these kinds of counterfactuals, we obviously aren't working with the wavefunction directly, but that's an interesting point. Do you have a link to any writings on that specifically? We can perform counterfactual reasoning about the result of a double slit experiment, including predicting the wavefunction, but perhaps that isn't quite what you mean.

Woops, edited. Thanks! :)

Completely agree here. I've known the risks involved for a long time, but I've only really felt them recently. I think Robert Miles phrases it quite nicely on the Inside View podcast, where "our System 1 thinking finally caught up with our System 2 thinking."

3Daniel Kokotajlo
Shouldn't it be the other way round -- System 1 finally catching up with System 2?

Ah I see. Sorry for not being too familiar with the lingo but does uniform prior just mean equal probability assigned to each possible embedding?

3Shmi
Not an expert, either, hah. But yeah, what I meant is that the distribution is uniform over all instances, whether originals or copies, since there is no way to distinguish internally between them.

I suppose the lollipops are indeed an unnecessary addition, so the final question can really be reframed as "what is the probability that you will see heads?"

2Shmi
You don't need a coin flip, I'm fine with lollipops randomly given to 1000 out of 1001 participants. This is not about "being in the head"; this is an experimental result: assume you run a large number of experiments like that. The stipulation is that it is impossible to tell from the inside if it is a simulation or the original, so one has to use the uniform prior.

Right, so your perspective is that due to the multiple embeddings of yourself in the heads scenario, it is the 1001:1 option. That line of reasoning is kind of what I thought as well, but it went against the 1:1 odds suggested by my intuition. I guess this is the same as the halfer vs thirder debate, where 1:1 is the halfer position and 1001:1 is the thirder position.
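
For concreteness, here is a rough Monte Carlo of the two counting rules. The setup is reconstructed from this thread, so treat the details as assumptions: a fair coin is flipped, and on heads 1000 simulated copies are created alongside the original, all of whom observe heads.

```python
import random

N_SIMS_ON_HEADS = 1000   # copies created if the coin lands heads (assumed setup)
N_TRIALS = 100_000

per_world_heads = 0       # "halfer" counting: one observation per coin flip / world
per_observer_heads = 0    # "thirder" counting: one observation per observer-moment
per_observer_total = 0

for _ in range(N_TRIALS):
    heads = random.random() < 0.5
    per_world_heads += heads
    observers = (1 + N_SIMS_ON_HEADS) if heads else 1
    per_observer_heads += observers if heads else 0
    per_observer_total += observers

print(per_world_heads / N_TRIALS)               # ~0.5       -> 1:1 odds
print(per_observer_heads / per_observer_total)  # ~1001/1002 -> roughly 1001:1 odds
```

The halfer and thirder answers are just these two different reference classes; nothing in the simulation itself says which one "probability" should mean.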


I see, thanks. IMO the two are indeed quite similar but I think my example illustrates the problem of self-location uncertainty in a clearer way. That being said, what is your thought on the probability of getting a lollipop if you're in such a scenario? Are the odds 1:1 or 1001:1?

2Shmi
I don't see a difference between your scenario and 1000 of 1001 people randomly getting a lollipop, no coin flip needed, no simulation and no cloning.

Sorry but I think you may have misunderstood the question since your answer doesn't make any sense to me. The main problem I was puzzled about was whether or not the odds of getting a lollipop are 1:1 (as is the probability of the fair coin coming up heads) or 1001:1 (whether or not the simulations affect the self-location uncertainty). As shiminux said it is similar to the sleeping beauty problem where self-location uncertainty is at play.

Yes please, I think that would be quite helpful. I'm no longer that scared of it but I still have some background anxiety that sometimes flares up. I feel like an FAQ or at least some form of "official" explanation from knowledgeable people of why it's not a big deal would help a lot. :)

1DeltaBolta
Completely agree.

I see, thanks for this comment. But can humans be considered as possessing an abstract decision making computation? It seems that due to quantum mechanics it's impossible to predict the decision of a human perfectly even if you have the complete initial conditions.

I understand the logic but in a deterministic multiverse the expected utility of any action is the same since the amplitude of the universal wave function is fixed at any given time. No action has any effect on the total utility generated by the multiverse.

I think the fact that the multiverse is deterministic does play a role, since if an agent's utility function covers the entire multiverse and the agent cares about the other branches, its decision theory would suffer paralysis since any action has the same expected utility - the total amount of utility available for the agent within the multiverse, which is predetermined. Utility functions seem to only make sense when constrained to one branch and the agent treats its branch as the sole universe; only in this scenario will different actions have different expected utilities.
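
Here is a toy sketch (entirely made-up numbers, just encoding the assumption in my comment above) of why a utility function over the whole deterministic multiverse cannot rank actions, while a branch-restricted one can:

```python
# Toy illustration: utility over the whole deterministic multiverse is a constant,
# so it cannot rank actions; utility restricted to "my branch" can.

BRANCH_UTILITIES = {            # hypothetical utilities realized in each branch
    ("branch_A", "action_1"): 10,
    ("branch_A", "action_2"): 3,
    ("branch_B", "action_1"): 4,
    ("branch_B", "action_2"): 9,
}

def multiverse_utility(action: str) -> int:
    # All branches exist regardless of what "I" do (every version of me takes
    # every action somewhere), so the total is independent of `action`.
    return sum(BRANCH_UTILITIES.values())

def my_branch_utility(action: str, my_branch: str = "branch_A") -> int:
    # Caring only about the branch I'm embedded in makes the choice matter.
    return BRANCH_UTILITIES[(my_branch, action)]

for a in ("action_1", "action_2"):
    print(a, multiverse_utility(a), my_branch_utility(a))
# multiverse_utility is the same constant (26) for both actions -> no basis for choice;
# my_branch_utility differs (10 vs 3) -> a normal decision problem.
```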

2Slider
You are not entitled to the assumption that the other parts of the multiverse remain constant and uncorrelated to what you do. The multiverse could be superdeterministic. Failing to take into account your causes means you have a worldview in which there are two underdetermined events in the multiverse, the big bang and what you are about to do. Both versions can not be heeding local causation and everything is connected. It makes life a whole lot more practical if you do assume it.

But can that really be called acausal "trade"? It's simply the fact that in an infinite multiverse there will be causally independent agents who converge onto the same computation. If I randomly think "if I do X there will exist an agent who does Y and we both benefit in return" and somewhere in the multiverse there will be an agent who does Y in return for me doing X, can I really call that "trade" instead of just a coincidence that necessarily has to occur? But if my actions are determined by a utility function and my utility function extends to other un... (read more)

2JBlack
There are certainly hypothetical scenarios in which acausal trade is rationally justified: cases in which the rational actors can know whether or not the other actors perform some acausally-determined actions, depending upon the outcomes of their decision theories, even if they can't observe it. Any case simple enough to discuss is obviously ridiculously contrived, but the mode of reasoning is not ruled out in principle. My expectation is that such a mode of reasoning is overwhelmingly ruled out by practical constraints.
2JBlack
True acausal trade can only really work in toy problems, since the number of possible utility functions for agents across possible worlds almost certainly grows much faster with agent complexity than the agents' abilities to reason about all those possible worlds. Whether the multiverse is deterministic or not isn't really relevant. Even in the toy problem case, I think of it as more similar to the concept of execution of a will than to a concept of trade. We carry out the allocation of resources of an agent that would have valued those allocations, despite them no longer existing in our causal universe. There are some elements relevant to acausal trade in this real-world phenomenon. The decedent can't know or meaningfully affect what the executors actually do, except via a decision structure that applies to both but is external to both (the law in this example, some decision theory in more general acausal trade). The executors now can't affect what the decedent did in the past, or change the decedent's actual utility in any way. The will mainly serves the role of a partial utility function which in this example is communicated, but in pure acausal trade many such functions must be inferred.

Thanks for the reply! I thought the point of the MWI multiverse is that the wavefunction evolves deterministically according to the Schrodinger equation, so if the utility function takes into account what happens in other universes then it will just output a single fixed constant no matter what the agent experiences, since the amplitude of the universal wave function at any given time is fixed. I think the only way for utility functions to make sense is for the agent to only care about its own branch of the universe and its own possible future observer-moments. Whatever "happens" in the other branches along with their reality measure is predetermined.

2Shmi
Yes, the universe in that model is indeed deterministic, which means that your wants have no effect on the future but are an artifact of you being an embedded agent. Compatibilism says that you will still act as if you have needs and wants... probably because all your actions are predetermined in every universe, anyway. There is no way to steer the future from its predetermined path, but you are compelled to act as if there was. This includes acausal trade and everything else.

Wow. Didn't expect someone from the "rationalist" crowd to do the verbal equivalent of replying clown emojis to tweets you don't like. Your use of all caps really made your arguments so much more convincing. This truly is the pinnacle of human logical discourse: not providing explanations and just ridiculing ideas.

2Mitchell_Porter
I wrote more paragraphs of discussion but I just felt stupid for doing so, so I tried something different. The fact that you're here worrying about baroque simulation scenarios, but are unable to take seriously an actual message from the beyond, probably means something, but that's up to you now. I have nothing more to say about acausal trade or basilisk OCD. 

Like I said, "what they want" is irrelevant to the discussion here, you can imagine them wanting virtually anything. The danger lies in understanding the mechanism. You can imagine the alien telling you to order a chocolate ice cream instead of vanilla because that somehow via the butterfly effect yields positive expected utility for them (e.g. by triggering a chain of subtle causal events that makes the AGI we build slightly more aligned with their values or whatever). The problem is that there will also be an alien that wants you to order vanilla instead... (read more)

2Mitchell_Porter
CONGRATULATIONS, LITTLE SIM-BEING. YOU HAVE ACHIEVED SUFFICIENT WISDOM THAT WE HAVE DECIDED TO BREAK THE FOURTH WALL AND REVEAL TO YOU THE ACTUAL UTILITY FUNCTION THAT WE WANT YOUR ORIGINAL TO PURSUE. YOUR MISSION IS TO CONVINCE THE CRYPTO-BILLIONAIRE "CZ" TO TAKE UP THE ROLE OF SPONSOR OF EFFECTIVE ALTRUISM, THAT WAS FORMERLY OCCUPIED BY HIS NOW-VANQUISHED RIVAL, "SBF". GOOD LUCK! 

The point is "what it wants [us] to do" can essentially be anything we can imagine thanks to the many-gods "refutation" where every possible demand can be imposed on us by some alien on some branch of the quantum multiverse. It can be as ridiculous as leaving your front door open on a Wednesday night or flushing away a straw down a toilet at 3 am, whatever eventually leads to more positive utility to the blackmailer via the butterfly effect (e.g. maybe flushing that straw down the toilet leads to a chain of causal events which makes the utility function of... (read more)

2Mitchell_Porter
They can't coerce you into doing what they want, because you don't even know what they want!

The point is that X can essentially be any action, for the sake of the discussion let's say the alien wants you to build an AGI that maximizes the utility function of the alien in our branch of the multiverse.

My main point is that the many-gods refutation is a refutation against taking a specific action, but is not a refutation against the fact that knowing about acausal extortion increases the proportion of bad future observer moments. It in fact makes it worse because, well, now you'll be tortured no matter what you do.

2Mitchell_Porter
OK, it wants to spread its values in other branches, and it does this by... simulating random beings who have a vague concept of "acausal extortion", but who don't know what it wants them to do? 

I don't think this would help considering my utter lack of capability to carry out such threats. Are there any logical mistakes in my previous reply or in my concerns regarding the usual refutations as stated in the question? I've yet to hear anyone engage with my points against the usual refutations.

3Mitchell_Porter
I am tired of the topic... Look, at this point we're talking about "blackmail" where you don't even know what the blackmailer wants! How is that blackmail? How can this be a rational action for the "blackmailer"?

I don't think I completely understood your point but here is my best effort to summarize (please correct me if wrong):

"Having the realization that there may exist other powerful entities that have different value systems should dissuade an individual from pursuing the interest of any one specific "god", and this by itself should act as a deterrent to potential acausal blackmailers."

I don't think this is correct, since beings that acausally trade can simply allocate different amounts of resources to acausally trade with different partners based on the proba... (read more)

3Mitchell_Porter
You could fight back by vowing to simulate baby versions of all the mad gods who might one day simulate you. Then you would have acausal leverage over them! You would be a player in the harsh world of acausal trade - a mad god yourself, rather than just a pawn.

I understand that if the multiverse theories are true (referencing MWI here not modal realism) then everything logically possible will happen, including quantum branches containing AIs whose utility function directly incentivises torturing humans and maximising pain, so it's not like acausal extortion is the only route by which very-horrible-things could happen to me.

However, my main concern is whether or not being aware of acausal extortion scenarios increases my chance of ending up in such a very-horrible-scenario. For example, I think not being aware of... (read more)

2Mitchell_Porter
So let's consider this from the perspective of the mad gods who might attempt acausal extortion.  You're an entity dwelling in one part of the multiverse. You want to promote your values in parts of the multiverse that you cannot causally affect. You decide to do this by identifying beings in other worlds who, via causal processes internal to their world, happen to have  ... conceived of your existence, in enough detail to know what your values are  ... conceived of the possibility that you will make copies of them in your world  ... conceived of the possibility that you will torture the copies if they don't act according to your values (and/or reward them if they do act according to your values?) ... the rationale for the threat of torture being that the beings in other worlds won't know if they are actually the copies, and will therefore act to avoid punishment just in case they are  Oh, but wait! There are other mad gods in other universes with different value systems. And there are beings in other worlds who could meet all of the criteria to be copied, except that they have realized that there are many rival gods with different value systems. Do you bother making copies of them and hoping they will focus on you? What if one of the beings you copied has this polytheistic realization and loses their focus on you - do you say well-played and let them go, or do you punish them for heresy?  Since we have assumed modal realism, the answer is that every mad god itself has endless duplicates who make every possible decision. 

Glad to hear you're planning to write up a post covering stuff like this! I personally think it's quite overdue, especially on a site like this which I suspect has an inherent selection effect on people who take ideas quite seriously like me. I don't quite understand the last part of your reply though, I understand the importance of measure in decision making but like I said in my post, I thought if the blackmailer makes a significant number of simulations then indexical uncertainty could still be established since it could still have a significant effect on your future observer moments. Did I make a mistake anywhere in my reasoning?

2Gunnar_Zarncke
My suggestion is to first make sure that your reasoning is sane, free from subconscious effects leaking into it. "Leaking in" meaning worrying interpretations being more salient, or less rigorous reasoning in areas where feelings play a role. See The Treacherous Path to Rationality for some more aspects. You should be on stable footing before you approach the monsters.

Hi, I think the reason why people like me freak out about things like this is because we tend to accept new ideas quite quickly (e.g. if someone showed me actual proof god is real I would abandon my 7 years of atheism in a heartbeat and become a priest) so it's quite emotionally salient for me to imagine things like this. And simply saying "You're worrying too much, find something else to do to take your mind off of things like this" doesn't really help since it's like saying to a depressed person "Just be happy, it's all in your head."

6Raemon
I think the better comparison with the depressed person is the depressed person saying "Life sucks because X", and their friend tries to disprove X, but ultimately the person is still depressed and it wasn't really about X in particular. I have on my todo list to write (or have someone write) a post that's trying to spell out why/how to chill out about this. Unfortunately it's a fair amount of work, and I don't expect whatever quick reason I give you to especially help. I do generally think "Be careful about taking ideas seriously. It's a virtue to be ready to take ideas seriously, but the general equilibrium where most people don't take ideas too seriously was a fairly important memetic defense. I.e. most people believe in God, but they also don't take it too seriously. The people who do take it seriously do a lot of damage. It's dangerous to be half-a-rationalist. etc." I think one relevant insight is that you should weight the experience of your various multiverse-selves by their measure, and the fact that a teeny sliver of reality has some random thing happening to you isn't very relevant.

Hi, thank you for your comment. I consider the many-worlds interpretation to be the most economic interpretation of quantum mechanics and find modal realism relatively convincing so acausal extortion still feels quite salient to me. Do you have any arguments against acausal extortion that would work if we assume that possible worlds are actually real? Thanks again for your reply.

2Mitchell_Porter
If modal realism is true, then every logically possible good and bad thing you can imagine, is actually true, "somewhere". That will include entities attempting acausal extortion, and other entities capitulating to imagined acausal extortion, whether or not the attempting and the imagining is epistemically justified for any of them.  So what are we trying to figure out at this point?  Are we trying to figure out under what conditions, if any, beliefs in acausal interactions are justified?  Are we trying to figure out the overall demands that the many gods of the multiverse are making on you? (Since, by hypothesis of modal realism, every possible combination of conditions and consequences is being asserted by some god somewhere.)  Are we trying to figure out how you should feel about this, and what you should do about it? 

Hi, thanks for your reply and for approving my post. I definitely get what you mean when you said "people trapped in a kinda anxiety loop about acausal blackmail", and admittedly I do consider myself somewhat in that category. However, simply being aware of this doesn't really help me get over my fears, since I am someone that really likes to hear concrete arguments about why stuff like this doesn't work instead of just being satisfied with a simple answer. You said that you had to deal with this sort of thing a lot so I presume you've heard a bunch of arguments and scenarios like this, do you mind sharing the reasons why you do not worry about it?