Dmitriy Vasilyuk


That's perfect, I was thinking along the same lines, with a range of options available for sale, but I didn't do the math and so didn't realize the necessity of dual options. And you are right, of course, that there's still quite a bit of arbitrariness left. In addition to varying the distribution of options there is, for example, freedom to choose what metric the forecasters are supposed to optimize. It doesn't have to be EV; in fact, in real life it rarely should be EV, because that ignores risk aversion. Instead we could optimize some utility function that becomes flatter for larger gains, for example by using Kelly betting (maximizing expected log wealth).
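As a rough illustration of that last point, here is a minimal sketch (my own example with made-up numbers, not part of the original discussion) comparing the stake that maximizes EV with the stake that maximizes expected log wealth, which is what Kelly betting optimizes, for a repeated even-odds bet on an event the forecaster assigns probability p:

```python
import math

def expected_value(p, fraction):
    """Expected profit per unit bankroll when staking `fraction` at even odds."""
    return p * fraction - (1 - p) * fraction

def expected_log_wealth(p, fraction):
    """Expected log of final wealth (the Kelly objective) for the same bet."""
    return p * math.log(1 + fraction) + (1 - p) * math.log(1 - fraction)

p = 0.6                                    # hypothetical credence in the event
fractions = [i / 100 for i in range(100)]  # candidate stakes: 0.00, 0.01, ..., 0.99
best_ev = max(fractions, key=lambda f: expected_value(p, f))
best_kelly = max(fractions, key=lambda f: expected_log_wealth(p, f))
print(best_ev)     # 0.99: maximizing EV says stake (essentially) everything
print(best_kelly)  # 0.2: the Kelly fraction 2p - 1, much more risk-averse
```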

Learning that "I am in the sleeping beauty problem" (call that E) when there are N people who aren't is admittedly not the best scenario to illustrate how a normal update is factored into the SSA update, because E sounds "anthropicy". But ultimately there is not really much difference between this kind of E and the more normal sounding E* = "I measured the CMB temperature to be 2.7K". In both cases we have:

  1. Some initial information about the possibilities for what the world could be: (a) sleeping beauty experiment happening, N + 1 or N + 2 observers in total; (b) temperature of CMB is either 2.7K or 3.1K (I am pretending that physics ruled out other values already).
  2. The observation: (a) I see a sign by my bed saying "Good morning, you are in the sleeping beauty room"; (b) I see a print-out from my CMB apparatus saying "Good evening, you are in the part of spacetime where the CMB photons hit the detector with energies corresponding to 2.7K".

In either case you can view the observation as anthropic or normal. The SSA procedure doesn't care how we classify it, and I am not sure there is a standard classification. I tried to think of a possible way to draw the distinction, and the best I could come up with is:

Definition (?). A non-anthropic update is one based on an observation E that has no (or a negligible) bearing on how many observers in your reference class there are.

I wonder if that's the definition you had in mind when you were asking about a normal update, or something like it. In that case, the observations in 2a and 2b above would both be non-anthropic, provided N is big and we don't think that the temperature being 2.7K or 3.1K would affect how many observers there would be. If, on the other hand, N = 0 like in the original sleeping beauty problem, then 2a is anthropic. 

Finally, the observation that you survived the Russian roulette game would, on this definition, similarly be anthropic or not depending on who you put in the reference class. If it's just you it's anthropic, if N others are included (with N big) then it's not.

The definition in terms of "all else equal" wasn't very informative for me here.

Agreed, that phrase sounds vague; I think it can simply be omitted. All SSA is really trying to say is that P(E|i), where i runs over all possibilities for what the world could be, is not just 1 or 0 (as it would be in naive Bayes), but is determined by assuming that you, the agent observing E, are selected randomly from the set of all agents in your reference class (which exist in possibility i). So, for example, if half such agents observe E in a given possibility i, then SSA instructs you to set the probability of observing E to 50%. And in the special case of a 0/0 indeterminacy it says to set P(E|i) = 0 (bizarre, right?). Other than that, you are just supposed to do normal Bayes.
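To make that procedure concrete, here is a minimal sketch (my own illustration, not a formula from the literature) of the update SSA prescribes, with the 0/0 convention handled explicitly:

```python
def ssa_update(priors, class_size, observers_of_E):
    """Sketch of an SSA update.

    priors:         {world i: prior probability Pi}
    class_size:     {world i: number of reference-class observers in i}
    observers_of_E: {world i: how many of those observers see evidence E}
    """
    posterior = {}
    for i in priors:
        if class_size[i] == 0:
            q = 0.0  # the 0/0 convention: no reference-class observers => P(E|i) = 0
        else:
            q = observers_of_E[i] / class_size[i]  # chance a random class member sees E
        posterior[i] = priors[i] * q  # then it's just normal Bayes with these likelihoods
    total = sum(posterior.values())
    return {i: p / total for i, p in posterior.items()}

# Toy example: in world X half the reference class observes E, in world Y all of it does.
print(ssa_update({"X": 0.5, "Y": 0.5}, {"X": 2, "Y": 2}, {"X": 1, "Y": 2}))
# -> X gets ~1/3, Y gets ~2/3
```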

What you said about leading to UDT sounds interesting but I wasn't able to follow the connection you were making. And about using all possible observers as your reference class for SSA, that would be anathema to SSAers :)

You have described some bizarre issues with SSA, and I agree that they are bizarre, but that's what defenders of SSA have to live with. The crucial question is:

For the anthropic update, yes, but isn't there still a normal update?

The normal updates are factored into the SSA update. A formal reference would be the formula for P(H|E) on p.173 of Anthropic Bias, which is the crux of the whole book. I won't reproduce it here because it needs a page of terminology and notation; instead I will give an equivalent procedure, which will hopefully be more transparently connected with the usual verbal statement of SSA, such as the one given at https://www.lesswrong.com/tag/self-sampling-assumption:

SSA: All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.

That link also provides a relatively simple illustration of such an update, which we can use as an example:

Notice that unlike SIA, SSA is dependent on the choice of reference class. If the agents in the above example were in the same reference class as a trillion other observers, then the probability of being in the heads world, upon the agent being told they are in the sleeping beauty problem, is ≈ 1/3, similar to SIA.

In this case, the reference class is not trivial: it includes N + 1 or N + 2 observers (observer-moments, to be more precise; and N = one trillion), of which only 1 or 2 learn that they are in the sleeping beauty problem. The effect of learning new information (that you are in the sleeping beauty problem or, in our case, that the gun didn't fire for the umpteenth time) enters the SSA calculation as follows:

  • Call the information our observer learns E (in the example above E = you are in the sleeping beauty problem)
  • You go through each possibility for what the world might be according to your prior. For each such possibility i (with prior probability Pi) you calculate the chance Qi of having your observations E assuming that you were randomly selected out of all observers in your reference class (set Qi = 0 if there are no such observers).
  • In our example we have two possibilities: i = A, B, with Pi = 0.5. On A, we have N + 1 observers in the reference class, with only 1 having the information E that they are in the sleeping beauty problem. Therefore, QA = 1 / (N + 1) and similarly QB = 2 / (N + 2).
  • We update the priors Pi based on these probabilities: the lower the chance Qi of you having E in some possibility i, the more strongly you penalize it. Specifically, you multiply Pi by Qi. At the end, you normalize all probabilities by the same factor to make sure they still add up to 1. To skip this last step, we can work with odds instead.
  • In our example the original odds of 1:1 then update to QA:QB, which is approximately 1:2, as the above quote says when it gives "≈ 1/3" for A (a quick numerical check is sketched below).
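For completeness, here is that arithmetic spelled out, using only the numbers already in the example:

```python
N = 10 ** 12            # the trillion other observers in the reference class
P_A = P_B = 0.5         # prior odds of 1:1 between the heads world A and the tails world B
Q_A = 1 / (N + 1)       # on A, 1 of the N + 1 class members learns E
Q_B = 2 / (N + 2)       # on B, 2 of the N + 2 class members learn E
posterior_A = P_A * Q_A / (P_A * Q_A + P_B * Q_B)
print(posterior_A)      # ~0.333, i.e. updated odds of roughly 1:2, matching the "≈ 1/3"
```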

So if you use the trivial reference class, you will give everything the same probability as your prior, except for eliminating worlds where no one has your epistemic state and renormalizing. You will expect to violate Bayes' law even in normal situations that don't involve any birth or death. I don't think that's how it's meant to work.

In normal situations, using the trivial class with the above procedure is fine, with the following proviso: assume the world is small or, alternatively, restrict the class further by only including observers on our Earth, say, or in our galaxy. In either case, if you ensure that at most one person (you) belongs to the class in every possibility i, then the above procedure reproduces the results of applying normal Bayes.

If the world is big and has many copies of you then you can't use the (regular) trivial reference class with SSA; you will get ridiculous results. A classic example of this is observers (versions of you) measuring the temperature of the cosmic microwave background, with most of them getting correct values but a small, non-zero number getting incorrect values due to random fluctuations. Knowing this, our measurement of, say, 2.7K wouldn't change our credence in 2.7K vs some other value if we used SSA with the trivial class of copies of you who measured 2.7K. That's because even if the true value were, say, 3.1K, there would still be a non-zero number of you's who measured 2.7K.

To fix this issue we would need to include in your reference class whoever has the same background knowledge as you, irrespective of whether they made the same observation E you made. So all you's who measured 3.1K would then be in your reference class. Then the above procedure would have you severely penalize the possibility i that the true value is 3.1K, because Qi would then be tiny (most you's in your reference class would be ones who measured 3.1K).
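Here is a rough numerical sketch of that contrast, with a made-up value for the fraction of copies whose apparatus misreads the temperature:

```python
eps = 1e-6  # assumed fraction of copies who get a wrong reading due to fluctuations
priors = {"true value 2.7K": 0.5, "true value 3.1K": 0.5}

def ssa_posterior(Q):
    """Multiply priors by Q (the fraction of the reference class sharing your reading)."""
    unnorm = {i: priors[i] * Q[i] for i in priors}
    total = sum(unnorm.values())
    return {i: w / total for i, w in unnorm.items()}

# Trivial class: only copies of you who measured 2.7K. In either world, 100% of
# that class measured 2.7K, so nothing updates and the measurement is wasted.
print(ssa_posterior({"true value 2.7K": 1.0, "true value 3.1K": 1.0}))

# Wider class: everyone with your background knowledge, whatever they measured.
# Q is now the fraction of that class who got the 2.7K reading that you got.
print(ssa_posterior({"true value 2.7K": 1.0 - eps, "true value 3.1K": eps}))
# -> the 3.1K possibility is severely penalized, as described above
```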

But again, I don't want to defend SSA, I think it's quite a mess. Bostrom does an amazing job defending it but ultimately it's really hard to make it look respectable given all the bizarre implications imo. 

Can you spell that out more formally? It seems to me that so long as I'm removing the corpses from my reference class, 100% of people in my reference class remember surviving every time so far just like I do, so SSA just does normal Bayesian updating.

Sure, as discussed for example here: https://www.lesswrong.com/tag/self-sampling-assumption, if there are two theories, A and B, that predict different (non-zero) numbers of observers in your reference class, then on SSA that doesn't matter. Instead, what matters is what fraction of observers in your reference class have the observations/evidence you do. In most of the discussion from the above link, those fractions are 100% on either A or B, resulting, according to SSA, in your posterior credences being the same as your priors.

This is precisely the situation we are in for the case at hand, namely when we make the assumptions that:

  • The reference class consists of all survivors like you (no corpses allowed!)
  • The world is big (so there is a non-zero number of survivors on both A and B).

So the posteriors are again equal to the priors and you should not believe B (since your prior for it is low).

I did mean to use the trivial reference class for the SSA assessment, just not in a large world. And it still seems strange to me that how large the world is would change the conclusion here.

I completely agree, it seems very strange to me too, but that's what SSA tells us. For me, this is just one illustration of serious problems with SSA, and an argument for SIA. 

If your intuition says not to believe B even if you know the world is small, then SSA doesn't reproduce it either. But note that if you don't know how big the world is, you can, using SSA, conclude that you now disbelieve the combination small world + A, while keeping the odds of the other three possibilities the same (relative to one another) as the prior odds. So basically you could now say: I still don't believe B, but I now believe the world is big.
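One way to formalize that is to split "small world + A" by whether any survivor with your memories exists at all. A minimal sketch with my own toy numbers, assuming a per-round survival chance of 5/6 under A:

```python
n = 20                  # rounds survived so far
p_survive = 5 / 6       # per-round survival probability if the gun is ordinary (A)

priors = {"small+A": 0.25, "big+A": 0.25, "small+B": 0.25, "big+B": 0.25}

# Qi = chance that a randomly selected member of your reference class (survivors
# with your memories) has your observations. For small+A we fold in the chance
# that any such survivor exists at all; in the other three worlds survivors are
# guaranteed and all of them share your observations.
Q = {"small+A": p_survive ** n, "big+A": 1.0, "small+B": 1.0, "big+B": 1.0}

unnorm = {h: priors[h] * Q[h] for h in priors}
total = sum(unnorm.values())
print({h: round(w / total, 3) for h, w in unnorm.items()})
# -> small+A is almost eliminated; big+A, small+B and big+B keep their 1:1:1 odds
```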

Finally, as I mentioned, I don't share your intuition: I believe B over A if these are the only options. If we are granting that my observations and memories are correct, and the only two possibilities are that I just keep getting incredibly lucky OR "magic", then with every shot I'm becoming more and more convinced of magic.

Reference class issues.

SSA, because that one me is also 100% of my reference class.

I think it's not necessarily true that on SSA you would also have to believe B, because the reference class doesn't necessarily have to involve just you. Defenders of SSA often have to face the problem/feature that different choices of a reference class yield different answers. For example, in Anthropic Bias Bostrom argues that it's not very straightforward to select the appropriate reference class: some are too wide and some (such as the trivial reference class) are often too narrow.

The reference class you are proposing for this problem, just you, is even narrower than the trivial reference class (which includes everybody in your exact same epistemic situation, so that you couldn't tell which one you are). It's arguably not the correct reference class, given that even the trivial reference class is often too narrow.

Reproducing your intuitions.

It seems to me that your intuition of not wanting to keep playing can actually be reproduced by using SSA with a more general reference class, along with some auxiliary assumptions about living in a sufficiently big world. This last assumption is pretty reasonable given that the cosmos is quite likely enormous or infinite. It implies that there are many versions of Earth involving this same game, where a copy of you (or just some person, if you wish to widen the reference class beyond the trivial one) participates in many repetitions of the Russian roulette, along with other participants who die at a rate of 1 in 6 per round.

In that case, after every game, 1 in 6 of you die in the A scenario, and 0 in the B scenario, but in either scenario there are still plenty of "you"s left, and so SSA would say you shouldn't increase your credence in B (provided you remove your corpses from your reference class, which is perfectly fine a la Bostrom).
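A minimal sketch of that calculation, with made-up numbers for the priors and the number of copies:

```python
n_rounds = 10
copies = 10 ** 9                    # assumed number of copies of you across the big world

priors = {"A: ordinary gun": 0.999, "B: magic": 0.001}

# Survivors after n_rounds under each hypothesis (corpses get removed from the class).
survivors = {"A: ordinary gun": copies * (5 / 6) ** n_rounds, "B: magic": copies}

# SSA: in both worlds the class of survivors is non-empty and 100% of its members
# remember surviving every round, exactly as you do, so Q = 1 for both hypotheses.
Q = {h: 1.0 if survivors[h] > 0 else 0.0 for h in priors}

unnorm = {h: priors[h] * Q[h] for h in priors}
total = sum(unnorm.values())
print({h: w / total for h, w in unnorm.items()})  # identical to the priors: no shift toward B
```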

My take on the answer.

That said, I don't actually share your intuition for this problem. I would think, conditional on my memory being reliable etc., that I have better and better evidence for B with each game. Also, I fall on the side of SIA, in large part because of the weird reference class issues involved in the above analysis. So to my mind, this scenario doesn't actually create any tension.

Answer by Dmitriy Vasilyuk

I find this question really interesting. I think the core of the issue is the first part:

First, how can we settle who has been a better forecaster so far? 

I think a good approach would be betting-related. I believe different reasonable betting schemes are possible, which in some cases will give conflicting answers when ranking forecasters. Here's one reasonable setup:

  • Let A = the probability the first forecaster, Alice, assigns to some event.
  • Let B = the probability the second forecaster, Bob, assigns (suppose B > A without loss of generality).
  • Define what's called an option: basically a promissory note to pay 1 point if the event happens, and nothing otherwise.
  • Alice will write and sell N such options to Bob for price P each, with N and P to be determined.
  • Alice's EV is positive if P > A (she expects to pay out A points per option on average).
  • Bob's EV is positive if P < B (he expects to be paid B points per option on average).

A specific scheme can then stipulate how to determine N and P. After that, comparing forecasters after a number of events would just translate to comparing points.

As a simple illustration (without claiming it's great), here's one possible scheme for P and N:

  • Alice and Bob split the difference and set P = (A + B)/2.
  • N = 1.

One drawback of that scheme is that it doesn't punish a forecaster very much for erroneously assigning a probability of 0% or 100% to an event.
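As an illustration (with hypothetical numbers and a helper name of my own), here is the simple scheme above turned into a settlement function; the forecaster with the lower probability writes the single option and sells it at the midpoint price:

```python
def settle(p_alice: float, p_bob: float, event_happened: bool) -> float:
    """Return Bob's net points (Alice's net is the negative) for N = 1, P = (A + B)/2."""
    price = (p_alice + p_bob) / 2               # split the difference
    payout = 1.0 if event_happened else 0.0     # the option pays 1 point if the event occurs
    buyer_net = payout - price                  # the buyer pays the price, receives the payout
    # The forecaster with the higher probability is the buyer; flip the sign if that's Alice.
    return buyer_net if p_bob >= p_alice else -buyer_net

# Made-up example: Alice says 10%, Bob says 70%.
print(settle(0.10, 0.70, True))    # Bob (the buyer) gains 0.6 points if the event happens
print(settle(0.10, 0.70, False))   # and loses 0.4 points if it doesn't
print(settle(0.0, 1.0, False))     # an erroneous 100% forecast only costs Bob 0.5 points
```

The last line shows the drawback just mentioned: under this scheme an extreme, badly wrong forecast loses only half a point.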

A different structure of the whole setup would involve not two forecasters betting against each other, but each forecaster betting against some "cosmic bookie". I have some ideas how to make that work too.

And what does this numerical value actually mean, as landing on Mars is not a repetitive random event, nor is it a quantity which we can try measuring like the radius of Saturn?

I don't see how we could assign some canonical meaning to this numerical value. For every forecaster there can always be a better one in principle, who takes into account more information, does more precise calculations, and happens to have better priors (until we reach the level of Laplace's demon, at which point probabilities might just degenerate into 0 or 1). 

If that's true, then such a numerical value would seem to be just a subjective property specific to a given forecaster: it's whatever that forecaster assigns to the event and uses to estimate how many points (or whatever other metric she cares about) she will have in the future.