Comment author: Mallah 07 April 2010 05:50:08PM *  0 points [-]

Actually, if we consider that you could have been an observer-moment either before or after the killing, finding yourself to be after it does increase your subjective probability that fewer observers were killed. However, this effect goes away if the amount of time before the killing was very short compared to the time afterwards, since you'd probably find yourself afterwards in either case; and the case we're really interested in, the SIA, is the limit when the time before goes to 0.

I just wanted to follow up on this remark I made. There is a subtle anthropic selection effect that I didn't include in my original analysis. As we will see, the result I derived applies if the time after the killing is long enough, as in the SIA limit.

Let the amount of time before the killing be T1, and the time after it (until all observers die) be T2. So if there were no killing, P(after) = T2/(T2+T1): the total measure of observer-moments after the killing divided by the total measure (after + before).

If the 1 red observer is killed (heads), then P(after|heads) = 99 T2 / (99 T2 + 100 T1)

If the 99 blue observers are killed (tails), then P(after|tails) = 1 T2 / (1 T2 + 100 T1)

P(after) = P(after|heads) P(heads) + P(after|tails) P(tails)

For example, if T1 = T2, we get P(after|heads) = 0.497, P(after|tails) = 0.0099, and P(after) = 0.497 (0.5) + 0.0099 (0.5) = 0.254

So here P(tails|after) = P(after|tails) P(tails) / P(after) = 0.0099 (.5) / (0.254) = 0.0195, or about 2%. So here we can be 98% confident to be blue observers if we are after the killing. Note, it is not 99%.

Now, in the relevant-to-SIA limit T2 >> T1, we get P(after|heads) ~ 1, P(after|tails) ~1, and P(after) ~1.

In this limit P(tails|after) = P(after|tails) P(tails) / P(after) ~ P(tails) = 0.5

So the SIA is false.
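The two limits above can be checked numerically. A minimal sketch (my own illustration; `p_tails_after` is a hypothetical helper name, not from the thread):

```python
# 1 red observer, 99 blue observers; heads kills the red one, tails kills the blues.
# t1 = time before the killing, t2 = time after it.
def p_tails_after(t1, t2, n_red=1, n_blue=99):
    n = n_red + n_blue
    p_after_heads = n_blue * t2 / (n_blue * t2 + n * t1)
    p_after_tails = n_red * t2 / (n_red * t2 + n * t1)
    p_after = 0.5 * p_after_heads + 0.5 * p_after_tails
    # Bayes: P(tails|after) = P(after|tails) P(tails) / P(after)
    return p_after_tails * 0.5 / p_after

print(round(p_tails_after(1, 1), 4))      # T1 = T2: about 0.0195, i.e. ~2%
print(round(p_tails_after(1, 10**6), 3))  # T2 >> T1: approaches 0.5
```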

Comment author: Mallah 06 April 2010 07:33:25PM 1 point [-]

the justification for reasoning anthropically is that the set Ω of observers in your reference class maximizes its combined winnings on bets if all members of Ω reason anthropically

That is a justification for it, yes.

When most of the members of Ω arise from merely non-actual possible worlds, this reasoning is defensible.

Roko, on what do you base that statement? Non-actual observers do not participate in bets.

The SIA is not an example of anthropic reasoning; anthropic implies observers, not "non-actual observers".

See this post for an example of the difference, showing why the SIA is false.

Comment author: Mallah 31 March 2010 06:01:33PM 0 points [-]

Sounds cool. I'm from NYC, but no longer live there. I was a member of atheist clubs in college, but I'd bet that post-college (or any, really) rationalists have a hard time meeting others of similar views.

In response to Disambiguating Doom
Comment author: Mitchell_Porter 31 March 2010 01:58:55AM *  2 points [-]

I am very skeptical about the SIA, but I've always respected the doomsday argument, and lately I wonder if Bill Joy-style luddism is the right response.

If there's a great filter ahead it is far more likely to be involved with the advanced technologies which are meant to make galactic civilization possible in the first place, rather than some unanticipated tripwire in the natural world. So if we interpret the doomsday argument as information about the danger of these advanced technologies - if we do this, we are overwhelmingly likely to die - then isn't the logical action just to fight them down at every opportunity, rather than trying to be lucky by being ultra-smart about how we develop and deploy them? Yes, if we don't go there we forego a future of cosmic expansion, but if such a future is overwhelmingly unlikely, then the rational thing to do may be precisely to stay within our own little bubble here in this solar system.

ETA: One other observation: Those hoping for a really long future lifespan may feel aggrieved by a civilizational strategy which seems to eschew the technologies you would need for radical life extension. In this regard I have noticed one thing. Suppose you had a civilization whose members stopped reproducing but which all lived for a million years. At the very beginning of those million years they might discover the doomsday argument and conclude that no-one would get to live so long. But if you are going to live for a million years, you first have to live for ten years, fifty years, a hundred years, and so on. So it is inevitable that such erroneous ideas would arise early. However, if you not only live for a million years, but plan on expanding into the universe and having lots of descendants who also live that long, then this argument is no longer valid, because the majority of observer-moments should still be in the distant future rather than back here on the planet of origin. Therefore, I see some hope that you can have very long lifespans without risking doom, if your society explicitly stops creating new observers. Though I have to think that the technologies for radical life extension are intrinsically threatening anyway; it would require remarkable discipline to have rejuvenating biotechnology or a solid-state platform for consciousness, and not to develop dangerous forms of nanotechnology and artificial intelligence.

Comment author: Mallah 31 March 2010 03:59:27AM 0 points [-]

I am very skeptical about SIA

Rightly so, since the SIA is false.

The Doomsday argument is correct as far as it goes, though my view is that the most likely filter is environmental degradation plus problems with AI.

Comment author: Mallah 30 March 2010 05:15:31PM 0 points [-]

Another reason I wouldn't put any stock in the idea that animals aren't conscious is that the complexity cost of a model in which we are conscious and they (other animals with complex brains) are not is many bits of information. 20 bits gives a prior probability factor of 10^-6 (2^-20). I'd say that would outweigh the larger # of animals, even if you were to include the animals in the reference class.

Comment author: JohannesDahlstrom 29 March 2010 10:10:24PM *  9 points [-]

The probability of a randomly picked currently-living person having a Finnish nationality is less than 0.001. I observe myself being a Finn. What, if anything, should I deduce based on this piece of evidence?

The results of any line of anthropic reasoning are critically sensitive to which set of observers one chooses to use as the reference class, and it's not at all clear how to select a class that maximizes the accuracy of the results. It seems, then, that the usefulness of anthropic reasoning is limited.

Comment author: Mallah 30 March 2010 04:28:01PM 2 points [-]

That kind of anthropic reasoning is only useful in the context of comparing hypotheses, Bayesian style. Conditional probabilities matter only if they are different given different models.

For most possible models of physics, e.g. X and Y, P(Finn|X) = P(Finn|Y). Thus, that particular piece of info is not very useful for distinguishing models for physics.

OTOH, P(21st century|X) may be >> P(21st century|Y). So anthropic reasoning is useful in that case.
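To make the point concrete, here is a small Bayes-update sketch (the numbers are made up purely for illustration):

```python
def posterior_x(prior_x, lik_x, lik_y):
    """Posterior probability of model X after evidence with the given likelihoods."""
    prior_y = 1 - prior_x
    return prior_x * lik_x / (prior_x * lik_x + prior_y * lik_y)

# Being a Finn is (roughly) equally likely under either model of physics:
print(posterior_x(0.5, 0.001, 0.001))          # 0.5 -- no update
# Living in the 21st century may be far more likely under X than under Y:
print(round(posterior_x(0.5, 0.1, 0.001), 3))  # 0.99 -- strong update toward X
```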

As for the reference class, "people asking these kinds of questions" is probably the best choice. Thus I wouldn't put any stock in the idea that animals aren't conscious.

Comment author: Mallah 30 March 2010 03:34:33AM -1 points [-]

A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?

Here, the probability is certainly 99%.

Sure.

B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?

There should be no difference from A; since your odds of dying are exactly fifty-fifty whether you are blue-doored or red-doored, your probability estimate should not change upon being updated.

Wrong. Your epistemic situation is no longer the same after the announcement.

In a single-run (one-small-world) scenario, the coin has a 50% chance to come up heads or tails. (In the MWI, or a large universe with similar situations, it would come up both ways, which changes the results. The MWI predictions match yours, but they don't back the SIA.) Here I assume the single-run case.

The prior for the coin result is 0.5 for heads, 0.5 for tails.

Before the killing, P(red|heads) = P(red|tails) = 0.01 and P(blue|heads) = P(blue|tails) = 0.99. So far we agree.

P(red|before) = 0.5 (0.01) + 0.5 (0.01) = 0.01

Afterwards, P'(red|heads) = 0, P'(red|tails) = 1, P'(blue|heads) = 1, P'(blue|tails) = 0.

P(red|after) = 0.5 (0) + 0.5 (1) = 0.5

So after the killing, you should expect either color door to be 50% likely.

This, of course, is exactly what the SIA denies. The SIA is obviously false.
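The single-run update above can be written out as a few lines of arithmetic (my own sketch of the calculation, not from the original comment):

```python
# Before the killing: 1 red door, 99 blue doors; the coin is fair.
p_heads = p_tails = 0.5
# After the killing: heads kills red (only blues survive), tails kills all blues.
p_red_given_heads_after = 0.0
p_red_given_tails_after = 1.0
p_red_after = p_heads * p_red_given_heads_after + p_tails * p_red_given_tails_after
print(p_red_after)  # 0.5
```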

So why does the result seem counterintuitive? Because in practice, and certainly in the environment where we evolved and were trained, single-shot situations didn't occur.

So let's look at the MWI case. Heads and tails both occur, but each with 50% of the original measure.

Before the killing, we again have P(heads) =P(tails) = 0.5

and P(red|heads) = P(red|tails) = 0.01 and P(blue|heads) = P(blue|tails) = 0.99.

Afterwards, P'(red|heads) = 0, P'(red|tails) = 1, P'(blue|heads) = 1, P'(blue|tails) = 0.

Huh? Didn't I say it was different? It sure is, because afterwards, we no longer have P(heads) = P(tails) = 0.5. On the contrary, most of the conscious measure (# of people) now resides behind the blue doors. We now have for the effective probabilities P(heads) = 0.99, P(tails) = 0.01.

P(red|after) = 0.99 (0) + 0.01 (1) = 0.01
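As a sketch of the MWI case (again my own illustration): the branch weights are re-weighted by surviving measure, and the red-door probability comes back to 1%:

```python
# Each branch starts with measure 0.5; survivors: 99 blues (heads) or 1 red (tails).
m_heads = 0.5 * 99   # measure surviving in the heads branch
m_tails = 0.5 * 1    # measure surviving in the tails branch
p_heads_eff = m_heads / (m_heads + m_tails)   # effective P(heads) = 0.99
p_tails_eff = m_tails / (m_heads + m_tails)   # effective P(tails) = 0.01
p_red_after = p_heads_eff * 0.0 + p_tails_eff * 1.0
print(p_red_after)  # 0.01
```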

In response to The I-Less Eye
Comment author: Mallah 30 March 2010 02:46:05AM *  1 point [-]

rwallace, nice reductio ad absurdum of what I will call the Subjective Probability Anticipation Fallacy (SPAF). It is somewhat important because the SPAF seems much like, and may be the cause of, the Quantum Immortality Fallacy (QIF).

You are on the right track. What you are missing, though, is an account of how to deal properly with anthropic reasoning, probability, and decisions. For that, see my paper on the 'Quantum Immortality' fallacy. I also explain it concisely on my blog, in Meaning of Probability in an MWI.

Basically, personal identity is not fundamental. For practical purposes, there are various kinds of effective probabilities. There is no actual randomness involved.

It is a mistake to work with 'probabilities' directly. Because the sum is always normalized to 1, 'probabilities' deal (in part) with global information, but people easily forget that and think of them as local. The proper quantity to use is measure, which is the amount of consciousness that each type of observer has, such that effective probability is proportional to measure (by summing over the branches and normalizing). It is important to remember that total measure need not be conserved as a function of time.

As for the bottom line: If there are 100 copies, they all have equal measure, and for all practical purposes have equal effective probability.

Comment author: wnoise 26 March 2010 07:05:43PM *  1 point [-]

That's just TMs, but there's no reason other types of math structures such as continuous functions shouldn't exist, and we don't even have the equivalent of a TM to put a measure distribution on them.

For continuous functions, we do. See "abstract stone duality".

Comment author: Mallah 30 March 2010 12:31:26AM *  1 point [-]

Interesting. Do you know of a place on the net where I can see what other (independent, mathematically knowledgeable) people have to say about its implications? It's asking a lot, maybe, but I think that would be the most efficient way for me to gain info about it, if there is one.

Comment author: Nisan 26 March 2010 06:23:31PM 7 points [-]

If it saved a copy of the universe at the beginning of your life and repeatedly ran the simulation from there until your death (if any), would it mean anything to say that you are experiencing your life multiple times?

Of course.

I'm not so sure, Mallah. Your first argument seems to say that if someone simulated universe A a thousand times and then simulated universe B once, and you knew only that you were in one of those simulations, then you'd expect to be in universe A. I think your expectation depends entirely on your prior, and I don't see why your prior should assign equal probabilities to all instances of simulation rather than assigning equal probabilities to all computationally distinct simulations.

(I'm assuming the simulation of universe A includes every Everett branch, or else it includes only a single Everett branch and it's the same one in every instance.)

What if you run a simulation of universe A on a computer whose memory is mirrored a thousand times on back-up hard disks? What if it only has one hard disk, but it writes each bit a thousand times, just to be safe? Does this count as a thousand copies of you?

As for wavefunction amplitudes, I don't see why that should have anything to do with the number of instantiations of a simulation.

Comment author: Mallah 30 March 2010 12:19:38AM *  1 point [-]

Your first argument seems to say that if someone simulated universe A a thousand times and then simulated universe B once, and you knew only that you were in one of those simulations, then you'd expect to be in universe A.

That's right, Nisan (all else being equal, such as A and B having the same # of observers).

I don't see why your prior should assign equal probabilities to all instances of simulation rather than assigning equal probabilities to all computationally distinct simulations.

In the latter case, at least in a large enough universe (or quantum MWI, or the Everything), the prior probability of being a Boltzmann brain (not product of Darwinian evolution) would be nearly 1, since most distinct brain types are. We are not BBs (perhaps not prior info, but certainly info we have) so we must reject that method.

What if you run a simulation of universe A on a computer whose memory is mirrored a thousand times on back-up hard disks? ... Does this count as a thousand copies of you?

No. That is not a case of independent implementations, so it just has the measure of a single A.

As for wavefunction amplitudes, I don't see why that should have anything to do with the number of instantiations of a simulation.

A similar argument applies: more amplitude means more measure, or we would probably be BBs. Also, in the Turing machine version of the Tegmarkian everything, that could only be explained by more copies.

For an argument that even in the regular MWI, more amplitude means more implementations (copies), as well as discussion of what exactly counts as an implementation of a computation, see my paper

MCI of QM
