All of Mallah's Comments + Replies

Mitchell, you are on to an important point: Observers must be well-defined.

Worlds are not well-defined, and there is no definite number of worlds (given standard physics).

You may be interested in my proposed Many Computations Interpretation, in which observers are identified not with so-called 'worlds' but with implementations of computations: http://arxiv.org/abs/0709.0544

See my blog for further discussion: http://onqm.blogspot.com/

I wasn't sneaky about it.

I don't think I got visibly hurt or angry. In fact, when I did it, I was feeling more tempted than angry. I was in the middle of a conversation with another guy, and her rear appeared nearby, and I couldn't resist.

It made me seem like a jerk, which is bad, but not necessarily low status. Acting without apparent fear of the consequences, even stupidly, is often respected as long as you get away with it.

Another factor is that this was a 'high status' woman. I'm not sure, but she might be related to a celebrity. (I didn't know that at the time.) Hence, any story linking me and her may be 'bad publicity' for me, but there is the old saying that 'there's no such thing as bad publicity'.

5pjeby
But you didn't get away with it. Also, technically, you acted like a creep, not a jerk. (A jerk acts boldly, a creep is sneaky and opportunistic.)
4Vladimir_M
That's true only if you manage to maintain the absolute no-apologies attitude. If you had to apologize about it, it's automatically a major fail. (Not trying to put you down, just giving you a realistic perspective.)

It was a single swat to the buttocks, done in full sight of everyone. There was other ass-spanking going on, between people who knew each other - done as a joke - so in context it was not so unusual. I would not have done it outside of that context, nor would I have done it if my inhibitions had not been lowered by alcohol; nor would I do it again even if they were.

Yes, she deserved it!

It was a mistake. Why? It exposed me to more risk than was worthwhile, and while I might have hoped that (aside from simple punishment) it would teach her the lesson tha... (read more)

4HughRistik
I think this situation falls pretty squarely into "two wrongs don't make a right" territory. The moral intuition is that a minor social infraction doesn't justify a violent response, even extremely minor violence. Even though you don't say so, perhaps that was a tacit reason for you to acknowledge it as a mistake. I do sympathize with your frustration at encountering such naked privilege and entitlement on her part, and that you would want some sort of recourse. It's possible that such brattiness would cause her trouble in her future relationships with men, but that isn't even necessarily true. You can't really get recourse for behavior like this; you just have to shut it down when it appears. I think you've learned that lesson.

Other people (that I have talked to) seem to be divided on whether it was a good thing to do or not.

[Note: this is going to sound at first like PUA advice, but is actually about general differences between the socially-typical and atypical in the sending and receiving of "status play" signals, using the current situation as an example.]

I don't know about "good", but for it to be "useful" you would've needed to do it first. (E.g. Her: "Buy me a drink" You: "Sure, now bend over." Her: "What?" ... (read more)

Other people (that I have talked to) seem to be divided on whether it was a good thing to do or not.

It sure was one hell of a low status signal. The worst possible way you can fail a shit test is to get visibly hurt and angry.

As for whether she deserved it, well, if you want to work in the kitchen, better be prepared to stand the heat. Expecting women you hit on to follow the same norms of behavior as your regular buddies and colleagues, and then getting angry when they don't, is like getting into a boxing match and then complaining you've been assaulted.

I still don’t understand how she “deserved” to have you escalate the encounter with a “hard” physical spanking; nor do I understand how, if you spanked her in a joking context, you would consider it punishment or “some measure of revenge.” From what you’ve said, it doesn’t seem like you were on sufficiently friendly terms with her that the spanking was in fact treated as teasing/joking action; you previously stated that she was not amused by the spanking, her brother threatened you, and you apologized.

I’m certainly not trying to say that her behavior wasn’t worthy of serious disapproval and verbal disparagement. But responding to her poor behavior with physical actions rather than words seems at least equally inappropriate.

1NancyLebovitz
Thanks for the explanation.

Women seem to have a strong urge to check out what shoes a man has on, and judge their quality. Even they can't explain it. Perhaps at some unconscious level, they are guarding against men who 'cheat' by wearing high heels.

I can confirm that this does happen at least sometimes (USA). I was at a bar, and I approached a woman who is probably considered attractive by many (skinny, bottle blonde) and started talking to her. She soon asked me to buy her a drink. Being not well versed in such matters, I agreed, and asked her what she wanted. She named an expensive wine, which I agreed to get her a glass of. She largely ignored me thereafter, and didn't even bother taking the drink!

(I did obtain some measure of revenge later that night by spanking her rear end hard, though I d... (read more)

In European bars or nightclubs, if (relatively) attractive girls ask strangers for drinks or dishes, then it typically means they are doing it professionally.

There is even a special phrase "consume girl" meaning that the girl's job is to lure clueless customers into buying expensive drinks for them for a cut of the profit. The surest sign of being a "consume girl" is that they typically don't consume what they ask for.

It's all about money, and has nothing to do with social games whatsoever. They are not spoiled brats, but trained for this job.

I am not sure how common this "profession" is in the US, but in Europe it's relatively common.

I don’t like to go meta, but this comment and its upvotes (4 at the time I write) are among the more disturbing things I’ve seen on this site. I have to assume that they reflect voters’ appreciation for a real-life story of a woman asking a man to buy a drink, rather than approval of the use of violence to express displeasure over someone else’s behavior and perceived morality in a social situation.

I’m also surprised that you’re telling this story without expressing any apparent remorse about your behavior, but I guess the upvotes show that you read the LW crowd better than I do.

5NancyLebovitz
You assaulted her because she asked for an expensive drink, you gave her the drink, and then she ignored you? You say you don't recommend what you did, but I'm curious about why, considering that you seem to think she deserved it.

But Stuart_Armstrong's description is asking us to condition on the camera showing 'you' surviving.

That condition imposes post-selection.

I guess it doesn't matter much if we agree on what the probabilities are for the pre-selection v. the post-selection case.

Wrong - it matters a lot because you are using the wrong probabilities for the survivor (in practice this affects things like belief in the Doomsday argument).

I believe the strong law of large numbers implies that the relative frequency converges almost surely to p as the number of Bernoulli t

... (read more)
0cupholder
But not post-selection of the kind that influences the probability (at least, according to my own calculations). Which of my estimates is incorrect - the 50% estimate for what I call 'pre-selecting someone who happens to survive,' the 99% estimate for what I call 'post-selecting someone from the pool of survivors,' or both? Correct. p, strictly, isn't defined by the relative frequency - the strong law of large numbers simply justifies interpreting it as a relative frequency. That's a philosophical solution, though. It doesn't help for practical cases like the one you mention next... ...for practical scenarios like this we can instead use the central limit theorem to say that p's likely to be close to the relative frequency. I'd expect it to give the same results as Bayesian updating - it's just that the rationale differs. It certainly is in the sense that if 'you' die after 1 shot, 'you' might not live to take another!
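To make the law-of-large-numbers point concrete, here is a minimal simulation sketch; the success probability p = 0.5, the trial counts, and the seed are arbitrary choices for illustration, not anything from the thread:

```python
import random

def relative_frequency(p, n, seed=0):
    """Relative frequency of successes in n Bernoulli(p) trials."""
    rng = random.Random(seed)
    successes = sum(rng.random() < p for _ in range(n))
    return successes / n

# The relative frequency drifts toward p as the number of trials grows,
# which is the sense in which the long-run frequency 'interprets' p.
for n in (10, 1_000, 100_000):
    print(n, relative_frequency(0.5, n))
```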

It is only possible to fairly "test" beliefs when a related objective probability is agreed upon

That's wrong; behavioral tests (properly set up) can reveal what people really believe, bypassing talk of probabilities.

Would you really guess "red", or do we agree?

Under the strict conditions above and the other conditions I have outlined (long-time-after, no other observers in the multiverse besides the prisoners), then sure, I'd be a fool not to guess red.

But I wouldn't recommend it to others, because if there are more people, that ... (read more)

0Academian
So in my scenario, groups of people like you end up with 99 survivors being tortured or 1 not, with equal odds (despite that their actions are independent and non-competitive), and groups of people like me end up with 99 survivors not tortured or 1 survivor tortured, with equal odds. Let's say I'm not asserting that means I'm "right". But consider that your behavior may be more due to a ritual of cognition rather than systematized winning. You might respond that "rationalists win" is itself a ritual of cognition to be abandoned. More specifically, maybe you disagree that "whatever rationality is, it should fare well-in-total, on average, in non-competitive thought experiments". I'm not sure what to do about that response. In your scenario, I'd vote red, because when the (independent!) players do that, her expected payoff is higher. More precisely, if I model the others randomly, me voting red increases the probability that SB lands in world with a majority "red" vote, increasing her expectation. This may seem strange because I am playing by an Updateless strategy. Yes, in my scenario I act 99% sure that I'm in a blue room, and in yours I guess red, even though they have same assumptions regarding my location. Weird eh? What's happening here is that I'm planning ahead to do what wins, and planning isn't always intuitively consistent with updating. Check out The Absent Minded Driver for another example where planning typically outperforms naive updating. Here's another scenario, which involves interactive planning. To be honest with you, I'm not sure how the "surprise" emotion is supposed to work in scenarios like this. It might even be useless. That's why I base my actions on instrumental reasoning rather than rituals of cognition like "don't act surprised". By the way, you are certainly not the first to feel the weirdness of time inconsistency in optimal decisions. That's why there are so many posts working on decision theory here.

The way you set up the decision is not a fair test of belief, because the stakes are more like $1.50 to $99.

To fix that, we need to make 2 changes:

1) Let us give any reward/punishment to a third party we care about, e.g. SB.

2) The total reward/punishment she gets won't depend on the number of people who make the decision. Instead, we will poll all of the survivors from all trials and pool the results (or we can pick 1 survivor at random, but let's do it the first way).

The majority decides what guess to use, on the principle of one man, one vote. That is ... (read more)

0Academian
Is that a "yes" or a "no" for the scenario as I posed it? I agree. It is only possible to fairly "test" beliefs when a related objective probability is agreed upon, which for us is clearly a problem. So my question remains unanswered, to see if we disagree behaviorally: That's not my intention. To clarify, assume that: * the other prisoners' decisions are totally independent of yours (perhaps they are irrational), so that you can in no sense cause 99 real other people to guess blue and achieve a $99 payoff with only one beating, and * the payoffs/beatings are really to the prisoners, not someone else, Then, as I said, in that scenario I would guess that I'm in a blue room. Would you really guess "red", or do we agree? (My "reasons" for blue would be to note that I started out overwhelmingly (99%) likely to be in a blue room, and that my surviving the subsequent coin toss is evidence that it did not land tails and kill blue-roomed prisoners, or equivalently, that counterfactual-typically, people guessing red would result in a great deal of torture. But please forget why; I just want to know what you would do.)

If that were the case, the camera might show the person being killed; indeed, that is 50% likely.

Pre-selection is not the same as our case of post-selection. My calculation shows the difference it makes.

Now, if the fraction of observers of each type that are killed is the same, the difference between the two selections cancels out. That is what tends to happen in the many-shot case, and we can then replace probabilities with relative frequencies. One-shot probability is not relative frequency.
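To make the pre-selection/post-selection distinction concrete, here is a rough Monte Carlo sketch of the 100-door setup discussed in this thread (1 red door, 99 blue; heads kills the red-doored person, tails kills the blue-doored ones). The trial count and seed are arbitrary; only the two conditioning procedures matter:

```python
import random

rng = random.Random(0)
N_ROOMS, TRIALS = 100, 200_000        # room 0 is red, rooms 1..99 are blue

pre_blue = pre_total = 0              # condition on a pre-selected person surviving
post_blue = post_total = 0            # pick a random survivor after the killing

for _ in range(TRIALS):
    you = rng.randrange(N_ROOMS)      # 'you' are assigned a room before anything happens
    heads = rng.random() < 0.5
    survivors = ([r for r in range(N_ROOMS) if r != 0] if heads  # heads: red is killed
                 else [0])                                       # tails: blues are killed
    if you in survivors:              # pre-selection: 'you' happened to survive
        pre_total += 1
        pre_blue += (you != 0)
    picked = rng.choice(survivors)    # post-selection: a survivor chosen at random
    post_total += 1
    post_blue += (picked != 0)

print("P(blue | pre-selected 'you' survives) ~", pre_blue / pre_total)   # ~0.99
print("P(blue | randomly chosen survivor)    ~", post_blue / post_total) # ~0.50
```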

0cupholder
Yep. But Stuart_Armstrong's description is asking us to condition on the camera showing 'you' surviving. It looks to me like we agree that pre-selecting someone who happens to survive gives a different result (99%) to post-selecting someone from the pool of survivors (50%) - we just disagree on which case SA had in mind. Really, I guess it doesn't matter much if we agree on what the probabilities are for the pre-selection v. the post-selection case. I am unsure how to interpret this... ...but I'm fairly sure I disagree with this. If we do Bernoulli trials with success probability p (like coin flips, which are equivalent to Bernoulli trials with p = 0.5), I believe the strong law of large numbers implies that the relative frequency converges almost surely to p as the number of Bernoulli trials becomes arbitrarily large. As p represents the 'one-shot probability,' this justifies interpreting the relative frequency in the infinite limit as the 'one-shot probability.'

No, it shouldn't - that's the point. Why would you think it should?

Note that I am already taking observer-counting into account - among observers that actually exist in each coin-outcome-scenario. Hence the fact that P(heads) approaches 1/3 in the many-shot case.

Adding that condition is post-selection.

Note that "If you (being asked before the killing) will survive, what color is your door likely to be?" is very different from "Given that you did already survive, ...?". A member of the population to which the first of these applies might not survive. This changes the result. It's the difference between pre-selection and post-selection.

0cupholder
I'll try to clarify what I'm thinking of as the relevant kind of selection in this exercise. It is true that the condition effectively picks out - that is, selects - the probability branches in which 'you' don't die, but I don't see that kind of selection as relevant here, because (by my calculations, if not your own) it has no impact on the probability of being behind a blue door. What sets your probability of being behind a blue door is the problem specifying that 'you' are the experimental subject concerned: that gives me the mental image of a film camera, representing my mind's eye, following 'you' from start to finish - 'you' are the specific person who has been selected. I don't visualize a camera following a survivor randomly selected post-killing. That is what leads me to think of the relevant selection as happening pre-killing (hence 'pre-selection').

This subtly differs from Bostrom's description, which says 'When she awakes on Monday', rather than 'Monday or Tuesday.'

He makes clear though that she doesn't know which day it is, so his description is equivalent. He should have written it more clearly, since it can be misleading on the first pass through his paper, but if you read it carefully you should be OK.

So on average ...

'On average' gives you the many-shot case, by definition.

In the 1-shot case, there is a 50% chance she wakes up once (heads), and a 50% chance she wakes up twice (tails). ... (read more)

0cupholder
I think I essentially agree with this comment, which feels strange because I suspect we would continue to disagree on a number of the points we discussed upthread!

The 'selection' I have in mind is the selection, at the beginning of the scenario, of the person designated by 'you' and 'your' in the scenario's description.

If 'you' were selected at the beginning, then you might not have survived.

0cupholder
Yeah, but the description of the situation asserts that 'you' happened to survive.

There are always 2 coin flips, and the results are not known to SB. I can't guess what you mean, but I think you need to reread Bostrom's paper.

0JGWeissman
It seems I was solving an equivalent problem. In the formulation you are using, the weighted average should reflect the number of wakeups. What this result means is that SB should expect with probabilty 1/3, that if she were shown the results of the coin toss, she would observe that the result was heads.

Under a frequentist interpretation

In the 1-shot case, the whole concept of a frequentist interpretation makes no sense. Frequentist thinking invokes the many-shot case.

Reading Bostrom's explanation of the SB problem, and interpreting 'what should her credence be that the coin will fall heads?' as a question asking the relative frequency of the coin coming up heads, it seems to me that the answer is 1/2 however many times Sleeping Beauty's later woken up: the fair coin will always be tossed after she awakes on Monday, and a fair coin's probability of

... (read more)
1JGWeissman
This should be a weighted average, reflecting how many coin flips are observed in the four cases: P(heads) = (2*1 + 3*1/3 + 3*1/3 + 4*0)/(2+3+3+4) = (2+1+1+0)/12 = 4/12 = 1/3
0cupholder
Maybe I misunderstand what the frequentist interpretation involves, but I don't think the 2nd sentence implies the 1st. If I remember rightly, a frequentist interpretation of probability as long-run frequency in the case of Bernoulli trials (e.g. coin flips) can be justified with the strong law of large numbers. So one can do that mathematically without actually flipping a coin arbitrarily many times, from a definition of a single Bernoulli trial. My initial interpretation of the question seems to differ from the intended one, if that's what you mean. This subtly differs from Bostrom's description, which says 'When she awakes on Monday', rather than 'Monday or Tuesday.' I think your description probably better expresses what Bostrom is getting at, based on a quick skim of the rest of Bostrom's paper, and also because your more complex description makes both of the answers Bostrom mentions (1/2 and 1/3) defensible: depending on how I interpret you, I can extract either answer from the one-shot case, because the interpretation affects how I set up the relative frequency. If I count how many times on average the coin comes up heads per time it is flipped, I must get the answer 1/2, because the coin is fair. If I count how many times on average the coin comes up heads per time SB awakes, the answer is 1/3. Each time I redo the 'experiment,' SB has a 50% chance of waking up twice with the coin tails, and a 50% chance of waking up once with the coin heads. So on average she wakes up 0.5×2 + 0.5×1 = 1.5 times, and 0.5×1 = 0.5 of those 1.5 times correspond to heads: hence 0.5/1.5 = 1/3. I'm guessing that the Bayesian analog of these two possible thought processes would be something like * SB asking herself, 'if I were the coin, what would I think my chance of coming up heads was whenever I'm awake?' * SB asking herself, 'from my point of view, what is the coin about to be/was the coin yesterday whenever I wake up?' but I may be wrong. At any rate, I haven't thought
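A minimal simulation of the two counting rules described above, assuming the standard setup (one awakening on heads, two on tails); the trial count and seed are arbitrary:

```python
import random

rng = random.Random(0)
flips = heads_flips = 0
awakenings = heads_awakenings = 0

for _ in range(100_000):
    heads = rng.random() < 0.5
    flips += 1
    heads_flips += heads
    n_wake = 1 if heads else 2          # heads: wake Monday only; tails: Monday and Tuesday
    awakenings += n_wake
    heads_awakenings += n_wake * heads  # awakenings that happen under a heads flip

print("heads per flip:     ", heads_flips / flips)            # ~1/2
print("heads per awakening:", heads_awakenings / awakenings)  # ~1/3
```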

A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?

Presumably you heard the announcement.

This is post-selection, because pre-selection would have been "Either you are dead, or you hear that whoever was to be killed has been killed. What are your odds of being blue-doored now?"

The 1-shot case (which I think you are using to refer to situation B in Stuart_Armstrong's top-level post...?) describes a situation defined to have multiple possible outcomes, but there's only

... (read more)
0cupholder
The 'selection' I have in mind is the selection, at the beginning of the scenario, of the person designated by 'you' and 'your' in the scenario's description. The announcement, as I understand it, doesn't alter the selection in the sense that I think of it, nor does it generate a new selection: it just indicates that 'you' happened to survive. I continue to have difficulty accepting that the millionth bit of pi is just as good a random bit source as a coin flip. I am picturing a mathematically inexperienced programmer writing a (pseudo)random bit-generating routine that calculated the millionth digit of pi and returned it. Could they justify their code by pointing out that they don't know what the millionth digit of pi is, and so they can treat it as a random bit?

I think talking about 'observers' might be muddling the issue here.

That's probably why you don't understand the result; it is an anthropic selection effect. See my reply to Academician above.

We could talk instead about creatures that don't understand the experiment, and the result would be the same. Say we have two Petri dishes, one dish containing a single bacterium, and the other containing a trillion. We randomly select one of the bacteria (representing me in the original door experiment) to stain with a dye. We flip a coin: if it's heads, we kill

... (read more)
0cupholder
Okay. I believe that situations A and B which you quote from Stuart_Armstrong's post involve pre-selection, not post-selection, so maybe that is why we disagree. I believe that because the descriptions of the two situations refer to 'you' - that is, me - which makes me construct a mental model of me being put into one of the 100 rooms at random. In that model my pre-selected consciousness is at issue, not that of a post-selected survivor. By 'math problem' do you mean the question of whether pi's millionth bit is 0? If so, I disagree. The 1-shot case (which I think you are using to refer to situation B in Stuart_Armstrong's top-level post...?) describes a situation defined to have multiple possible outcomes, but there's only one outcome to the question 'what is pi's millionth bit?'

Given that others seem to be using it to get the right answer, consider that you may rightfully believe SIA is wrong because you have a different interpretation of it, which happens to be wrong.

Huh? I haven't been using the SIA, I have been attacking it by deriving the right answer from general considerations (that is, P(tails) = 1/2 for the 1-shot case in the long-time-after limit) and noting that the SIA is inconsistent with it. The result of the SIA is well known - in this case, 0.01; I don't think anyone disputes that.

P(R|KS) = P(R|K)·P(S|RK)/P(S

... (read more)
2Academian
Let me instead ask a simple question: would you actually bet like you're in a red room? Suppose you were told the killing had happened (as in the right column of Cupholder's diagram), and required to guess the color of your room, with the following payoffs: * Guess red correctly -> you earn $1.50 * Guess blue correctly -> you earn $1.00 * Guess incorrectly -> you are terribly beaten. Would you guess red? Knowing that under independent repeated or parallel instances of this scenario (although merely hypothetical if you are concerned with the "number of shots"), * "guess red" mentality typically leads to large numbers of people (99%) being terribly beaten * "guess blue" mentality leads to large numbers of people (99%) earning $1 and not being beaten * this is not an interactive scenario like the Prisoner's dilemma, which is interactive in a way that renders a sharp distinction between group rationality and individual rationality, would you still guess "red"? Not me. I would take my survival as evidence that blue rooms were not killed, and guess blue. If you would guess "blue" for "other reasons", then we would exhibit the same behavior, and I have nothing more to discuss. At least in this case, our semantically different ways of managing possibilities are resulting in the same decision, which is what I consider important. You may disagree about this importance, but I apologize that I'm not up for another comment thread of this length. If you would really guess "red", then I have little more to say than to reconsider your actions, and to again excuse me from this lengthy discussion.
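For reference, a small sketch of the aggregate bookkeeping being appealed to here, under the stated assumptions (1 red room, 99 blue; heads kills red, tails kills blue; every survivor uses the same fixed guess). The branch structure and dollar amounts come from the comment above; the expected counts are per run of the experiment:

```python
# Two equally likely branches: heads -> 99 blue survivors, tails -> 1 red survivor.
branches = [("heads", 99, "blue"), ("tails", 1, "red")]

for strategy in ("red", "blue"):
    paid = beaten = 0.0
    for _, n_survivors, room in branches:
        if strategy == room:                       # everyone in this branch guesses right
            paid += 0.5 * n_survivors * (1.50 if room == "red" else 1.00)
        else:                                      # everyone in this branch is beaten
            beaten += 0.5 * n_survivors
    print(f"all survivors guess {strategy}: "
          f"expected payout ${paid:.2f}, expected beatings {beaten:.2f}")
```

As expected, the fixed "guess red" policy trades a small payout in half the runs for 99 beatings in the other half; that asymmetry is what the two sides of this exchange are arguing over.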
0cupholder
Under a frequentist interpretation it is not possible for the equation to work pre-killing and yet not work post-killing: if one's estimate of P(R|KS) = 0.01 is correct, that implies one has correctly estimated the relative frequency of having been red-doored given that one survives the killing. That estimate of the relative frequency cannot then change after the killing, because that is precisely the situation for which the relative frequency was declared correct! I don't agree, because in my judgment the greater number of people initially behind blue doors skews the probability in favor of 'you' being behind a blue door. Reading Bostrom's explanation of the SB problem, and interpreting 'what should her credence be that the coin will fall heads?' as a question asking the relative frequency of the coin coming up heads, it seems to me that the answer is 1/2 however many times Sleeping Beauty's later woken up: the fair coin will always be tossed after she awakes on Monday, and a fair coin's probability of coming up heads is 1/2.

Actually, if we consider that you could have been an observer-moment either before or after the killing, finding yourself to be after it does increase your subjective probability that fewer observers were killed. However, this effect goes away if the amount of time before the killing was very short compared to the time afterwards, since you'd probably find yourself afterwards in either case; and the case we're really interested in, the SIA, is the limit when the time before goes to 0.

I just wanted to follow up on this remark I made. There is a subtle an... (read more)

I omitted the "|before" for brevity, as is customary in Bayes' theorem.

That is not correct. The prior that is customary in using Bayes' theorem is the one which applies in the absence of additional information, not before an event that changes the numbers of observers.

For example, suppose we know that x=1,2,or 3. Our prior assigns 1/3 probability to each, so P(1) = 1/3. Then we find out "x is odd", so we update, getting P(1|odd) = 1/2. That is the standard use of Bayes' theorem, in which only our information changes.

OTOH, suppose... (read more)

0Academian
Given that others seem to be using it to get the right answer, consider that you may rightfully believe SIA is wrong because you have a different interpretation of it, which happens to be wrong. I am using an interpretation that works -- that is, maximizes the total utility of equivalent possible observers -- given objectively-equally-likely hypothetical worlds (otherwise it is indeed problematic). That's correct, and not an issue. In case it appears an issue, the beliefs in the update yielding P(R)=0.01 can be restated non-indexically (with no reference to "you" or "now" or "before"): R = "person X is/was/will be in a red room" K = "at some time, everyone in a red/blue room is killed according as a coin lands heads/tails" S = "person X survives/survived/will survive said killing" Anthropic reasoning just says "reason as if you are X", and you get the right answer: 1) P(R|KS) = P(R|K)·P(S|RK)/P(S|K) = 0.01·(0.5)/(0.5) = 0.01 If you still think this is wrong, and you want to be prudent about the truth, try finding which term in the equation (1) is incorrect and which possible-observer count makes it so. In your analysis, be sure you only use SIA once to declare equal likelihood of possible-observers (it's easiest at the beginning), and be explicit when you use it. Then use evidence to constrain which of those equally-likely folk you might actually be, and you'll find that 1% of them are in red rooms, so SIA gives the right answer in this problem. Cupholder's diagram, ignoring its frequentist interpretation if you like, is a good aid to count these equally-likely folk. SIA doesn't ask you to count observers in the "actual world". It applies to objectively-equally-likely hypothetical worlds: http://en.wikipedia.org/wiki/Self-Indication_Assumption "SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist." Quantitatively, to work properly it say

Cupholder:

That is an excellent illustration ... of the many-worlds (or many-trials) case. Frequentist counting works fine for repeated situations.

The one-shot case requires Bayesian thinking, not frequentist. The answer I gave is the correct one, because observers do not gain any information about whether the coin was heads or tails. The number of observers that see each result is not the same, but the only observers that actually see any result afterwards are the ones in either heads-world or tails-world; you can't count them all as if they all exist.

I... (read more)

1JGWeissman
Cupholder managed to find an analogous problem in which the Bayesian subjective probabilities mapped to the same values as frequentist probabilities, so that the frequentist approach really gives the same answer. Yes, it would be nice to just accept subjective probabilities so you don't have to do that, but the answer Cupholder gave is correct. The analysis you label "Bayesian", on the other hand, is incorrect. After you notice that you have survived the killing you should update your probability that coin showed tails to p(tails|survival) = p(tails) * p(survival|tails) / p(survival) = .5 * .01 / .5 = .01 so you can then calculate "P(red|after)" = p(heads|survival) * "p(red|heads)" + p(tails|survival) * "p(red|tails)" = .99 * 0 + .01 * 1 = .01 Or, as Academian suggested, you could have just updated to directly find p(red|survival) = p(red) * p(survival|red) / p(survival)
0cupholder
I disagree, but I am inclined to disagree by default: one of the themes that motivates me to post here is the idea that frequentist calculations are typically able to give precisely the same answer as Bayesian calculations. I also see no trouble with wearing my frequentist hat when thinking about single coin flips: I can still reason that if I flipped a fair coin arbitrarily many times, the relative frequency of a head converges almost surely to one half, and that relative frequency represents my chance of getting a head on a single flip. I believe that the observers who survive would. To clarify my thinking on this, I considered doing this experiment with a trillion doors, where one of the doors is again red, and all of the others blue. Let's say I survive this huge version of the experiment. As a survivor, I know I was almost certainly behind a blue door to start with. Hence a tail would have implied my death with near certainty. Yet I'm not dead, so it is extremely unlikely that I got tails. That means I almost certainly got heads. I have gained information about the coin flip. I think talking about 'observers' might be muddling the issue here. We could talk instead about creatures that don't understand the experiment, and the result would be the same. Say we have two Petri dishes, one dish containing a single bacterium, and the other containing a trillion. We randomly select one of the bacteria (representing me in the original door experiment) to stain with a dye. We flip a coin: if it's heads, we kill the lone bacterium, otherwise we put the trillion-bacteria dish into an autoclave and kill all of those bacteria. Given that the stained bacterium survives the process, it is far more likely that it was in the trillion-bacteria dish, so it is far more likely that the coin came up heads. I don't think of the pi digit process as equivalent. Say I interpret 'pi's millionth bit is 0' as heads, and 'pi's millionth bit is 1' as tails. If I repeat the door experimen
0wnoise
FWIW, it's not that hard to calculate binary digits of pi: http://oldweb.cecm.sfu.ca/projects/pihex/index.html I think I'll go calculate the millionth, and get back to you. EDIT: also turns out to be 0.
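For the curious: digit extraction of this kind is usually done with the Bailey-Borwein-Plouffe formula, which yields hex (hence binary) digits of pi at a chosen position without computing the earlier ones. Below is a minimal double-precision sketch of that idea; it is fine for small positions but is an illustration only, not a substitute for the careful implementation behind the pihex project linked above:

```python
def _series(j, d):
    """Fractional part of sum over k of 16^(d-k) / (8k + j), as used in the BBP formula."""
    s = 0.0
    for k in range(d + 1):                    # head of the series: reduce mod 1 as we go
        s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
    for k in range(d + 1, d + 20):            # tail: terms shrink like 16^(d-k)
        s += 16.0 ** (d - k) / (8 * k + j)
    return s % 1.0

def pi_hex_digit(d):
    """Hex digit of pi at position d after the point (d = 0 gives the '2' of 3.243F6A...)."""
    x = (4 * _series(1, d) - 2 * _series(4, d) - _series(5, d) - _series(6, d)) % 1.0
    return int(x * 16)

print([pi_hex_digit(d) for d in range(6)])    # pi = 3.243F6A... -> [2, 4, 3, 15, 6, 10]
```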
0Academian
I omitted the "|before" for brevity, as is customary in Bayes' theorem. Cupholder's excellent diagram should help make the situation clear. Here is a written explanation to accompany: R = "you are in a red room" K = "at some time, everyone in a red/blue room is killed according as a coin lands heads/tails" H = "the killing has happened" A = "you are alive" P(R) means your subjective probability that you are in a red room, before knowing K or H. Once you know all three, by Bayes' theorem: P(R|KHA) = P(R)·P(KHA|R)/P(KHA) = 0.01·(0.5)/(0.5) = 0.01 I'd denote that by P(R|KA) -- with no information about H -- and you can check that it indeed equals 0.01. Again, Cupholder's diagram is an easy way to see this intuitively. If you want a verbal/mathematical explanation, first note from the diagram that the probability of being alive in a red room before killings happen is also 0.01: P(R|K~HA) = #(possible living observers in red rooms before killings)/#(possible living observers before killings) = 0.01 So we have P(R|KHA)=P(R|K~HA)=0.01, and therefore by the usual independence trick, P(R|KA) = P(RH|KA) + P(R~H|KA) = P(H|KA)·P(R|KHA) + P(~H|KA)·P(R|K~HA) = [P(H|KA)+P(~H|KA)]·0.01 = 0.01 So even when you know about a killing, but not whether it has happened, you still believe you are in a red room with probability 0.01.
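As a cross-check on the 0.01 derived here, an exact enumeration over the 200 equally likely (room, coin) possibilities (1 red room, 99 blue, fair coin, heads kills red and tails kills blue) gives the same answer; this is only an illustrative sketch of the counting, not anyone's preferred formalism:

```python
from fractions import Fraction

rooms = ["red"] + ["blue"] * 99                      # equally likely a priori
p_alive = p_red_and_alive = Fraction(0)

for room in rooms:
    for coin in ("heads", "tails"):
        p = Fraction(1, 100) * Fraction(1, 2)        # P(this room) * P(this coin)
        killed = (coin == "heads" and room == "red") or (coin == "tails" and room == "blue")
        if not killed:
            p_alive += p
            if room == "red":
                p_red_and_alive += p

print(p_red_and_alive / p_alive)                     # 1/100
```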
0cupholder
Saw this come up in Recent Comments, taking the opportunity to simultaneously test the image markup and confirm Academian's Bayesian answer using boring old frequentist probability. Hope this isn't too wide... (Edit: yup, too wide. Here's a smaller-albeit-busier-looking version.)

the justification for reasoning anthropically is that the set Ω of observers in your reference class maximizes its combined winnings on bets if all members of Ω reason anthropically

That is a justification for it, yes.

When most of the members of Ω arise from merely non-actual possible worlds, this reasoning is defensible.

Roko, on what do you base that statement? Non-actual observers do not participate in bets.

The SIA is not an example of anthropic reasoning; anthropic implies observers, not "non-actual observers".

See this post for an exa... (read more)

Sounds cool. I'm from NYC, but no longer live there. I was a member of atheist clubs in college, but I'd bet that post-college (or any, really) rationalists have a hard time meeting others of similar views.

I am very skeptical about SIA

Rightly so, since the SIA is false.

The Doomsday argument is correct as far as it goes, though in my view the most likely filter is environmental degradation plus AI running into problems.

Another reason I wouldn't put any stock in the idea that animals aren't conscious is that the complexity cost of a model in which we are conscious and they (other animals with complex brains) are not is many bits of information. 20 bits gives a prior probability factor of 10^-6 (2^-20). I'd say that would outweigh the larger # of animals, even if you were to include the animals in the reference class.

-1bogus
The complexity cost of a model in which any brain is conscious is enormous. Keep in mind that a model with consciousness has to 'output' qualia, concepts, thoughts... which (as far as we can tell) correspond to complex brain patterns which are physically unique to each single brain. That is, unless the physical implementation of subjective experience is much simpler than we think it is.

That kind of anthropic reasoning is only useful in the context of comparing hypotheses, Bayesian style. Conditional probabilities matter only if they are different given different models.

For most possible models of physics, e.g. X and Y, P(Finn|X) = P(Finn|Y). Thus, that particular piece of info is not very useful for distinguishing models for physics.

OTOH, P(21st century|X) may be >> P(21st century|Y). So anthropic reasoning is useful in that case.

As for the reference class, "people asking these kinds of questions" is probably the best choice. Thus I wouldn't put any stock in the idea that animals aren't conscious.

A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?

Here, the probability is certainly 99%.

Sure.

B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later

... (read more)
0Academian
No; you need to apply Bayes' theorem here. Intuitively, before the killing you are 99% sure you're behind a blue door, and if you survive you should take it as evidence that "yay!" the coin in fact did not land tails (killing blue). Mathematically, you just have to remember to use your old posteriors as your new priors: P(red|survival) = P(red)·P(survival|red)/P(survival) = 0.01·(0.5)/(0.5) = 0.01 So SIA + Bayesian updating happens to agree with the "quantum measure" heuristic in this case. However, I am with Nick Bostrom in rejecting SIA in favor of his "Observation Equation" derived from "SSSA", precisely because that is what maximizes the total wealth of your reference class (at least when you are not choosing whether to exist or create duplicates).

rwallace, nice reductio ad absurdum of what I will call the Subjective Probability Anticipation Fallacy (SPAF). It is somewhat important because the SPAF seems much like, and may be the cause of, the Quantum Immortality Fallacy (QIF).

You are on the right track. What you are missing though is an account of how to deal properly with anthropic reasoning, probability, and decisions. For that see my paper on the 'Quantum Immortality' fallacy. I also explain it concisely on my blog, in the post on the Meaning of Probability in an MWI.

Basically, personal identity is not fu... (read more)

Interesting. Do you know of place on the net where I can see what other (independent, mathematically knowledgeable) people have to say about its implications? It's asking for a lot maybe, but I think that would be the most efficient way for me to gain info about it, if there is.

Your first argument seems to say that if someone simulated universe A a thousand times and then simulated universe B once, and you knew only that you were in one of those simulations, then you'd expect to be in universe A.

That's right, Nisan (all else being equal, such as A and B having the same # of observers).

I don't see why your prior should assign equal probabilities to all instances of simulation rather than assigning equal probabilities to all computationally distinct simulations.

In the latter case, at least in a large enough universe (or quan... (read more)

It's not a Newcomb problem. It's a problem of how much his promises mean.

Either he created a large enough cost to leaving if he is unhappy (in that he would have to break his promise) to justify his belief that he won't leave, or he did not. If he did, he doesn't have the option to "take both" and get the utility from both, because that would incur the cost. (Breaking his promise would have negative utility to him in and of itself.) It sounds like that's what ended up happening. If he did not, he doesn't have the option to propose sincerely, since he knows it's not true that he will surely not leave.

1Academian
Creating internal deterrents is a kind of self modification, and you're right that it's a way of systematically removing or altering one's options.

Ata, there are many things wrong with your ideas. (Hopefully saying that doesn't put you off - you want to become less wrong, I assume.)

it is more difficult to get to the point where it actually seems convincing and intuitively correct, until you independently invent it for yourself

I have indeed independently invented the "all math exists" idea myself, years ago. I used to believe it was almost certainly true. I have since downgraded its likelihood of being true to more like 50% as it has intractable problems.

If it saved a copy of the univ

... (read more)
2solipsist
The choice of your turing machine doesn't much matter, since all turing machines can simulate each other. If you choose the "wrong" turing machine, your measures will be off by at most a constant factor (the complexity penalty of an interpreter for the "right" machine language).
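The "constant factor" here is the usual invariance theorem. A sketch of the statement, writing K_U for prefix complexity relative to machine U and m_U for the induced universal measure (this notation is an assumption of mine, not something from the comment):

```latex
K_V(x) \;\le\; K_U(x) + c_{UV},
\qquad
m_V(x) \;\ge\; 2^{-c_{UV}}\, m_U(x) \quad \text{for all } x,
```

where c_{UV} is the length of a V-program that interprets U, so it depends on the two machines but not on x.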
1wnoise
For continuous functions, we do. See "abstract stone duality".
8Nisan
I'm not so sure, Mallah. Your first argument seems to say that if someone simulated universe A a thousand times and then simulated universe B once, and you knew only that you were in one of those simulations, then you'd expect to be in universe A. I think your expectation depends entirely on your prior, and I don't see why your prior should assign equal probabilities to all instances of simulation rather than assigning equal probabilities to all computationally distinct simulations. (I'm assuming the simulation of universe A includes every Everett branch, or else it includes only a single Everett branch and it's the same one in every instance.) What if you run a simulation of universe A on a computer whose memory is mirrored a thousand times on back-up hard disks? What if it only has one hard disk, but it writes each bit a thousand times, just to be safe? Does this count as a thousand copies of you? As for wavefunction amplitudes, I don't see why that should have anything to do with the number of instantiations of a simulation.

I agree that a claim of sound reasoning methodology is easy to fake, and the writer could easily be mistaken. So it's very weak evidence. However, it's not no evidence, because if the writer would have said "my belief in X is based on faith" that would probably decrease your trust in his conclusions compared to those of someone who didn't make any claims about their methods.

Academician, what you are explicitly not saying is that the aspects of reality that give rise to consciousness can be described mathematically. Well, parts of your post seem to imply that the mathematically describable functions are what matter, but other parts deny it. So it's confusing, rather than enlightening. But I'll take you at your word that you are not just a reductionist.

So you are a "monist" but, as David Chalmers has described such positions, in the spirit of dualism. As far as I am concerned, you are a dualist, because the only ... (read more)

1Academian
Correct. I just wrote a follow up to acknowledge this. In short, I can only defend so much at one time :)

Wei, the relationship between computing power and the probability rule is interesting, but doesn't do much to explain Born's rule.

In the context of a many worlds interpretation, which I have to assume you are using since you write of splitting, it is a mistake to work with probabilities directly. Because the sum is always normalized to 1, probabilities deal (in part) with global information about the multiverse, but people easily forget that and think of them as local. The proper quantity to use is measure, which is the amount of consciousness that each ... (read more)

"our intuition of identical copy immortality"

Speak for yourself - I have no such intuition.

1Wei Dai
I don't claim that everyone has that intuition, which is why I said "I guess that most people would do so..." It seems that most people in these comments, at least, do prefer A.

Supposedly "we get the intuition that in a copying scenario, killing all but one of the copies simply shifts the route that my worldline of conscious experience takes from one copy to another"? That, of course, is a completely wrong intuition which I feel no attraction to whatsoever. Killing one does nothing to increase consciousness in the others.

See "Many-Worlds Interpretations Can Not Imply 'Quantum Immortality'"

http://arxiv.org/abs/0902.0187