I don't think I got visibly hurt or angry. In fact, when I did it, I was feeling more tempted than angry. I was in the middle of a conversation with another guy, and her rear appeared nearby, and I couldn't resist.
It made me seem like a jerk, which is bad, but not necessarily low status. Acting without apparent fear of the consequences, even stupidly, is often respected as long as you get away with it.
Another factor is that this was a 'high-status' woman. I'm not sure, but she might be related to a celebrity. (I didn't know that at the time.) Hence, any story linking me to her may be 'bad publicity' for me, but there is the old saying that 'there's no such thing as bad publicity'.
It was a single swat to the buttocks, done in full sight of everyone. There was other ass-spanking going on, between people who knew each other - done as a joke - so in context it was not so unusual. I would not have done it outside of that context, nor would I have done it if my inhibitions had not been lowered by alcohol; nor would I do it again even if they were.
Yes, she deserved it!
It was a mistake. Why? It exposed me to more risk than was worthwhile, and while I might have hoped that (aside from simple punishment) it would teach her the lesson tha...
Other people (that I have talked to) seem to be divided on whether it was a good thing to do or not.
[Note: this is going to sound at first like PUA advice, but is actually about general differences between the socially-typical and atypical in the sending and receiving of "status play" signals, using the current situation as an example.]
I don't know about "good", but for it to be "useful" you would've needed to do it first. (E.g. Her: "Buy me a drink" You: "Sure, now bend over." Her: "What?" ...
Other people (that I have talked to) seem to be divided on whether it was a good thing to do or not.
It sure was one hell of a low status signal. The worst possible way you can fail a shit test is to get visibly hurt and angry.
As for whether she deserved it, well, if you want to work in the kitchen, better be prepared to stand the heat. Expecting women you hit on to follow the same norms of behavior as your regular buddies and colleagues, and then getting angry when they don't, is like getting into a boxing match and then complaining you've been assaulted.
I still don’t understand how she “deserved” to have you escalate the encounter with a “hard” physical spanking; nor do I understand how, if you spanked her in a joking context, you would consider it punishment or “some measure of revenge.” From what you’ve said, it doesn’t seem like you were on sufficiently friendly terms with her that the spanking was in fact treated as teasing/joking action; you previously stated that she was not amused by the spanking, her brother threatened you, and you apologized.
I’m certainly not trying to say that her behavior wasn’t worthy of serious disapproval and verbal disparagement. But responding to her poor behavior with physical actions rather than words seems at least equally inappropriate.
I can confirm that this does happen at least sometimes (USA). I was at a bar, and I approached a woman who is probably considered attractive by many (skinny, bottle blonde) and started talking to her. She soon asked me to buy her a drink. Being not well versed in such matters, I agreed, and asked her what she wanted. She named an expensive wine, which I agreed to get her a glass of. She largely ignored me thereafter, and didn't even bother taking the drink!
(I did obtain some measure of revenge later that night by spanking her rear end hard, though I d...
In European bars and nightclubs, if (relatively) attractive girls ask strangers to buy them drinks or food, it typically means they are doing it professionally.
There is even a special phrase, "consume girl", for a girl whose job is to lure clueless customers into buying expensive drinks, for a cut of the profit. The surest sign of a "consume girl" is that she typically doesn't consume what she asks for.
It's all about money, and has nothing to do with social games whatsoever. They are not spoiled brats; they are trained for this job.
I am not sure how common this "profession" is in the US, but in Europe it's relatively common.
I don’t like to go meta, but this comment and its upvotes (4 at the time I write) are among the more disturbing things I’ve seen on this site. I have to assume that they reflect voters’ appreciation for a real-life story of a woman asking a man to buy a drink, rather than approval of the use of violence to express displeasure over someone else’s behavior and perceived morality in a social situation.
I’m also surprised that you’re telling this story without expressing any apparent remorse about your behavior, but I guess the upvotes show that you read the LW crowd better than I do.
But Stuart_Armstrong's description is asking us to condition on the camera showing 'you' surviving.
That condition imposes post-selection.
I guess it doesn't matter much if we agree on what the probabilities are for the pre-selection v. the post-selection case.
Wrong - it matters a lot because you are using the wrong probabilities for the survivor (in practice this affects things like belief in the Doomsday argument).
...I believe the strong law of large numbers implies that the relative frequency converges almost surely to p as the number of Bernoulli t
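A tiny simulation illustrates the convergence being invoked here (a hedged sketch; the value of `p` and the trial counts are arbitrary choices of mine, not from the thread):

```python
import random

random.seed(0)
p = 0.3  # true success probability of each Bernoulli trial

freqs = {}
for n in (100, 10_000, 1_000_000):
    # relative frequency of success over n independent trials
    freqs[n] = sum(random.random() < p for _ in range(n)) / n

print(freqs)  # the frequencies approach p as n grows
```

This is, of course, the many-trials picture; it says nothing by itself about the one-shot case being argued over below.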
It is only possible to fairly "test" beliefs when a related objective probability is agreed upon
That's wrong; behavioral tests (properly set up) can reveal what people really believe, bypassing talk of probabilities.
Would you really guess "red", or do we agree?
Under the strict conditions above and the other conditions I have outlined (long-time-after, no other observers in the multiverse besides the prisoners), then sure, I'd be a fool not to guess red.
But I wouldn't recommend it to others, because if there are more people, that ...
The way you set up the decision is not a fair test of belief, because the stakes are more like $1.50 to $99.
To fix that, we need to make 2 changes:
1) Let us give any reward/punishment to a third party we care about, e.g. SB.
2) The total reward/punishment she gets won't depend on the number of people who make the decision. Instead, we will poll all of the survivors from all trials and pool the results (or we can pick 1 survivor at random, but let's do it the first way).
The majority decides what guess to use, on the principle of one man, one vote. That is ...
If that were the case, the camera might show the person being killed; indeed, that is 50% likely.
Pre-selection is not the same as our case of post-selection. My calculation shows the difference it makes.
Now, if the fraction of observers of each type that are killed is the same, the difference between the two selections cancels out. That is what tends to happen in the many-shot case, and we can then replace probabilities with relative frequencies. One-shot probability is not relative frequency.
Adding that condition is post-selection.
Note that "If you (being asked before the killing) will survive, what color is your door likely to be?" is very different from "Given that you did already survive, ...?". A member of the population to which the first of these applies might not survive. This changes the result. It's the difference between pre-selection and post-selection.
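The difference can be made concrete with a simulation of the 100-door setup discussed in this thread (a sketch; the variable names are mine: heads kills the red-door occupant, tails kills the 99 blue-door occupants). Pooling survivors across many trials gives a different number than counting per trial:

```python
import random

random.seed(0)
TRIALS = 100_000

pooled_blue = 0   # surviving observer-instances behind blue doors, pooled over all trials
pooled_total = 0  # all surviving observer-instances, pooled over all trials
blue_trials = 0   # trials in which the survivors are the blue-door occupants

for _ in range(TRIALS):
    heads = random.random() < 0.5
    if heads:
        # heads: the red-door occupant is killed; the 99 blue-door occupants survive
        pooled_blue += 99
        pooled_total += 99
        blue_trials += 1
    else:
        # tails: the blue-door occupants are killed; the 1 red-door occupant survives
        pooled_total += 1

print(pooled_blue / pooled_total)  # ~0.99: fraction of pooled survivors behind blue doors
print(blue_trials / TRIALS)        # ~0.5: fraction of trials whose survivors are blue
```

Which of these counts applies to a given survivor is exactly what is in dispute; the simulation only shows that the two conventions come apart.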
This subtly differs from Bostrom's description, which says 'When she awakes on Monday', rather than 'Monday or Tuesday.'
He makes clear though that she doesn't know which day it is, so his description is equivalent. He should have written it more clearly, since it can be misleading on the first pass through his paper, but if you read it carefully you should be OK.
So on average ...
'On average' gives you the many-shot case, by definition.
In the 1-shot case, there is a 50% chance she wakes up once (heads), and a 50% chance she wakes up twice (tails). ...
Under a frequentist interpretation
In the 1-shot case, the whole concept of a frequentist interpretation makes no sense. Frequentist thinking invokes the many-shot case.
...Reading Bostrom's explanation of the SB problem, and interpreting 'what should her credence be that the coin will fall heads?' as a question asking the relative frequency of the coin coming up heads, it seems to me that the answer is 1/2 however many times Sleeping Beauty's later woken up: the fair coin will always be tossed after she awakes on Monday, and a fair coin's probability of
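The two frequencies at issue can be sketched directly (my own framing of the standard setup: one awakening on heads, two on tails):

```python
import random

random.seed(0)
TOSSES = 100_000

heads_tosses = 0      # tosses that come up heads
awakenings = 0        # total awakenings across all tosses
heads_awakenings = 0  # awakenings that follow a heads toss

for _ in range(TOSSES):
    if random.random() < 0.5:
        heads_tosses += 1
        awakenings += 1   # heads: Beauty is woken once (Monday)
        heads_awakenings += 1
    else:
        awakenings += 2   # tails: Beauty is woken twice (Monday and Tuesday)

print(heads_tosses / TOSSES)          # ~1/2: frequency of heads per coin toss
print(heads_awakenings / awakenings)  # ~1/3: frequency of heads per awakening
```

Reading "her credence that the coin will fall heads" as the first frequency gives 1/2; reading it as the second gives 1/3.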
A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?
Presumably you heard the announcement.
This is post-selection, because pre-selection would have been "Either you are dead, or you hear that whoever was to be killed has been killed. What are your odds of being blue-doored now?"
...The 1-shot case (which I think you are using to refer to situation B in Stuart_Armstrong's top-level post...?) describes a situation defined to have multiple possible outcomes, but there's only
I think talking about 'observers' might be muddling the issue here.
That's probably why you don't understand the result; it is an anthropic selection effect. See my reply to Academician above.
...We could talk instead about creatures that don't understand the experiment, and the result would be the same. Say we have two Petri dishes, one dish containing a single bacterium, and the other containing a trillion. We randomly select one of the bacteria (representing me in the original door experiment) to stain with a dye. We flip a coin: if it's heads, we kill
Given that others seem to be using it to get the right answer, consider that you may rightfully believe SIA is wrong because you have a different interpretation of it, which happens to be wrong.
Huh? I haven't been using the SIA, I have been attacking it by deriving the right answer from general considerations (that is, P(tails) = 1/2 for the 1-shot case in the long-time-after limit) and noting that the SIA is inconsistent with it. The result of the SIA is well known - in this case, 0.01; I don't think anyone disputes that.
...P(R|KS) = P(R|K)·P(S|RK)/P(S|K)
Actually, if we consider that you could have been an observer-moment either before or after the killing, finding yourself to be after it does increase your subjective probability that fewer observers were killed. However, this effect goes away if the amount of time before the killing was very short compared to the time afterwards, since you'd probably find yourself afterwards in either case; and the case we're really interested in, the SIA, is the limit when the time before goes to 0.
I just wanted to follow up on this remark I made. There is a subtle an...
I omitted the "|before" for brevity, as is customary in Bayes' theorem.
That is not correct. The prior that is customary in using Bayes' theorem is the one which applies in the absence of additional information, not before an event that changes the numbers of observers.
For example, suppose we know that x=1,2,or 3. Our prior assigns 1/3 probability to each, so P(1) = 1/3. Then we find out "x is odd", so we update, getting P(1|odd) = 1/2. That is the standard use of Bayes' theorem, in which only our information changes.
OTOH, suppose...
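The standard-update case from the x = 1, 2, 3 example can be checked mechanically (a sketch; the dictionary representation is mine):

```python
from fractions import Fraction

# prior: x is 1, 2, or 3, each with probability 1/3
prior = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}

# learn "x is odd": drop the excluded value and renormalize
posterior = {x: p for x, p in prior.items() if x % 2 == 1}
total = sum(posterior.values())
posterior = {x: p / total for x, p in posterior.items()}

print(posterior[1])  # 1/2
```

Only our information changed; the set of possible observers did not, which is the contrast being drawn with the killing scenario.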
Cupholder:
That is an excellent illustration ... of the many-worlds (or many-trials) case. Frequentist counting works fine for repeated situations.
The one-shot case requires Bayesian thinking, not frequentist. The answer I gave is the correct one, because observers do not gain any information about whether the coin was heads or tails. The number of observers that see each result is not the same, but the only observers that actually see any result afterwards are the ones in either heads-world or tails-world; you can't count them all as if they all exist.
I...
the justification for reasoning anthropically is that the set Ω of observers in your reference class maximizes its combined winnings on bets if all members of Ω reason anthropically
That is a justification for it, yes.
When most of the members of Ω arise from merely non-actual possible worlds, this reasoning is defensible.
Roko, on what do you base that statement? Non-actual observers do not participate in bets.
The SIA is not an example of anthropic reasoning; anthropic implies observers, not "non-actual observers".
See this post for an exa...
I am very skeptical about SIA
Rightly so, since the SIA is false.
The Doomsday argument is correct as far as it goes, though my view of the most likely filter is environmental degradation + AI will have problems.
Another reason I wouldn't put any stock in the idea that animals aren't conscious is that the complexity cost of a model in which we are conscious and they (other animals with complex brains) are not is many bits of information. 20 bits gives a prior probability factor of 10^-6 (2^-20). I'd say that would outweigh the larger number of animals, even if you were to include the animals in the reference class.
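The arithmetic behind the 20-bit figure, as a quick check (nothing here beyond the penalty factor itself):

```python
# a model needing k extra bits of description gets a prior penalty factor of 2^-k
k = 20
penalty = 2.0 ** -k
print(penalty)  # ~9.54e-7, i.e. roughly 10^-6
```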
That kind of anthropic reasoning is only useful in the context of comparing hypotheses, Bayesian style. Conditional probabilities matter only if they are different given different models.
For most possible models of physics, e.g. X and Y, P(Finn|X) = P(Finn|Y). Thus, that particular piece of info is not very useful for distinguishing models for physics.
OTOH, P(21st century|X) may be >> P(21st century|Y). So anthropic reasoning is useful in that case.
As for the reference class, "people asking these kinds of questions" is probably the best choice. Thus I wouldn't put any stock in the idea that animals aren't conscious.
A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?
Here, the probability is certainly 99%.
Sure.
...B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later
rwallace, nice reductio ad absurdum of what I will call the Subjective Probability Anticipation Fallacy (SPAF). It is somewhat important because the SPAF seems much like, and may be the cause of, the Quantum Immortality Fallacy (QIF).
You are on the right track. What you are missing, though, is an account of how to deal properly with anthropic reasoning, probability, and decisions. For that, see my paper on the 'Quantum Immortality' fallacy. I also explain it concisely on my blog, in a post on the Meaning of Probability in an MWI.
Basically, personal identity is not fu...
Your first argument seems to say that if someone simulated universe A a thousand times and then simulated universe B once, and you knew only that you were in one of those simulations, then you'd expect to be in universe A.
That's right, Nisan (all else being equal, such as A and B having the same # of observers).
I don't see why your prior should assign equal probabilities to all instances of simulation rather than assigning equal probabilities to all computationally distinct simulations.
In the latter case, at least in a large enough universe (or quan...
It's not a Newcomb problem. It's a problem of how much his promises mean.
Either he created a cost to leaving if he is unhappy that is large enough (in that he would have to break his promise) to justify his belief that he won't leave, or he did not. If he did, he doesn't have the option to "take both" and get the utility from both, because that would incur the cost: breaking his promise would have negative utility to him in and of itself. It sounds like that's what ended up happening. If he did not, he doesn't have the option of proposing sincerely, since he knows it's not true that he will surely not leave.
Ata, there are many things wrong with your ideas. (Hopefully saying that doesn't put you off - you want to become less wrong, I assume.)
it is more difficult to get to the point where it actually seems convincing and intuitively correct, until you independently invent it for yourself
I have indeed independently invented the "all math exists" idea myself, years ago. I used to believe it was almost certainly true. I have since downgraded its likelihood of being true to more like 50% as it has intractable problems.
...If it saved a copy of the univ
I agree that a claim of sound reasoning methodology is easy to fake, and the writer could easily be mistaken. So it's very weak evidence. However, it's not no evidence: if the writer had said "my belief in X is based on faith", that would probably decrease your trust in his conclusions compared to those of someone who made no claims about his methods.
Academician, what you are explicitly not saying is that the aspects of reality that give rise to consciousness can be described mathematically. Well, parts of your post seem to imply that the mathematically describable functions are what matter, but other parts deny it. So it's confusing, rather than enlightening. But I'll take you at your word that you are not just a reductionist.
So you are a "monist" but, as David Chalmers has described such positions, in the spirit of dualism. As far as I am concerned, you are a dualist, because the only ...
Wei, the relationship between computing power and the probability rule is interesting, but doesn't do much to explain Born's rule.
In the context of a many worlds interpretation, which I have to assume you are using since you write of splitting, it is a mistake to work with probabilities directly. Because the sum is always normalized to 1, probabilities deal (in part) with global information about the multiverse, but people easily forget that and think of them as local. The proper quantity to use is measure, which is the amount of consciousness that each ...
Supposedly "we get the intuition that in a copying scenario, killing all but one of the copies simply shifts the route that my worldline of conscious experience takes from one copy to another"? That, of course, is a completely wrong intuition which I feel no attraction to whatsoever. Killing one does nothing to increase consciousness in the others.
See "Many-Worlds Interpretations Can Not Imply 'Quantum Immortality'"
Mitchell, you are on to an important point: Observers must be well-defined.
Worlds are not well-defined, and there is no definite number of worlds (given standard physics).
You may be interested in my proposed Many Computations Interpretation, in which observers are identified not with so-called 'worlds' but with implementations of computations: http://arxiv.org/abs/0709.0544
See my blog for further discussion: http://onqm.blogspot.com/