All of neq1's Comments + Replies

If you are not going to do an actual data analysis, then I don't think there is much point in thinking about Bayes' rule. You could just reason as follows: "here are my prior beliefs. ooh, here is some new information. i will now adjust my beliefs, by trying to weigh the old and new data based on how reliable and generalizable i think the information is." If you want to call epistemology that involves attaching probabilities to beliefs, and updating those probabilities when new information is available, 'bayesian', that's fine. But, unless you h... (read more)

1Tyrrell_McAllister
Yes, the importance of thinking in terms of distributions instead of individual probabilities is another valuable lesson of "pop" Bayesianism.

I feel like this creates more misconceptions than it clears up. It's very dismissive of something that is really in the early phases of being studied.

The primary effect that reading this had on me was the change in state from [owning a cloak hadn't occurred to me] to [owning a cloak sounds awesome; i am unhappy that i hadn't thought of it on my own]

3[anonymous]
Heh. For me it was mainly "Which Etsy supplier was that?" I've been wanting a good cloak. Although the bits about "just having a thing that you could get the benefit of may not help if the previous lack of it meant the necessary motivation to use it was never really instilled as a pattern" actually helped. Going to have to plan to make more muffins.

The question "what proportion of phenotypic variability is due to genetic variability?" always has the same answer: "it depends!" What population of environments are you doing this calculation over? A trait can go from close to 0% heritable to close to 100% heritable, depending on the range of environments in the sample. That's a definition problem. Further, what should we count as 'genetic'? Gene expression can depend on the environment of the parents, for example (DNA methylation, etc.). That's an environmental... (read more)
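(A minimal simulation sketch of that first point, with made-up numbers; the additive trait model and the standard deviations are hypothetical, not from any study. The usual ratio Var(G) / (Var(G) + Var(E)) swings from near 1 to near 0 depending only on how much environmental variation the sampled population contains.)

```python
import random, statistics

random.seed(0)

def heritability_estimate(n, sd_gene, sd_env):
    # Purely additive toy model: phenotype = genetic value + environmental deviation.
    g = [random.gauss(0, sd_gene) for _ in range(n)]
    e = [random.gauss(0, sd_env) for _ in range(n)]
    p = [gi + ei for gi, ei in zip(g, e)]
    return statistics.variance(g) / statistics.variance(p)

# Identical genetic variance, different ranges of sampled environments:
print(round(heritability_estimate(50_000, sd_gene=1.0, sd_env=0.1), 2))  # ~0.99
print(round(heritability_estimate(50_000, sd_gene=1.0, sd_env=5.0), 2))  # ~0.04
```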

7jsalvatier
The definition of 'heritable' being underspecified (since you have to specify what population of environments you're considering) is not the same as being incoherent.

Adoption studies are biased toward the null of no parenting effect, because adoptive parents aren't randomly selected from the population of potential parents (they often are screened to be similar to biological parents).

Twin studies I think are particularly flawed when it comes to estimating heritability (a term that has an incoherent definition). Twins have a shared pre-natal environment. In some cases, they even share a placenta.

Plus, the whole gene vs. environment discussion is obsolete, in light of the findings of the past decade. Everything is gene-environment interaction.

0JoshuaZ
So most of what you have written makes sense, but there are some major issues with some parts. Can you expand on why you think the definition is incoherent? This is a pretty standard term. The fact that many genes interact in a complicated way with the environment is not newly discovered. It doesn't change the fact that in some contexts genes or environment can matter more or less. For example, if one has a gene that codes for some form of mental retardation, in most cases environment can't change that. (I say in most cases because there are a few exceptions, especially related to trace nutrients or to bad reactions to specific compounds.) Similarly, if someone has severe lead poisoning, they are going to have pretty bad problems regardless of what genes they have. The first two points you made, while roughly valid, connect to a more general issue: yes, these studies have flaws, but just because a technique has flaws doesn't mean we can't use it to learn (especially when, as in this context, the issues you bring up are well known to the researchers).

wait, this isn't well done satire?

Sometimes, you can learn a few things about effective rotisserie from an unexpected place, even if you don't plan on serving Irish babies.

I don't think the questions even make much sense. We don't live in the world that we once thought we did, where the path from genotype to phenotype follows a simple DNA->RNA->protein model. The real action is in the switches, which are affected by the environment (and so on).

I'm not opposed to ever using terms like "realist." I'm opposed to it as it was used in the main post, where people who agree with my views are realists, and people who do not are denialists.

7dlthomas
I didn't interpret the original post that way. "X realist" on this site doesn't typically mean "person whose views about X are realistic" but rather "person who believes X is a real thing." In this case, a "race realist" would be someone who believes that there are real, significant differences between races, presumably on a genetic basis. A race anti-realist would be someone who does not believe that. Both of these are categories of positions, into which a variety of different particular viewpoints might fall.
3Emile
I would expect the implicit opposite of "race realist" to be "race idealist"; i.e. the opposition is roughly between focusing on things as they are, vs. things as they should be.

It implies that people who reject their claims are not being realistic. I want to be a realist, but I certainly have seen no evidence that any particular race is more likely to commit unscrupulous acts if you control for environment (if that were even possible). It's a propaganda term, like '[my cause] realist.'

2GLaDOS
-23Aurini
4Jayson_Virissimo
If "realism" is just an applause light, then why do people (including me) refer to themselves (non-ironically) as anti-realists (like moral anti-realists or scientific anti-realists)?
6Multiheaded
This might or might not be so, however, if you suddenly saw strong evidence to the contrary, would you hold the genetically afflicted race in disgust and contempt, treating it as having less moral worth than the more fortunate races? Or would you try to help its members eliminate the unwanted cultural/behavioral differences (without necessarily harming yourself in any way)?
3David Althaus
What do you think of the term 'moral realism' ? Edit: Damn it; I always forget to read all comments before writing ones myself.
-25Aurini

Would you please elaborate?

Because if TDT endorsed the action, then other people would be able to deduce that TDT endorsed the action, and that (whether or not it had happened in any particular case) their lives would be in danger in any hospital run by a timeless decision theorist, and then we'd be in much the same boat. Therefore TDT calculates that the correct thing for TDT to output in order to maximize utility is "Don't kill the traveler," and thus the doctor doesn't kill the traveler.

TDT could deduce that people would deduce that TDT would not endorse the action, ... (read more)

8APMason
I don't think that's right. A TDT agent wants people to deduce that TDT would not endorse the action, and therefore TDT would not endorse the action. If it did, it would be the equivalent of defecting in the Prisoner's Dilemma - the other guy would simulate you defecting even if he cooperated, and therefore defect himself, and you end up choosing a sub-optimal option. You can't say "the other guy's going to cooperate so I'll defect" - the other guy's only going to cooperate if he thinks you are (and he thinks you wouldn't if he defects), and if your decision theory is open to the consideration "the other guy's going to cooperate so I'll defect", the other won't think you'll cooperate if he does, and will therefore defect. You can't assume that you've thought it all through one more time than the other guy.
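(A toy sketch of that argument with made-up Prisoner's Dilemma payoffs; this is only an illustration of the "both run the same procedure" point, not a rendering of anyone's formal decision theory.)

```python
# Payoff matrix for a one-shot Prisoner's Dilemma: (my payoff, their payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# If both players are known to run the *same* deterministic procedure,
# the only reachable outcomes are (C, C) or (D, D); the tempting
# "I defect while they cooperate" cell is simply not available.
for shared_policy in ("C", "D"):
    outcome = (shared_policy, shared_policy)
    print(shared_policy, "->", PAYOFF[outcome])
# C -> (3, 3)
# D -> (1, 1)
# A procedure that outputs C does better for the agents running it than one
# that outputs D, which is the sense in which "defect because they'll
# cooperate anyway" is not something the procedure can actually choose.
```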

Genes just aren't as much of the story as we thought they were. Whether or not a gene increases fitness might depend on whether it is methylated or not, for example. Until recently, we didn't realize that there could be transgenerational transmittance of DNA methylation patterns due to environmental factors.

And as it turns out, all these predictions are correct.

2AlephNeil
I understand 'the selfish gene theory' to be the idea that we should expect to see genes whose 'effects' are such as to cause their own replication to be maximized, as opposed to promoting the survival/reproduction of the individual, group or species, whenever these goals differ. This is almost a tautology, modulo the tricky business of defining the 'effects' of a particular gene. I don't see how the existence of epigenetic inheritance has anything to do with it, especially as the selfish gene theory doesn't depend on genes being made of DNA, only that whatever they are, genes can preserve information indefinitely.
3lukeprog
Overconfident? Really?

* "...looks like a good candidate for an evolved intuition"
* "Many researchers think..."
* "Many researchers suggest..."
* "Our brains may have evolved..."
* "but we may not have evolved..."
* "...it seems unlikely that..."

And of course I haven't defended selfish gene theory.

"The Bridge". There was one person who survived and said he changed his mind once he was airborne. My recollection of the movie is that most of the people who jumped had been wanting to die for most of their lives. Even their family members seemed at peace with it for that reason.

The first one is flawed, IMO, but not for the reason you gave (and I wouldn't call it a 'trick'). The study design is flawed. They should not ask everyone "which is more probable?" People might just assume that the first choice, "Linda is a bank teller" really means "Linda is a bank teller and not active in the feminist movement" (otherwise the second answer would be a subset of the first, which would be highly unusual for a multiple choice survey).

The Soviet Union study has a better design, where people are randomized and only see one option and are asked how probable it is.

7HughRistik
Yes. The bank teller example is probably flawed for that reason. When real people talk to each other, they obey the cooperative principle of Grice or flout it in obvious ways. A cooperative speaker would ask whether Linda was a bank teller who was active in the feminist movement, or if Linda was a bank teller regardless of whether she was active in the feminist movement (showing that one answer included the other). Eliezer addresses this possibility in Conjunction Controversy. Yet Eliezer is strangely tossing out a lot of information. We don't just know that she is a bank teller and anti-nuclear, we also know: "She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice." At least where I went to college, the conditional probability that someone was a feminist given that they were against nuclear power, majored in philosophy, and were concerned with discrimination and social justice was probably pretty damn high.
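(A minimal numerical sketch of the underlying probability rule, with made-up numbers: however high the conditional probability of "feminist" is, the conjunction can never be more probable than "bank teller" alone.)

```python
# Hypothetical numbers; the inequality holds for any joint distribution.
p_bank_teller = 0.05            # P(bank teller | Linda's description)
p_feminist_given_teller = 0.90  # P(feminist | bank teller, description)

p_conjunction = p_bank_teller * p_feminist_given_teller
print(p_bank_teller)    # 0.05
print(p_conjunction)    # 0.045, and it can never exceed 0.05
assert p_conjunction <= p_bank_teller
```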

You have to realize that a great number of things are discussed in these proceedings that the mind just can't deal with, people are simply too tired and distracted, and by way of compensation they resort to superstition.

-- Kafka, The Trial

Justice is an artefact of custom. Where customs are unsettled its dictates soon become dated. Ideas of justice are as timeless as fashions in hats.

-John Gray, Straw Dogs

4jfm
~ Epicurus, Principal Doctrines
4Jayson_Virissimo
-Anthony de Jasay, Inspecting the Foundations of Liberalism

Conventions against torts like murder and theft are older than civilization. I think it is a safe bet they will still be around in a thousand years.

Who has not experienced the chilling memory of the better things? How it creeps over the spirit of one's current dreams! Like the specter at the banquet it stands, its substanceless eyes viewing with a sad philosophy the make-shift feast.

-Theodore Dreiser, The Titan

If you look at Table 2 in the paper, it shows doses of each vitamin for every study that is considered low risk for bias. I count 9 studies that have vitamin A <10,000 IU and vitamin E <300 IU, which is what PhilGoetz said are good dosage levels.

The point estimates from those 9 studies (see figure 2) are: 2.88, 0.18, 3.3, 2.11, 1.05, 1.02, 0.78, 0.87, 1.99. (1 favors control)

Based on this quick look at the studies, I don't see any reason to believe that a "hockey stick" model will show a benefit of supplements at lower dose levels.
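(A quick eyeball summary of those nine point estimates, just to make the claim concrete; this ignores study weights and confidence intervals.)

```python
import statistics

# The nine low-bias, lower-dose point estimates quoted above (risk ratios; 1 favors control).
rr = [2.88, 0.18, 3.3, 2.11, 1.05, 1.02, 0.78, 0.87, 1.99]

print(sum(r > 1 for r in rr), "of", len(rr), "estimates are above 1")   # 6 of 9
print("median:", statistics.median(rr))                                 # 1.05
```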

0wedrifid
The titular contention used the word 'kill'. That's what hockey sticks tend to do.

"And I don't expect I will ever have to do that."

You do not sound 100% certain.

-1Perplexed
Indirect response: Perhaps you should discuss my level of confidence with Tim Tyler. When you two reach consensus regarding my level of confidence, then come back and challenge me about it. Direct response: Do you have some point in making your observation?

It would be nice if the top scoring all-time posts really reflected their impact. Right now there is some bias towards newer posts. Plus, Eliezer's sequences appeared at OB first, which greatly reduced LW upvotes.

Possible solution: every time a post is linked to from a new post, it gets an automatic upvote (perhaps we don't count it if linked to by the same author). I don't know if it's technically feasible.

2Document
And if linked to by someone who's already upvoted it.

That would be great. I'd love to see the results.

2datadataeverywhere
I suppose what I was getting at was asking whether it is something that you have enough interest in or ideas about that you would like to collaborate on.

In the first example, you couldn't play unless you had at least 100M dollars of assets. Why would someone with that much money risk 100M to win a measly 100K, when the expected payoff is so bad?

3Will_Newsome
Yeah, uhm, I figured I'd misunderstood that, because my second hypothesis was that someone was trolling us. Looking at the poster's previous comments I'm more inclined to think that he just missed the whole 'Bayes is god' meme.

In cases where a scientist is using a software package that they are uncomfortable with, I think output basically serves as the only error checking. First, they copy some sample code and try to adapt it to their data (while not really understanding what the program does). Then, they run the software. If the results are about what they expected, they think "well, we must have done it right." If the results are different than they expected, they might try a few more times and eventually get someone involved who knows what they are doing.

Error finding: I strongly suspect that people are better at finding errors if they know there is an error.

For example, suppose we did an experiment where we randomized computer programmers into two groups. Both groups are given computer code and asked to try and find a mistake. The first group is told that there is definitely one coding error. The second group is told that there might be an error, but there also might not be one. My guess is that, even if you give both groups the same amount of time to look, group 1 would have a higher error identification success rate.

Does anyone here know of a reference to a study that has looked at that issue? Is there a name for it?

Thanks

5datadataeverywhere
I know of no such study, and have failed to find one in a quick literature search. I occasionally run behavioral psych studies, and this seems like a good candidate. How would you feel about me adapting this into a study?
6CronoDAS
Prediction: Group 1 would also have a higher false-positive rate.

Yes, that's a good point. That would be considered using a data augmentation prior (Sander Greenland has advocated such an approach).

only if you keep specifying hyper-priors, which there is no reason to do

0Oscar_Cunningham
Exactly. There's no point in the first meta-prior either.

In the second example the person was speaking informally, but there is nothing wrong with specifying a probability distribution for an unknown parameter (and that parameter could be the probability of heads).
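(A minimal sketch of what that looks like, assuming scipy is available; the prior and the flip counts are made up, and this is just an illustrative conjugate update, not the data-augmentation-prior machinery mentioned above.)

```python
from scipy import stats   # assumes scipy is available

a, b = 1, 1               # Beta(1, 1): uniform prior over P(heads)
heads, tails = 7, 3       # hypothetical data from 10 flips

posterior = stats.beta(a + heads, b + tails)   # conjugate update -> Beta(8, 4)
print(posterior.mean())                        # ~0.667
print(posterior.interval(0.95))                # ~(0.39, 0.89)
```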

Hm, good point. Since the usual thing is .5, the claim should be the alternative. I was thinking in terms of trying to reject their claim (which it wouldn't take much data to do), but I do think my setup was non-standard. I'll fix it later today

Very good examples of perceptions driving self-selection.

It might be useful to discuss direct and indirect effects.

Suppose we want to compare fatality rates if everyone drove a Volvo versus if no one did. If the fatality rate was lower in the former scenario than in the latter, that would indicate that Volvos (causally) decrease fatality rates.

It's possible that it is entirely through an indirect effect. For example, the decrease in the fatality rate might entirely be due to behavior changes (maybe when you get in a Volvo you think 'safety' and dri... (read more)
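(A toy mediation sketch of that possibility, with made-up numbers: the entire contrast between the two scenarios runs through the behavior change, so the direct effect of the car itself is zero.)

```python
import random
random.seed(0)

def fatality_risk(drives_volvo, cautious):
    base = 0.010                 # baseline fatality risk (made up)
    direct_effect = 0.000        # the car itself changes nothing in this toy model
    behavior_effect = -0.004 if cautious else 0.0
    return base + (direct_effect if drives_volvo else 0.0) + behavior_effect

def population_rate(everyone_drives_volvo, n=200_000):
    deaths = 0
    for _ in range(n):
        # Driving a Volvo makes cautious behavior more likely (the mediator).
        cautious = random.random() < (0.8 if everyone_drives_volvo else 0.3)
        deaths += random.random() < fatality_risk(everyone_drives_volvo, cautious)
    return deaths / n

print(population_rate(True))    # ~0.0068
print(population_rate(False))   # ~0.0088
# The contrast is real, but it would vanish if behavior were held fixed:
# the direct effect is zero and the whole effect is indirect.
```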

3IlyaShpitser
Short nitpick -- lots of assumptions other than ignorability can work for identifying direct effects (there is more to life than covariate adjustment). In particular, if we can agree on the causal diagram, then all sorts of crazy identification can become possible.

In my opinion, the post doesn't warrant -90 karma points. That's pretty harsh. I think you have plenty to contribute to this site -- I hope the negative karma doesn't discourage you from participating, but rather, encourages you to refine your arguments (perhaps get feedback in the open thread first?)

1[anonymous]
That I get bad karma here is completely biased in my opinion. People just don't realize that I'm basing extrapolated conclusions on some shaky premises just like LW does all the time when talking about the future galactic civilization and risks from AI. The difference is, my predictions are much more based on evidence. It's a mockery of all that is wrong with this community. I already thought I'd get bad karma for my other post but was surprised not to. I'll probably get really bad karma now that I say this. Oh well :-) To be clear, this is a thought experiment about asking what we can and should do if we ultimately are prone to cause more suffering than happiness. It's nothing more than that. People suspect that I'm making strong arguments, that it is my opinion, that I ask for action. Which is all wrong, I'm not the SIAI. I can argue for things I don't support and not even think are sound.

How about spreading rationality?

This site, I suspect, mostly attracts high IQ analytical types who would have significantly higher levels of rationality than most people, even if they had never stumbled upon LessWrong.

It would be great if the community could come up with a plan (and implement it) to reach a wider audience. When I've sent LW/OB links to people who don't seem to think much about these topics, they often react with one of several criticisms: the post was too hard to read (written at too high of a level); the author was too arrogant (wh... (read more)

8DSimon
I think one possible strategy is to get people to start being rational about being in favor of things they already support (or being against things that they already disagree with). For example, if someone is anti-alt-med, but for political reasons rather than evidence-based reasons, get them to start listening to The Skeptic's Guide to the Universe or something similar. Once they see that rationality can bolster things they already support, they may be more likely to see it as trustworthy, and a valid motivation to "update" when it later conflicts with some of their other beliefs.

But: "You can be a virtue ethicist whose virtue is to do the consequentialist thing to do"

0taw
You are committing the fundamental attribution error if you think people are coherently "consequentialist" or coherently "not consequentialist", just like it's FAE to think people are coherently "honest" / "not honest" etc. All this is situational, and it would be good to push everyone into more consequentialism in contexts where it matters most - like charity and public policy. It matters less if people are consequentialist when dealing with their pets or deciding how to redecorate their houses, so there's less point focusing on those. And there's zero evidence that spillover between different areas where you can be "consequentialist" would even be large enough to bother with, let alone base ethics on.

Perhaps a better title would be "Bayes' Theorem Illustrated (My Ways)"

In the first example you use colored shapes of various sizes to illustrate the ideas visually. In the second example, you use plain rectangles of approximately the same size. If I were a visual learner, I don't know if your post would help me much.

I think you're on the right track in example one. You might want to use shapes whose relative areas are easier to estimate. It's hard to tell if one triangle is twice as big as another (as measured by area), but it's easie... (read more)

It seems to me that the standard solutions don't account for the fact that there are a non-trivial number of families who are more likely to have a 3rd child, if the first two children are of the same sex. Some people have a sex-dependent stopping rule.

P(first two children different sexes | you have exactly two children) > P(first two children different sexes | you have more than two children)

The other issue with this kind of problem is the ambiguity. What was the disclosure algorithm? How did you decide which child to give me information about? Without that knowledge, we are left to speculate.
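(A quick simulation of one such sex-dependent stopping rule; the specific rule, "stop once both sexes are represented, or at three children", is just an illustration.)

```python
import random
random.seed(0)

exactly_two, more_than_two = [], []
for _ in range(200_000):
    kids = [random.choice("BG") for _ in range(2)]
    if kids[0] == kids[1]:                 # same-sex pair: try for a third
        kids.append(random.choice("BG"))
    mixed_first_two = kids[0] != kids[1]
    (exactly_two if len(kids) == 2 else more_than_two).append(mixed_first_two)

print(sum(exactly_two) / len(exactly_two))       # 1.0 under this rule
print(sum(more_than_two) / len(more_than_two))   # 0.0 under this rule
# A real population mixes many rules, so the gap is smaller, but the
# direction of the inequality above is exactly this effect.
```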

4JenniferRM
This issue is also sometimes raised in cultures where male children are much more highly prized by parents. Most people falsely assume that such a bias, as it stands, changes gender ratios for the society, but its only real effect is that correspondingly larger and rarer families have lots of girls. Such societies typically do have weird gender ratios, but this is mostly due to higher death rates before birth because of selective abortion, or after birth because some parents in such societies feed girls less, teach them less, work them more, and take them to the doctor less.

Suppose the rules for deciding to have a child without selective abortion (and so with basically 50/50 odds of either gender) and no unfairness post-birth were: If you have a boy, stop; if you have no boy but have fewer than N children, have another. In a scenario where N > 2, two-child families are either a girl and a boy, or two girls during a period when their parents still intend to have a third. Because that window is small relative to the length of time that families exist to be sampled, most two-child families (>90%?) would be gender balanced.

Generally, my impression is that parental preferences for one or the other sex (or for gender balance) are out of bounds in these kinds of questions because we're supposed to assume platonically perfect family generating processes with exact 50/50 odds, and no parental biases, and so on. My impression is that cultural literacy is supposed to supply the platonic model. If non-platonic assumptions are operating then different answers are expected as different people bring in different evidence (like probabilities of lying and so forth). If real world factors sneak in later with platonic assumptions allowed to stand then it's a case of a bad teacher who expects you to guess the password of precisely which evidence they want to be imported, and which excluded. This issue of signaling which evidence to import is kind of subtle, and

We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective.

I think you said it better earlier when you talked about whether the reduction in incidence outweighs the pain caused by the tactic. For some conditions, if it weren't for the stigma there would be little to nothing unpleasant about them (and we wouldn't need to talk about reducing incidence).

I agree with your general principle, ... (read more)

Sorry I was slow to respond... busy with other things.

My answers:

Q1: I agree with you: 1/3, 1/3, 2/3

Q2. ISB is similar to SSB as follows: fair coin; woken up twice if tails, once if heads; epistemic state reset each day

Q3. ISB is different from SSB as follows: more than one coin toss; same number of interviews regardless of result of coin toss

Q4. It makes a big difference. She has different information to condition on. On a given coin flip, the probability of heads is 1/2. But, if it is tails we skip a day before flipping again. Once she has been wok... (read more)

0kmccarty
Perhaps this is beating a dead horse, but here goes. Regarding your two variants:

I agree. When iterated indefinitely, the Markov chain transition matrix is:

[ 0    1    0    0  ]
[ 1/2  0    1/2  0  ]
[ 0    0    0    1  ]
[ 1/2  0    1/2  0  ]

acting on state vector [ H1 H2 T1 T2 ], where H,T are coin toss outcomes and 1,2 label Monday,Tuesday. This has probability eigenvector [ 1/4 1/4 1/4 1/4 ]; 3 out of 4 states show Tails (as opposed to the coin having been tossed Tails). By the way, we have unbiased sampling of the coin toss outcomes here. If the Markov chain model isn't persuasive, the alternative calculation is to look at the branching probability diagram [http://entity.users.sonic.net/img/lesswrong/sbv1tree.png (SB variant 1)] and compute the expected frequencies of letters in the result strings at each leaf on Wednesdays. This is 0.5 * ( H + T ) + 0.5 * ( T + T ) = 0.5 * H + 1.5 * T.

I agree. Monday-Tuesday sequences occur with the following probabilities: HH: 1/4, HT: 1/4, TT: 1/2. Also, the Markov chain model for the iterated process agrees:

[ 0    1/2  0    1/2 ]
[ 1/2  0    1/2  0   ]
[ 0    0    0    1   ]
[ 1/2  0    1/2  0   ]

acting on state vector [ H1 H2 T1 T2 ] gives probability eigenvector [ 1/4 1/8 1/4 3/8 ]. Alternatively, use the branching probability diagram [http://entity.users.sonic.net/img/lesswrong/sbv2tree.png (SB variant 2)] to compute expected frequencies of letters in the result strings, 0.25 * ( H + H ) + 0.25 * ( H + T ) + 0.5 * ( T + T ) = 0.75 * H + 1.25 * T. Because of the extra coin toss on Tuesday after Monday Heads, these are biased observations of coin tosses. (Are these credences?)

But neither of these two variants is equivalent to Standard Sleeping Beauty or its iterated variants ISB and ICSB. (Sigh). I don't think your branching probability diagram is correct. I don't know what other reasoning you are using. This is the diagram I have for Standard Sleeping Beauty [http://entity.users.sonic.net/img/lesswrong/ssbtree.png (Standard SB)] A
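(A quick numerical check of the two stationary vectors quoted above; a minimal sketch, assuming numpy is available.)

```python
import numpy as np

# Transition matrices as given above, states ordered [H1, H2, T1, T2].
P1 = np.array([[0,   1,   0,   0  ],
               [0.5, 0,   0.5, 0  ],
               [0,   0,   0,   1  ],
               [0.5, 0,   0.5, 0  ]])
P2 = np.array([[0,   0.5, 0,   0.5],
               [0.5, 0,   0.5, 0  ],
               [0,   0,   0,   1  ],
               [0.5, 0,   0.5, 0  ]])

v1 = np.array([0.25, 0.25, 0.25, 0.25])
v2 = np.array([0.25, 0.125, 0.25, 0.375])

print(v1 @ P1)   # unchanged -> [0.25 0.25 0.25 0.25], so v1 is stationary
print(v2 @ P2)   # unchanged -> [0.25 0.125 0.25 0.375], so v2 is stationary
```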
0kmccarty
Thanks for your response. I should have been clearer in my terminology. By "Iterated Sleeping Beauty" (ISB) I meant to name the variant that we here have been discussing for some time, that repeats the Standard Sleeping Beauty problem some number say 1000 of times. In 1000 coin tosses over 1000 weeks, the number of Heads awakenings is 1000 and the number of Tails awakenings is 2000. I have no catchy name for the variant I proposed, but I can make up an ugly one if nothing better comes to mind; it could be called Iterated Condensed Sleeping Beauty (ICSB). But I'll assume you meant this particular variant of mine when you mention ISB. You say "More than one coin toss" is the iterated part. As far as I can see, and I've argued it a couple times now, there's no essential difference between SSB and ISB, so I meant to draw a comparison between my variant and ISB. "Same number of interviews regardless of result of coin toss" isn't correct. Sorry if I was unclear in my description. Beauty is interviewed once per toss when Heads, twice when Tails. This is the same in ICSB as in Standard and Iterated Sleeping Beauty. Is there an important difference between Standard Sleeping Beauty and Iterated Sleeping Beauty, or is there an important difference between Iterated Sleeping Beauty and Iterated Condensed Sleeping Beauty? We not only skip a day before tossing again, we interview on that day too! I see how over time Beauty gains evidence corroborating the fairness of the coin (that's exactly my later rhetorical question), but assuming it's a fair coin, and barring Type I errors, she'll never see evidence to change her initial credence in that proposition. In view of this, can you explain how she can use this information to predict with better than initial accuracy the likelihood that Heads was the most recent outcome of the toss? I don't see how. After relabeling Monday and Tuesday to Day 1 and Day 2 following the coin toss, Tuesday&Heads (H2) exists in none of these variants

My NT 'data' are from conversations I've had over the years with people who I have noticed are particularly good socially. But of course, there is plenty of between person variability even within NT and AS groups.

The thing that I have been most surprised by is how much NTs like symbols and gestures.

Here are some examples:

  • Suppose you think your significant other should have a cake on his/her birthday. You are not good at baking. Aspie logic: "It's better to buy a cake from a bakery than to make it myself, since the better the cake tastes the happier they'll be." Of course, the correct answer is that the effort you put into it is what matters (to an NT).

  • Suppose you are walking through a doorway and you are aware that there is someone about 20 fee

... (read more)
1A1987dM
There's also a difference between Ask and Guess cultures in this kind of thing.
9Nanani
It's worth pointing out that all three examples are highly culturally variable. The "aspie logic" example behaviour is far more common where I live (urban Japan). In the first, most people lack the facilities to bake, especially young adults in small apartments or dorms. Buying a cake is the obvious thing to do. That or taking the SO to a cake-serving cafe. In the second, -no one- here holds doors for strangers. I had to train myself out of the habit because it was getting me very strange looks. Similarly, no one says "bless you" or equivalent when strangers sneeze. The rules of courtesy are different. In the third, it's normal here to expect repeated invitations for any occasion. One invitation will be for show, so you invite people you don't expect to make it as well. The key is that people won't actually make plans to attend until two or more invitations have been received. (This is locally variable; some regions and demographics expect three or four invites. Think of it as a pre-event version of the British quirk where one says "We must do this again sometime" while having no actual desire to repeat the encounter.) The bottom line is that the other person's expectations ought to be factored into the logic. Beware generalizing from a sample of one and all that.

In each of these 3 examples the person with AS is actually being considerate

I agreed with all of your comment but this: the person with AS is not "being considerate", when "being considerate" is defined to include modeling the likely preferences of the person you are supposedly "considering."

In each case, the "consideration" is considering themselves, in the other person's shoes, falling prey to availability bias.

Personally, I am very torn on the doorway example -- I usually make an effort to hold the door, but am... (read more)

3pwno
Your time and effort can be used to give status. By sending a reliable signal that you've wasted time and effort for a friend, you're giving your friend good evidence they have some power over you - a feeling much sweeter than a store-bought cake.
0[anonymous]
I have the feeling you are talking about quite untypical NT people here (except maybe for example 3). Around me you would have defined "NT people" (even the term sounds strange to me) as being Aspies. That doesn't add up.

Yes, I've read that paper, and disagree with much of it. Perhaps I'll take the time to explain my reasoning sometime soon

Anthropic reasoning is what leads people to believe in miracles. Rare events have a high probability of occurring if the number of observations is large enough. But whoever that rare event happens to will feel like it couldn't have just happened by chance, because the odds of it happening to them were so small.

If you wait until the event occurs, and then start treating it as a random event from a single trial, forming your hypothesis after seeing the data, you'll make inferential errors.

Imagine that there are balls in an urn, labeled with numbers 1, 2,.... (read more)
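(The arithmetic behind "rare events have a high probability of occurring if the number of observations is large enough", as a quick sketch with a hypothetical one-in-a-million event.)

```python
p = 1e-6   # hypothetical "one in a million" chance per person
for n in (1, 1_000_000, 10_000_000):
    p_at_least_one = 1 - (1 - p) ** n
    print(n, round(p_at_least_one, 3))
# 1 -> 0.0,  1000000 -> ~0.632,  10000000 -> ~1.0
```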

4Cyan
What you have labeled anthropic reasoning is actually straight-up Bayesian reasoning. Wikipedia has an article on the problem, but only discusses the Bayesian approach briefly and with no depth. Jaynes also talks about it early in PT:LOS. In any event, to see the logic of the math, just write down the likelihood function and any reasonable prior.
0CarlShulman
I suggest reading Radford Neal.

This is interesting. We shouldn't get a discontinuous jump.

Consider 2 related situations:

  1. If heads, she is woken up on Monday, and the experiment ends on Tuesday. If tails, she is woken up on Monday and Tuesday, and the experiment ends on Wednesday. In this case, there is no 'not awake' option.

  2. If heads she is woken up on Monday and Tuesday. On Monday she is asked her credence for heads. On Tuesday she is told "it's Tuesday and heads" (but she is not asked about her credence; that is, she is not interviewed). If tails, it's the usual woken up b

... (read more)
0Morendil
My reasoning has been to consider scenario 1 from the perspective of an outside observer, who is uncertain about each variable: a) whether it is Monday or Tuesday, b) how the coin came up, c) what happened to Beauty on that day. To that observer, "Tuesday and heads" is definitely a possibility, and it doesn't really matter how we label the third variable: "woken", "interviewed", whatever. If the experiment has ended, then that's a day where she hasn't been interviewed. If the outside observer learns that Beauty hasn't been interviewed today, then they may conclude that it's Tuesday and that the coin came up heads, thus a) they have something to update on and b) that observer must assign probability mass to "Tuesday & Heads & not interviewed". If the outside observer learns that Beauty has been interviewed, it seems to me that they would infer that it's more likely, given their prior state of knowledge, that the coin came up heads. To the outside observer, scenario 2 isn't really distinct from scenario 1. The difference only makes a difference to Beauty herself. However, I see no reason to treat Beauty herself differently than an outside observer, including the possibility of updating on being interviewed or on not being interviewed. So, if my probability tables are correct for an outside observer, I'm pretty sure they're correct for Beauty. (My confidence in the table themselves, however, has been eroded a little by my not being able to calculate Beauty - or an observer - updating on a new piece of information in the "fuzzy" variant, e.g. using P(heads|woken) as a prior probability and updating on learning that it is in fact Tuesday. It seems to me that for the math to check out requires that this operation should recover the "absent-minded experimenter" probability for "tuesday & heads & woken". But I'm having a busy week so far and haven't had much time to think about it.)

At this point, it is just an assertion that it's not a probability. I have reasons for believing it's not one; at least, not the probability that people think it is. I've explained some of that reasoning.

I think it's reasonable to look at a large sample ratio of counts (or ratio of expected counts). The best way to do that, in my opinion, is with independent replications of awakenings (that reflect all possibilities at an awakening). I probably haven't worded this well, but consider the following two approaches. For simplicity, let's say we wanted to do... (read more)
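(A quick tally of the two ratios at issue under the standard setup, as a minimal simulation sketch.)

```python
import random
random.seed(0)

experiments = 100_000
heads_awakenings = tails_awakenings = 0
for _ in range(experiments):
    if random.random() < 0.5:      # heads: one awakening
        heads_awakenings += 1
    else:                          # tails: two awakenings
        tails_awakenings += 2

total_awakenings = heads_awakenings + tails_awakenings
print(heads_awakenings / experiments)         # ~0.50 : heads per coin toss
print(heads_awakenings / total_awakenings)    # ~0.33 : heads per awakening
# Which of these ratios deserves to be called Beauty's credence on waking
# is exactly the point in dispute.
```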

0kmccarty
Yet one more variant. On my view it's structurally and hence statistically equivalent to Iterated Sleeping Beauty, and I present an argument that it is. This one has the advantage that it does not rely on any science-fictional technology. I'm interested to see if anyone can find good reasons why it's not equivalent.

The Iterated Sleeping Beauty problem (ISB) is the original Standard Sleeping Beauty (SSB) problem repeated a large number N of times. People always seem to want to do this anyway with all the variations, to use the Law of Large Numbers to gain insight into what they should do in the single-shot case.

The Setup

  • As before, Sleeping Beauty is fully apprised of all the details ahead of time.
  • The experiment is run for N consecutive days (N is a large number).
  • At midnight 24 hours prior to the start of the experiment, a fair coin is tossed.
  • On every subsequent night, if the coin shows Heads, it is tossed again; if it shows Tails, it is turned over to show Heads.

(This process is illustrated by a discrete-time Markov chain with transition matrix

P = [ 1/2  1/2 ]
    [  1    0  ]

and the state vector is the row x = [ Heads Tails ], with consecutive state transitions computed as x * P^k.)

Each morning when Sleeping Beauty awakes, she is asked each of the following questions:

  1. "What is your credence that the most recent coin toss landed Heads?"
  2. "What is your credence that the coin was tossed last night?"
  3. "What is your credence that the coin is showing Heads now?"

The first question is the equivalent of the question that is asked in the Standard Sleeping Beauty problem. The second question corresponds to the question "what is your credence that today is Monday?" (which should also be asked and analyzed in any treatment of the Standard Sleeping Beauty problem). Note: in this setup, 3) is different than 1) only because of the operation of turning the coin over instead of tossing it. This is just a perhaps too clever mechanism to count down th
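(A quick check on the long-run behavior of that coin process, assuming numpy: the chain settles to showing Heads about 2/3 of mornings and Tails about 1/3.)

```python
import numpy as np

P = np.array([[0.5, 0.5],    # Heads: toss again
              [1.0, 0.0]])   # Tails: turn the coin over to Heads
x = np.array([0.5, 0.5])     # distribution after the initial toss

for _ in range(50):          # iterate x * P^k
    x = x @ P
print(x)                     # -> approximately [0.667, 0.333]
```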
0kmccarty
Two ways to iterate the experiment: and This seems a distinction without a difference. The longer the iterated SB process continues, the less important is the distinction between counting tosses versus counting awakenings. This distinction is only about a stopping criterion, not about the convergent behavior of observations or coin tosses to expected values as it's ongoing. Considered as an ongoing process of indefinite duration, the expected number of tosses and of observations of each type are well-defined, easily computed, and well-behaved with respect to each other. Over the long run, #awakenings accumulates 1.5 times more frequently than #tosses. Beauty is never more than two awakenings away from starting a new coin toss, so whether you choose to stop as soon as an awakening has completed or until you finish a coin-toss cycle, the relative perturbation in the statistics collected so far goes to zero. Briefly, there is no "natural" unit of replication independent of observer interest. This would be an error. You are assigning a 50% probability to an observation (that it is Heads&Monday) without taking into account the bias that's built in to the process for Beauty to make observations. Alternatively, if you are uncertain whether Monday is true or not--you know it might be Tuesday--then you should be uncertain that P(Heads)=P(Heads&Monday). You the outside observer know the chance of observing that the coin lands Heads is 50%. You presumably know this because you have corroborated it through an unbiased observation process: look at the coin exactly once per toss. Once Beauty is put to sleep and awoken, she is no longer an outside observer, she is a particpant in a biased observation process, so she should update her expectation about what her observation process will show. Different observation process, different observations, different likelhoods of what she can expect to see. Of course, as a card-carrying thirder, I'm assuming that the question about crede
0Morendil
Consider the case of Sleeping Beauty with an absent-minded experimenter. If the coin comes up Heads, there is a tiny but non-zero chance that the experimenter mixes up Monday and Tuesday. If the coin comes up Tails, there is a tiny but non-zero chance that the experimenter mixes up Tails and Heads. The resulting scenario is represented in a new sheet, Fuzzy two-day, of my spreadsheet document. Under these assumptions, Beauty may no longer rule out Tuesday & Heads. She has no justification to assign all of the Heads probability mass to Monday & Heads. She is therefore constrained to conditioning on being woken in the way that the usual two-day variant suggests she should, and ends up with a credence arbitrarily close to 1/3 if we make the "absent-minded" probability tiny enough. Why should we get a discontinuous jump to 1/2 as this becomes zero?

The probability represents how she should see things when she wakes up.

She knows she's awake. She knows heads had probability 0.5. She knows that, if it landed heads, it's Monday with probability 1. She knows that, if it landed tails, it's either Monday or Tuesday. Since there is no way for her to distinguish between the two, she views them as equally likely. Thus, if tails, it's Monday with probability 0.5 and Tuesday with probability 0.5.
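(The same arithmetic spelled out as a quick sketch.)

```python
p_heads = 0.5
p_monday_given_heads = 1.0
p_monday_given_tails = 0.5

table = {
    ("heads", "Monday"):  p_heads * p_monday_given_heads,               # 0.50
    ("tails", "Monday"):  (1 - p_heads) * p_monday_given_tails,         # 0.25
    ("tails", "Tuesday"): (1 - p_heads) * (1 - p_monday_given_tails),   # 0.25
}
print(table)
print(sum(table.values()))   # 1.0
```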

0Jonathan_Graehl
Okay, I now understand what you mean by that tree.