Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Cyan 27 July 2014 08:04:55PM 4 points [-]

Just as markets are anti-inductive, it turns out that markets reverse the "tails come apart" phenomenon found elsewhere. When times are "ordinary", performance in different sectors is largely uncorrelated, but when things go to shit, they go to shit all together, a phenomenon termed "tail dependence".
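A toy simulation can illustrate tail dependence. The crash probability and magnitude below are made-up numbers, not estimates from market data: two "sector" return series are independent on ordinary days but share a rare crash factor, so their correlation jumps when one of them lands in its lower tail.

```python
import random

random.seed(1)

def sector_returns(n, crash_prob=0.02, crash_size=-5.0):
    """Two 'sector' returns: independent day-to-day noise plus a rare
    shared crash that drags both down at once (hypothetical numbers)."""
    a, b = [], []
    for _ in range(n):
        crash = crash_size if random.random() < crash_prob else 0.0
        a.append(random.gauss(0, 1) + crash)
        b.append(random.gauss(0, 1) + crash)
    return a, b

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

a, b = sector_returns(100_000)
pairs = list(zip(a, b))
ordinary_corr = corr([p for p in pairs if p[0] > -2])  # "ordinary" days
tail_corr = corr([p for p in pairs if p[0] <= -2])     # sector A in its lower tail
print(f"ordinary: {ordinary_corr:.2f}, tail: {tail_corr:.2f}")
```

Conditioning on sector A having a bad day makes the shared-crash days dominate the subsample, so the measured correlation is much higher in the tail than overall, which is the "go to shit all together" effect.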

In response to comment by Cyan on Too good to be true
Comment author: V_V 20 July 2014 03:36:19PM 1 point [-]


So if I set the size at 5%, collect the data, run the test, and repeat the whole experiment with fresh data multiple times, should I expect that, if the null hypothesis is true, the test rejects exactly 5% of the time, or at most 5% of the time?

In response to comment by V_V on Too good to be true
Comment author: Cyan 20 July 2014 04:10:27PM 2 points [-]

If the null hypothesis is simple (that is, if it picks out a single point in the hypothesis space), and the model assumptions are true blah blah blah, then the test (falsely) rejects the null with exactly 5% probability. If the null is composite (comprises a non-singleton subset of parameter space), and there is no nice reduction to a simple null via mathematical tricks like sufficiency or the availability of a pivot, then the test falsely rejects the null with at most 5% probability.

But that's all very technical; somewhat less technically, almost always, a bootstrap procedure is available that obviates these questions and gets you to "exactly 5%"... asymptotically. Here "asymptotically" means "if the sample size is big enough". This just throws the question onto "how big is big enough," and that's context-dependent. And all of this is about one million times less important than the question of how well each study addresses systematic biases, which is an issue of real, actual study design and implementation rather than mathematical statistical theory.
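For the simple-null case, a quick Monte Carlo sketch shows the "exactly 5%" behavior. The setup is hypothetical: a two-sided z-test of mean zero on unit-variance Gaussian data, with 1.96 hard-coded as the 5% critical value.

```python
import math
import random

def z_test_rejects(sample, critical=1.96):
    """Two-sided size-0.05 z-test of H0: mean = 0, known unit variance."""
    z = sum(sample) / math.sqrt(len(sample))
    return abs(z) > critical

random.seed(0)
n_trials = 20_000
false_rejections = sum(
    z_test_rejects([random.gauss(0, 1) for _ in range(30)])  # H0 is true
    for _ in range(n_trials)
)
rate = false_rejections / n_trials
print(f"False-rejection rate under the simple null: {rate:.3f}")  # near 0.05
```

Because the null here pins down the full sampling distribution, the false-rejection rate converges to the nominal 5% rather than merely staying below it.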

Comment author: Algernoq 16 July 2014 07:06:51AM 2 points [-]

Compared to a martial arts club, LW goals are typically more all-consuming. Martial arts is occasionally also about living well, while LW encourages optimizing all aspects of life.

Comment author: Cyan 16 July 2014 04:38:16PM *  1 point [-]

Sure, that's a distinction, but to the extent that one's goals include making/maintaining social connections with people without regard to their involvement in LW so as to be happy and healthy, it's a distinction that cuts against the idea that "involvement in LW pulls people away from non-LWers".

This falls under "the utility function is not up for grabs". It finds concrete expression in the goal factoring technique developed by CFAR, which is designed to avoid failure modes like cutting out the non-LWers one cares about due to some misguided notion that that's what "rationality" requires.

Comment author: buybuydandavis 14 July 2014 07:19:25AM *  22 points [-]

LW has a cult-like social structure. ...

Where the evidence for this is:

- Appealing to people based on shared interests and values.
- Sharing specialized knowledge and associated jargon.
- Exhibiting a preference for like-minded people.
- More likely to appeal to people actively looking to expand their social circle.

Seems a rather gigantic net to cast for "cults".

Comment author: Cyan 14 July 2014 02:32:07PM 9 points [-]

Well, there's this:

However, involvement in LW pulls people away from non-LWers.

But that is similarly gigantic -- on this front, in my experience LW isn't any worse than, say, joining a martial arts club. The hallmark of cultishness is that membership is contingent on actively cutting off contact with non-cult members.

Comment author: V_V 14 July 2014 10:46:51AM 1 point [-]

According to Wikipedia:

In statistical significance testing, the p-value is the probability of obtaining a test statistic result at least as extreme as the one that was actually observed, assuming that the null hypothesis is true. A researcher will often "reject the null hypothesis" when the p-value turns out to be less than a predetermined significance level, often 0.05 or 0.01.

In response to comment by V_V on Too good to be true
Comment author: Cyan 14 July 2014 02:23:29PM 1 point [-]

You want size, not p-value. The difference is that size is a "pre-data" (or "design") quantity, while the p-value is post-data, i.e., data-dependent.
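A sketch of the distinction in the same hypothetical z-test setting: the size is fixed by the rejection rule before any data arrive, while the p-value is computed from the observed sample (the sample values below are made up).

```python
import math

# Pre-data: the design fixes the rejection rule "reject when |z| > 1.96",
# which has size 0.05 under H0 regardless of what data later arrive.
ALPHA = 0.05

def p_value(sample):
    """Post-data: two-sided z-test p-value (unit-variance assumption)."""
    z = sum(sample) / math.sqrt(len(sample))
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| >= |z|) under H0

sample = [0.3, -0.1, 0.5, 0.2, 0.4, -0.2, 0.6, 0.1]  # made-up data
p = p_value(sample)
print(f"p = {p:.3f}; reject at size {ALPHA}: {p < ALPHA}")
```

Rerunning with a different sample changes `p`, but the size of the procedure stays 0.05 by design.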

In response to comment by dvasya on Too good to be true
Comment author: PhilGoetz 11 July 2014 09:18:28PM *  0 points [-]

No; it's standard to set the threshold for your statistical test for 95% confidence. Studies with larger samples can detect smaller differences between groups with that same statistical power.

Comment author: Cyan 12 July 2014 03:00:57AM 9 points [-]

No; it's standard to set the threshold for your statistical test for 95% confidence. That's its statistical power.

"Power" is a statistical term of art, and its technical meaning is neither 1 - alpha nor 1 - p.

Comment author: army1987 10 July 2014 04:28:16PM 0 points [-]

I'm pointing out that your list isn't complete,

It ends with “etc.” for Pete's sake!

Comment author: Cyan 10 July 2014 04:31:01PM 1 point [-]

...no it doesn't?

Comment author: IlyaShpitser 09 July 2014 06:46:54PM *  6 points [-]

I agree with gwern's decision to separate statistical issues from issues which arise even with infinite samples. Statistical issues are also extremely important and deserve careful study, but we should divide and conquer complicated subjects.

Comment author: Cyan 09 July 2014 07:05:31PM 6 points [-]

I also agree -- I'm recommending that he make that split clearer to the reader by addressing it up front.

Comment author: gwern 09 July 2014 03:21:44PM 4 points [-]

You are fighting the hypothetical. In the least convenient possible world where no dataset is smaller than a petabyte and no one has ever heard of sampling error, would you magically be able to spin the straw of correlation into the gold of causation? No. Why not? That's what I am discussing here.

Comment author: Cyan 09 July 2014 05:45:39PM *  3 points [-]

I suggest you move that point closer to the list of 3 possibilities -- I too read that list and immediately thought, "...and also coincidence."

The quote you posted above ("And we can't explain away...") is an unsupported assertion -- a correct one in my opinion, but it really doesn't do enough to direct attention away from false positive correlations. I suggest that you make it explicit in the OP that you're talking about a hypothetical in which random coincidences are excluded from the start. (Upvoted the OP FWIW.)

(Also, if I understand it correctly, Ramsey theory suggests that coincidences are inevitable even in the absence of sampling error.)

Comment author: Cyan 07 July 2014 01:08:41PM 4 points [-]

What happened to Will Newsome's drunken HPMOR send-up? Did it get downvoted into oblivion?
