
Comment author: gwern 17 July 2014 09:16:18PM *  1 point [-]

Zuehlke, T. (2003). "Estimation of a Tobit model with unknown censoring threshold". Applied Economics 35, 1163–9. (This is for a little analysis: https://plus.google.com/103530621949492999968/posts/TG98DXkHrrs )

Comment author: VincentYu 19 July 2014 04:28:30AM 1 point [-]
Comment author: gwern 05 July 2014 01:21:35AM *  3 points [-]

I would love a name for this too, since the observation explains why 'small' differences in means between normally distributed populations can have large consequences, and this comes up in many contexts (not just IQ or athletics).

Also good would be a quick name for log-normal-like phenomena.

The normal distribution can be seen as the sum of many independent random variables; so, for example, IQ is normally distributed because the genetic contribution is a sum of many small additive variables. The log-normal arises when a quantity is the product of many independent variables; it fits any process where each step is necessary, as has been proposed for scientific productivity with its chain of steps like ideas -> research -> publication.
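
A minimal simulation of the sum-vs-product contrast may help; the factor distribution and sample sizes below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 draws, each built from 50 i.i.d. positive factors.
n_draws, n_factors = 10_000, 50
factors = rng.uniform(0.5, 1.5, size=(n_draws, n_factors))

sums = factors.sum(axis=1)       # sum of many independent terms -> ~normal (CLT)
products = factors.prod(axis=1)  # product of many terms -> ~log-normal

# The log of a product is a sum of logs, so log(products) is again a sum
# of independent terms and should itself look normal:
print(sums.mean(), sums.std())
print(np.log(products).mean(), np.log(products).std())
# A normality test (e.g. scipy.stats.normaltest) should look fine for
# `sums` and for `np.log(products)`, but fail badly for `products` itself.
```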

The normal distribution has the unintuitive behavior that small changes in the mean or variance have large consequences out on the thin tails. But the log-normal distribution has the unintuitive behavior that small improvements in each of the independent variables will yield large changes in their product, and that the extreme datapoints will be far beyond the median or average datapoints. ('Compound interest' comes close but doesn't seem to catch it because it refers to increase over time.)
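
To put rough numbers on both behaviors (the 0.5 SD shift, the +3 SD cutoff, and the ten 10% improvements below are arbitrary illustrations):

```python
from scipy.stats import norm

# Thin normal tails: shift the mean by 0.5 SD and look past +3 SD.
p_base = norm.sf(3)              # P(X > 3) with mean 0:   ~0.00135
p_shifted = norm.sf(3, loc=0.5)  # same cutoff, mean 0.5:  ~0.00621
print(p_shifted / p_base)        # ~4.6x as many observations in the tail

# Multiplicative processes: a 10% improvement at each of 10 necessary
# steps improves the end product by ~2.6x, far more than any single step.
print(1.1 ** 10)  # ~2.59
```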

Comment author: VincentYu 07 July 2014 01:47:51AM 2 points [-]

IQ is normally distributed because the genetic contribution is a sum of many small additive variables.

IQ is normally distributed because the distribution of raw test scores is standardized to a normal distribution.
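
A sketch of what such standardization can look like, using the simplest rank-based inverse-normal transform rescaled to the IQ convention of mean 100 and SD 15 (real test norming is more elaborate than this):

```python
import numpy as np
from scipy.stats import norm, rankdata

def to_iq_scale(raw_scores, mean=100.0, sd=15.0):
    """Force raw test scores onto a normal distribution by rank,
    then rescale to IQ units."""
    n = len(raw_scores)
    ranks = rankdata(raw_scores)            # 1..n, ties averaged
    quantiles = (ranks - 0.5) / n           # push ranks into (0, 1)
    return mean + sd * norm.ppf(quantiles)  # normal by construction

raw = np.random.default_rng(1).exponential(size=1000)  # deliberately skewed
iq = to_iq_scale(raw)
print(iq.mean(), iq.std())  # ~100, ~15, whatever the raw distribution's shape
```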

Comment author: JoshuaFox 09 June 2014 08:27:15AM 0 points [-]

Fox, J. (2014). Intelligence and rationality. The Psychologist, 27(3), 143. (British Psychological Society.)

This popped up on my Google Scholar. Unless I wrote it in my sleep, that's not me, but I am curious.

Comment author: VincentYu 11 June 2014 01:43:42AM 2 points [-]

Here.

The article to which this letter responds is Stanovich and West (2014).

Comment author: VincentYu 10 June 2014 04:56:43AM 1 point [-]

Requested.

Comment author: Pablo_Stafforini 26 May 2014 11:31:14AM 0 points [-]

Seymour Drescher, Capitalism and Antislavery: British Mobilization in Comparative Perspective, pp. 67-75.

Comment author: VincentYu 31 May 2014 01:33:48AM 3 points [-]
Comment author: VincentYu 30 May 2014 02:05:34PM 2 points [-]

Requested.

Comment author: gwern 29 April 2014 03:16:20AM 0 points [-]

For a Feynman mystery:

Comment author: VincentYu 02 May 2014 02:13:35AM *  3 points [-]
Comment author: Emile 01 May 2014 09:22:14PM *  1 point [-]

Dude, I'm genuinely curious about what "holy wars" he's talking about. So far I've got:

  • a definition of "holy war" in this context
  • a snotty "shut up, only statisticians are allowed to talk about this topic"

... but zero actual answers, so I can't even tell if he's talking about some stupid overblown bullshit, or if he's just exaggerating what is actually a pretty low-key difference in opinion.

Comment author: VincentYu 02 May 2014 01:37:19AM *  6 points [-]

A "holy war" between Bayesians and frequentists exists in the modern academic literature for statistics, machine learning, econometrics, and philosophy (this is a non-exhaustive list).

Bradley Efron, who is arguably the most accomplished statistician alive, wrote the following in a commentary for Science in 2013 [1]:

The term "controversial theorem" sounds like an oxymoron, but Bayes' theorem has played this part for two-and-a-half centuries. Twice it has soared to scientific celebrity, twice it has crashed, and it is currently enjoying another boom. The theorem itself is a landmark of logical reasoning and the first serious triumph of statistical inference, yet is still treated with suspicion by most statisticians. There are reasons to believe in the staying power of its current popularity, but also some signs of trouble ahead.

[...]

Bayes' 1763 paper was an impeccable exercise in probability theory. The trouble and the subsequent busts came from overenthusiastic application of the theorem in the absence of genuine prior information, with Pierre-Simon Laplace as a prime violator. Suppose that in the twins example we lacked the prior knowledge that one-third of twins are identical. Laplace would have assumed a uniform distribution between zero and one for the unknown prior probability of identical twins, yielding 2/3 rather than 1/2 as the answer to the physicists' question. In modern parlance, Laplace would be trying to assign an "uninformative prior" or "objective prior", one having only neutral effects on the output of Bayes' rule. Whether or not this can be done legitimately has fueled the 250-year controversy.

Frequentism, the dominant statistical paradigm over the past hundred years, rejects the use of uninformative priors, and in fact does away with prior distributions entirely. In place of past experience, frequentism considers future behavior. An optimal estimator is one that performs best in hypothetical repetitions of the current experiment. The resulting gain in scientific objectivity has carried the day, though at a price in the coherent integration of evidence from different sources, as in the FiveThirtyEight example.

The Bayesian-frequentist argument, unlike most philosophical disputes, has immediate practical consequences.
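
For readers without access to the article: Efron's twins example, set up earlier in the piece and not quoted above, concerns a sonogram showing twin boys, with the physicist asking for the probability that they are identical. The standard likelihoods are that identical twins are both boys with probability 1/2 and fraternal twins with probability 1/4. A sketch of both calculations under those assumptions:

```python
from fractions import Fraction
from sympy import symbols, integrate, Rational

like_identical = Fraction(1, 2)  # P(twin boys | identical)
like_fraternal = Fraction(1, 4)  # P(twin boys | fraternal)

# With the genuine prior P(identical) = 1/3:
prior = Fraction(1, 3)
posterior = prior * like_identical / (
    prior * like_identical + (1 - prior) * like_fraternal)
print(posterior)  # 1/2 -- the doctor's answer

# Laplace-style uniform prior on the unknown P(identical) = theta:
theta = symbols('theta')
num = integrate(theta * Rational(1, 2), (theta, 0, 1))
den = integrate(theta * Rational(1, 2) + (1 - theta) * Rational(1, 4),
                (theta, 0, 1))
print(num / den)  # 2/3 -- the answer Efron attributes to Laplace
```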

In another paper published in 2013, Efron wrote [2]:

The two-party system [Bayesian and frequentist] can be upsetting to statistical consumers, but it has been a good thing for statistical researchers — doubling employment, and spurring innovation within and between the parties. These days there is less distance between Bayesians and frequentists, especially with the rise of objective Bayesianism, and we may even be heading toward a coalition government.

The two philosophies, Bayesian and frequentist, are more orthogonal than antithetical. And of course, practicing statisticians are free to use whichever methods seem better for the problem at hand — which is just what I do.

Thirty years ago, Efron was more critical of Bayesian statistics [3]:

A summary of the major reasons why Fisherian and NPW [Neyman–Pearson–Wald] ideas have shouldered Bayesian theory aside in statistical practice is as follows:

  1. Ease of use: Fisher’s theory in particular is well set up to yield answers on an easy and almost automatic basis.
  2. Model building: Both Fisherian and NPW theory pay more attention to the preinferential aspects of statistics.
  3. Division of labor: The NPW school in particular allows interesting parts of a complicated problem to be broken off and solved separately. These partial solutions often make use of aspects of the situation, for example, the sampling plan, which do not seem to help the Bayesian.
  4. Objectivity: The high ground of scientific objectivity has been seized by the frequentists.

None of these points is insurmountable, and in fact, there have been some Bayesian efforts on all four. In my opinion a lot more such effort will be needed to fulfill Lindley’s prediction of a Bayesian 21st century.

The following bit of friendly banter in 1965 between M. S. Bartlett and John W. Pratt shows that the holy war was ongoing 50 years ago [4]:

Bartlett: I am not being altogether facetious in suggesting that, while non-Bayesians should make it clear in their writings whether they are non-Bayesian Orthodox or non-Bayesian Fisherian, Bayesians should also take care to distinguish their various denominations of Bayesian Epistemologists, Bayesian Orthodox and Bayesian Savages. (In fairness to Dr Good, I could alternatively have referred to Bayesian Goods; but, oddly enough, this did not sound so good.)

Pratt: Professor Bartlett is correct in classifying me a Bayesian Savage, though I might take exception to his word order. On the whole, I would rather be called a Savage Bayesian than a Bayesian Savage. Of course I can quite see that Professor Bartlett might not want to admit the possibility of a Good Bayesian.

For further reading I recommend [5], [6], [7].

[1]: Efron, Bradley. 2013. “Bayes’ Theorem in the 21st Century.” Science 340 (6137) (June 7): 1177–1178. doi:10.1126/science.1236536.

[2]: Efron, Bradley. 2013. “A 250-Year Argument: Belief, Behavior, and the Bootstrap.” Bulletin of the American Mathematical Society 50 (1) (April 25): 129–146. doi:10.1090/S0273-0979-2012-01374-5.

[3]: Efron, B. 1986. “Why Isn’t Everyone a Bayesian?” American Statistician 40 (1) (February): 1–11. doi:10.1080/00031305.1986.10475342.

[4]: Pratt, John W. 1965. “Bayesian Interpretation of Standard Inference Statements.” Journal of the Royal Statistical Society: Series B (Methodological) 27 (2): 169–203. http://www.jstor.org/stable/2984190.

[5]: Senn, Stephen. 2011. “You May Believe You Are a Bayesian but You Are Probably Wrong.” Rationality, Markets and Morals 2: 48–66. http://www.rmm-journal.com/htdocs/volume2.html.

[6]: Gelman, Andrew. 2011. “Induction and Deduction in Bayesian Data Analysis.” Rationality, Markets and Morals 2: 67–78. http://www.rmm-journal.com/htdocs/volume2.html.

[7]: Gelman, Andrew, and Christian P. Robert. 2012. “‘Not Only Defended but Also Applied’: The Perceived Absurdity of Bayesian Inference.” arXiv (June 28).

Comment author: gwern 28 April 2014 03:03:36AM *  0 points [-]
Comment author: VincentYu 29 April 2014 03:13:06AM 2 points [-]
Comment author: D_Malik 08 April 2014 07:05:35PM *  15 points [-]

Should we listen to music? This seems like a high-value thing to think about.* Some considerations:

  • Music masks distractions. But we can get the same effect through alternatives such as white noise, calming environmental noise, or ambient social noise.

  • Music creates distractions. It causes interruptions. It forces us to switch our attention between tasks. For instance, listening to music while driving increases the risk of accidents.

  • We seem to enjoy listening to music. Anecdotally, when I've gone on "music fasts", music starts to sound much better and I develop cravings for it. This suggests a treadmill system, in which listening to music does not produce lasting improvements in mood. (That is, if enjoyment stems from relative change in the quality/quantity of music rather than from absolute quality/quantity, then we likely cannot obtain a lasting benefit.)

  • Frequency of music-listening correlates (.18) with conscientiousness. I'd guess the causation's in the wrong direction, though.

  • Listening to random music (e.g. a multi-genre playlist on shuffle) will randomize emotion and mindstate. Entropic influences on sorta-optimized things (e.g. mindstate) are usually harmful. And the music-listening people do nowadays is very unlike EEA conditions, which is usually bad.

(These are the product of 30 minutes of googling; I'm asking you, not telling you.)

Here are some ways we could change our music-listening patterns:

  • Music modifies emotion. We could use this to induce specific useful emotions. For instance, for productivity, one could listen to a long epic music mix.

  • Stop listening to music entirely, and switch to various varieties of ambient noise. Moderate ambient noise seems to be best for thinking.

  • Use music only as reinforcement for desired activities. I wrote a plugin to implement this for Anki (a sketch of the idea appears after this list). Additionally, music benefits exercise, so we might listen to music only at the gym. The treadmill-like nature of music enjoyment (see above) may be helpful here, as it would serve to regulate e.g. exercise frequency: infrequent exercise would create music cravings, which would in turn increase exercise frequency, and vice versa.

  • Listen only to educational music. Unfortunately, not much educational music for adults exists. We could get around this by overlaying regular music with text-to-speech renditions of educational material, or with audiobooks.
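
The Anki plugin mentioned in the reinforcement bullet isn't shown; a minimal sketch of the idea as a modern Anki 2.1 add-on might look like the following (the hook and player come from Anki's `aqt` API; the reward logic and file path are hypothetical):

```python
# Hypothetical add-on: play a short music clip after each successfully
# answered card, so that music consumption is contingent on reviewing.
from aqt import gui_hooks
from aqt.sound import av_player

REWARD_CLIP = "/path/to/reward-clip.mp3"  # placeholder path

def reward_on_success(reviewer, card, ease):
    # ease == 1 means "Again" (a lapse); higher values count as success.
    if ease > 1:
        av_player.play_file(REWARD_CLIP)

gui_hooks.reviewer_did_answer_card.append(reward_on_success)
```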

* I've been doing quantitative attention-allocation optimization lately, and "figure out whether to stop listening to music again" has one of the highest expected-utilons-per-time of all the interventions I've considered but not yet implemented.

Comment author: VincentYu 08 April 2014 10:22:07PM 10 points [-]

I went through the literature on background music in September 2012; here is a dump of 38 paper references. Abstracts can be found by searching here and I can provide full texts on request.

Six papers that I starred in my reference manager (with links to full texts):

One-word summary of the academic literature on the effects of listening to background music (as of September 2012): unclear
