
Comment author: Cyan 20 August 2015 09:56:19PM 0 points [-]

Thanks for the sci-hub link. So awesome!

Comment author: FrameBenignly 19 July 2015 06:49:21PM *  0 points [-]

I was struggling to word the doctor paragraph in a manner that was succinct but still got the idea across. I think query worded it better.

On the math curriculum: the fact that advanced classes build off of calculus is a function of the current design. They could recenter courses around statistics and have calculus be an extension of it. Some of the calculus course would need to be reincorporated into the stats courses, but a lot of it wouldn't. You're going to have a hard time convincing me that trigonometry a̶n̶d̶ ̶v̶e̶c̶t̶o̶r̶s̶ are a necessary precursor for regression analysis or Bayes' theorem. The minority of students in physics and engineering who need both calculus and statistics should not dictate how other majors are taught. Fixing the curriculum isn't an easy problem, but they've had more than a century to solve it and there seems to be little movement in this direction.

Comment author: Cyan 26 July 2015 05:38:44PM *  4 points [-]

You're going to have a hard time convincing me that... vectors are a necessary precursor for regression analysis...

So you're fitting a straight line. Parameter estimates don't require linear algebra (that is, vectors and matrices). Super. But the immediate next step in any worthwhile analysis of data is calculating a confidence set (or credible set, if you're a Bayesian) for the parameter estimates; good luck teaching that if your students don't know basic linear algebra. In fact, all of regression analysis, from the most basic least squares estimator through multilevel/hierarchical regression models up to the most advanced sparse "p >> n" method, is built on top of linear algebra.
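
To make that concrete, here is a minimal sketch in Python/NumPy (my own toy example, not anything from the thread): fit a straight line by least squares and then get a rough 95% interval for the slope. The interval is exactly where the linear algebra shows up, via the matrix (X'X)^{-1}.

    import numpy as np

    # toy data: straight line plus noise
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

    X = np.column_stack([np.ones_like(x), x])          # design matrix
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # least squares fit

    n, p = X.shape
    resid = y - X @ beta_hat
    sigma2_hat = resid @ resid / (n - p)                # residual variance
    cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)      # covariance of the estimates
    se = np.sqrt(np.diag(cov_beta))

    # rough 95% normal-approximation interval for the slope
    print(beta_hat[1] - 1.96 * se[1], beta_hat[1] + 1.96 * se[1])

The point estimate alone never needs a matrix inverse; the interval does.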

(Why do I have such strong opinions on the subject? I'm a Bayesian statistician by trade; this is how I make my living.)

In response to comment by [deleted] on Ephemeral correspondence
Comment author: EphemeralNight 28 April 2015 06:47:30PM 0 points [-]

Consciousness is the most recent module, and that does mean that. I'm sorry, I thought this was one point that wasn't even in dispute. It was laid out pretty clearly in the Evolution Sequence:

Complex adaptations take a very long time to evolve. First comes allele A, which is advantageous of itself, and requires a thousand generations to fixate in the gene pool. Only then can another allele B, which depends on A, begin rising to fixation. A fur coat is not a strong advantage unless the environment has a statistically reliable tendency to throw cold weather at you. Well, genes form part of the environment of other genes, and if B depends on A, B will not have a strong advantage unless A is reliably present in the genetic environment.

Evolutions Are Stupid (But Work Anyway)

Comment author: Cyan 28 April 2015 08:11:04PM 3 points [-]

Consciousness is the most recent module, and that does mean [that drawing causal arrows from consciousness to other modules of human mind design is ruled out, evolutionarily speaking.]

The causes of the fixation of a genotype in a population are distinct from the causal structures of the resulting phenotype instantiated in actual organisms.

Comment author: ChristianKl 14 April 2015 02:23:01PM *  4 points [-]

I think the "The Twelve Virtues of Rationality" actually makes an argument that those things are virtues.

Its start is also quite fitting: "The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth."

It argues against the frame of vows.

Withdrawing into mysticism where anything goes is bad. Obfuscating is bad. It's quite easy to say something that produces rationalist applause lights. Critical thinking, and actually thinking through the implications of using the frame of a vow, is harder. Getting less wrong about what we happen to think is rational is hard.

Mystic writing that's too vague to be questioned doesn't really have a place here.

Comment author: Cyan 14 April 2015 02:48:16PM 1 point [-]

Sure, I agree with all of that. I was just trying to get at the root of why "nobody asked [you] to take either vow".

Comment author: ChristianKl 13 April 2015 06:38:48PM -1 points [-]

I believe that's why So8res referred to it as a vow to yourself, not anyone else.

I also haven't heard anybody speak before about taking those kinds of vows to oneself.

This seems like a willful misreading of the essay's point. It seems obvious from context that So8res is referring here to motivated cognition, which does indeed have something wrong with it.

I consider basics to be important. If we allow vague statements about basic principles of rationality to stand we don't improve our understanding of rationality.

Willing isn't what's wrong with motivated cognition. Having desires for reality to be different is not the problem. You don't need to be a straw Vulcan, without any desire or will, to be rational.

Furthermore "Shut up and do the impossible" from the sequences is about "trying to will reality into being a certain way".

Comment author: Cyan 14 April 2015 01:46:54PM 2 points [-]

I also haven't heard anybody speak before about taking those kinds of vows to oneself.

It's not literal. It's an attempt at poetic language, like The Twelve Virtues of Rationality.

Comment author: IlyaShpitser 14 February 2015 06:46:24PM *  4 points [-]

Look at his latest post: "hey wait a second, there is bias by censoring!" The "hard/conceptual part" is structuring the problem in the right way to notice something is wrong, the "bookkeeping" part is e.g. Kaplan-Meier / censoring-adjustment-via-truncation.
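
For readers who haven't seen it, here is a minimal sketch of that Kaplan-Meier bookkeeping in Python (my own illustration with made-up times, not anything from Scott's post): the only censoring adjustment is that censored subjects leave the risk set without counting as events.

    import itertools
    import numpy as np

    def kaplan_meier(times, events):
        """Product-limit survival curve; events: 1 = observed event, 0 = censored."""
        order = np.argsort(times)
        times, events = np.asarray(times)[order], np.asarray(events)[order]
        at_risk, surv, curve = len(times), 1.0, []
        for t, grp in itertools.groupby(zip(times, events), key=lambda z: z[0]):
            grp = list(grp)
            deaths = sum(e for _, e in grp)
            if deaths:
                surv *= 1.0 - deaths / at_risk
                curve.append((float(t), surv))
            at_risk -= len(grp)          # censored subjects also leave the risk set
        return curve

    print(kaplan_meier([2, 3, 3, 5, 8, 8, 12], [1, 1, 0, 1, 0, 1, 0]))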

Comment author: Cyan 19 February 2015 01:06:24AM *  2 points [-]

I don't disagree with this. A lot of the kind of math Scott lacks is just rather complicated bookkeeping.

(Apropos of nothing, the word "bookkeeping" has the unusual property of containing three consecutive doubled letters: oo, kk, ee.)

Comment author: IlyaShpitser 13 February 2015 12:26:38PM *  6 points [-]

Causal stories in particular.

I actually disagree that having a good intuitive grasp of "stories" of this type is not a math thing, or a part of the descriptive statistics magisterium (unless you think graphical models are descriptive statistics). "Oh but maybe there is confounder X" quickly becomes a maze of twisty passages where it is easy to get lost.
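
A toy illustration of the "maybe there is confounder X" move (my own made-up example, not from any particular paper): X drives both the treatment and the outcome, so the two correlate even though the treatment does nothing.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=100_000)            # unobserved confounder
    t = x + rng.normal(size=x.size)         # "treatment", driven by x
    y = 2 * x + rng.normal(size=x.size)     # outcome, driven by x only

    print(np.corrcoef(t, y)[0, 1])          # ~0.6: a spurious association
    # regressing y on both t and x recovers a near-zero coefficient on t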


"Math things" is thinking carefully.


I think equating lots of derivation mistakes or whatever with poor math ability is: (a) toxic and (b) wrong. I think the innate ability/genius model of successful mathematicians is (a) toxic and (b) wrong. I further think that a better model for a successful mathematician is someone who is past a certain innate ability threshold who has the drive to keep going and the morale to not give up. To reiterate, I believe for most folks who post here the dominating term is drive and morale, not ability (of course drive and morale are also partly hereditary).

Comment author: Cyan 13 February 2015 04:25:56PM 2 points [-]

I have the sort of math skills that Scott claims to lack. I lack his skill at writing, and I stand in awe (and envy) at how far Scott's variety of intelligence takes him down the path of rationality. I currently believe that the sort of reasoning he does (which does require careful thinking) does not cluster with mathy things in intelligence-space.

Comment author: IlyaShpitser 12 February 2015 09:16:39AM *  10 points [-]

Thanks for writing this post, and specifically for trying to change Scott's mind. Scott's complaints about his math abilities often go like this:

"Man, I wish I wasn't so terrible at math. Now if you will excuse me, I am going to tear the statistical methodology in this paper to pieces."


Put me in as yet another "clearly not in the genius category" person in a somewhat mathy area awaiting the rest of this series. I think a lot about what "mathematical sophistication" is; I am curious what your conclusions are.


I think mathematical sophistication gets you a lot of what is called "rationality skills" here for free, basically.

Comment author: Cyan 12 February 2015 08:29:20PM *  8 points [-]

Scott's technique for shredding papers' conclusions seems to me to consist mostly of finding alternative stories that account for the data and that the authors have overlooked or downplayed. That's not really a math thing, and it plays right to his strengths.

Comment author: AndHisHorse 23 January 2015 07:30:24PM 2 points [-]

Why is this a rationality quote?

Comment author: Cyan 23 January 2015 07:47:51PM *  1 point [-]

Maybe for the bit about signalling in the last paragraph...? Just guessing here; perhaps Kawoomba will fill us in.

Comment author: XFrequentist 08 January 2015 05:18:25AM 4 points [-]

I call forth the mighty Cyan!

Comment author: Cyan 08 January 2015 06:05:09PM *  8 points [-]

I like it when I can just point folks to something I've already written.

The upshot is that there are two things going on here that interact to produce the shattering phenomenon. First, the notion of closeness permits some very pathological models to be considered close to sensible models. Second, the optimization to find the worst-case model close to the assumed model is done in a post-data way, not in prior expectation. So what you get is this: for any possible observed data and any model, there is a model "close" to the assumed one that predicts absolute disaster (or any result) just for that specific data set, and is otherwise well-behaved.

As the authors themselves put it:

The mechanism causing this “brittleness” has its origin in the fact that, in classical Bayesian Sensitivity Analysis, optimal bounds on posterior values are computed after the observation of the specific value of the data, and that the probability of observing the data under some feasible prior may be arbitrarily small... This data dependence of worst priors is inherent to this classical framework and the resulting brittleness under finite-information can be seen as an extreme occurrence of the dilation phenomenon (the fact that optimal bounds on prior values may become less precise after conditioning) observed in classical robust Bayesian inference.
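
A heuristic way to see the mechanism (this is my own sketch in generic notation, not the paper's actual construction, so treat it as illustration only): perturb the assumed model only on a small set containing the observed data.

    % assumed posterior at the observed data y*
    \pi(\theta \mid y^*) \propto p(y^* \mid \theta)\, \pi(\theta)

    % perturbed likelihood: identical off a small set B with y* \in B,
    q(y \mid \theta) = p(y \mid \theta) \ \text{for } y \notin B,
    \quad q \ \text{reweighted on } B \ \text{to favor "disaster" values of } \theta

    % the perturbation is small in prior expectation, since
    \int p(B \mid \theta)\, \pi(\theta)\, d\theta \ \text{can be made arbitrarily small,}

    % yet after conditioning on y* \in B the posterior
    q(\theta \mid y^*) \propto q(y^* \mid \theta)\, \pi(\theta)
    % depends only on the relative values of q(y^* \mid \theta),
    % which were chosen post-data.

The closeness is judged pre-data, but the damage is done post-data, which is exactly the data dependence of worst-case priors the quoted passage points to.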
