Comment author: Vaniver 27 June 2015 12:35:23AM 9 points [-]

Thanks for the detailed response! I'll respond to a handful of points:

Previously "ignorant" people feel the community has opened a new world to them, they lived in darkness before, but now they found the "Way" ("Bayescraft") and all this stuff is becoming an identity for them.

I certainly agree that there are people here who match that description, but it's also worth pointing out that there are actual experts too.

the general public, who are just irrational automata still living in the dark.

One of the things I find most charming about LW, compared to places like RationalWiki, is how much emphasis there is on self-improvement and your mistakes, not mistakes made by other people because they're dumb.

It seems that people try to prove they know some concept by using the jargon and including links to them. Instead, I'd prefer authors who actively try to minimize the need for links and jargon.

I'm not sure this is avoidable, and in full irony I'll link to the wiki page that explains why.

In general, there are lots of concepts that seem useful, but the only way we have to refer to concepts is either to refer to a label or to explain the concept. A number of people read through the sequences and say "but the conclusions are just common sense!", to which the response is, "yes, but how easy is it to communicate common sense?" It's one thing to be able to recognize that there's some vague problem, and another thing to be able to say "the problem here is inferential distance; knowledge takes many steps to explain, and attempts to explain it in fewer steps simply won't work, and the justification for this potentially surprising claim is in Appendix A." It is one thing to be able to recognize a concept as worthwhile; it is another thing to be able to recreate that concept when a need arises.

Now, I agree with you that having different labels to refer to the same concept, or conceptual boundaries or definitions that are drawn slightly differently, is a giant pain. When possible, I try to bring the wider community's terminology to LW, but this requires being in both communities, which limits how much any individual person can do.

I also don't get why the rationality stuff is intermixed with friendly AI and cryonics and transhumanism.

Part of that is just seeding effects--if you start a rationality site with a bunch of people interested in transhumanism, the site will remain disproportionately linked to transhumanism because people who aren't transhumanists will be more likely to leave and people who are transhumanists will be more likely to find and join the site.

Part of it is that those are the cluster of ideas that seem weird but 'hold up' under investigation--most of the reasons to believe that the economy of fifty years from now will look like the economy of today are just confused, and if a community has good tools for dissolving confusions you should expect them to converge on the un-confused answer.

A final part seems to be availability; people who are convinced by the case for cryonics tend to be louder than the people who are unconvinced. The annual surveys show the perception of LW one gets from just reading posts (or posts and comments) is skewed from the perception of LW one gets from the survey results.

Comment author: JonahSinick 27 June 2015 01:46:52AM *  3 points [-]

One of the things I find most charming about LW, compared to places like RationalWiki, is how much emphasis there is on self-improvement and your mistakes, not mistakes made by other people because they're dumb.

I agree that LW is much better than RationalWiki, but I still think that the norms for discussion are much too far in the direction of focus on how other commenters are wrong as opposed to how one might oneself be wrong.

I know that there's a selection effect (with respect to the more frustrating interactions standing out). But people not infrequently believe, mistakenly and with very high confidence, that I'm wrong about things that I know much more about than they do, and in such instances I find the connotation that I'm unsound exasperating.

I don't think that this is just a problem for me rather than a problem for the community in general: I know a number of very high quality thinkers in real life who are uninterested in participating on LW explicitly because they don't want to engage with commenters who assert, with high confidence, that those thinkers' positions are incorrect. There's another selection effect here: such people aren't salient because they're invisible to the online community.

Comment author: minusdash 26 June 2015 10:20:44PM 3 points [-]

Those are indeed impressive things you did. I agree very much with your post from 2010. But the fact that many people have this initial impression shows that something is wrong. What makes it look like a "twilight zone"? Why don't I feel the same symptoms for example on Scott Alexander's Slate Star Codex blog?

Another thing I could pinpoint is that I don't want to identify as a "rationalist", I don't want to be any -ist. It seems like a tactic to make people identify with a group and swallow "the whole package". (I also don't think people should identify as atheist either.)

Comment author: JonahSinick 27 June 2015 01:33:50AM 3 points [-]

I'm sympathetic to everything you say.

In my experience there's an issue of Less Wrongers being unusually emotionally damaged (e.g. relative to academics), and this gives rise to a lot of problems in the community. But I don't think that the emotional damage primarily comes from the weird stuff that you see on Less Wrong. What one sees is that they've borne the brunt of the phenomenon that I described here disproportionately relative to other smart people, often because they're unusually creative and have been marginalized by conformist norms.

Quite frankly, I find the norms in academia very creepy: I've seen a lot of people develop serious mental health problems in connection with their experiences in academia. It's hard to see it from the inside: I was disturbed by what I saw, but I didn't realize that math academia is actually functioning as a cult. That realization is based on retrospective impressions, and in fact on the implicit consensus of the best mathematicians in the world (I can give references if you'd like).

Comment author: minusdash 26 June 2015 07:14:26PM 6 points [-]

You asked about emotional stuff so here is my perspective. I have extremely weird feelings about this whole forum that may affect my writing style. My view is constantly popping back and forth between different views, like in the rabbit-duck gestalt image. On one hand I often see interesting and very good arguments, but on the other hand I see tons of red flags popping up. I feel that I need to maintain extreme mental efforts to stay "sane" here. Maybe I should refrain from commenting. It's a pity because I'm generally very interested in the topics discussed here, but the tone and the underlying ideology is pushing me away. On the other hand I feel an urge to check out the posts despite this effect. I'm not sure what aspect of certain forums have this psychological effect on my thinking, but I've felt it on various reddit communities as well.

Comment author: JonahSinick 26 June 2015 10:04:21PM *  3 points [-]

Thanks so much for sharing. I'm astonished by how much more fruitful my relationships have become since I've started asking.

I think that a lot of what you're seeing is a cultural clash: different communities have different blindspots and norms for communication, and a lot of times the combination of (i) blindspots of the communities that one is familiar with and (ii) respects in which a new community actually is unsound can give one the impression "these people are beyond the pale!" when the actual situation is that they're no less rational than members of one's own communities.

I had a very similar experience to your own coming from academia, and wrote a post titled The Importance of Self-Doubt in which I raised the concern that Less Wrong was functioning as a cult. But since then I've realized that a lot of the apparently weird beliefs of LWers are in fact also held by very credible people: for example, Bill Gates recently expressed serious concern about AI risk.

If you're new to the community, you're probably unfamiliar with my own credentials which should reassure you somewhat:

  • I did a PhD in pure math under the direction of Nathan Dunfield, who coauthored papers with Bill Thurston, who formulated the geometrization conjecture, which Perelman proved, in doing so winning one of the Clay Millennium Prizes.

  • I've been deeply involved with math education for highly gifted children for many years. I worked with the person who won the American Mathematical Society's prize for best undergraduate research when he was 12.

  • I worked at GiveWell, which partners with Good Ventures, Dustin Moskovitz's foundation.

  • I've done fullstack web development, making an asynchronous clone of StackOverflow (link).

  • I've done machine learning, rediscovering logistic regression, collaborative filtering, hierarchical modeling, the use of principal component analysis to deal with multicollinearity, and cross validation. (I found the expositions so poor that it was faster for me to work things out on my own than to learn from them, though I eventually learned the official versions.) You can read some details of things that I found here. I did a project implementing Bayesian adjustment of Yelp restaurant star ratings using their public dataset here.

So I imagine that I'm credible by your standards. There are other people involved in the community whom you might find even more credible. For example: (a) Paul Christiano, who was an international math olympiad medalist, wrote a 50-page paper on quantum computational complexity with Scott Aaronson as an undergraduate at MIT, and is a theoretical CS grad student at Berkeley. (b) Jacob Steinhardt, a Hertz graduate fellow who does machine learning research under Percy Liang at Stanford.

So you're not actually in some sort of twilight zone. I share some of your concerns with the community, but the groupthink here is no stronger than the groupthink present in academia. I'd be happy to share my impressions of the relative soundness of the various LW community practices and beliefs.

Comment author: RomeoStevens 26 June 2015 06:29:28PM *  3 points [-]

I think having the concept of PCA prevents some mistakes in reasoning at an intuitive, day-to-day level. It nudges me towards fox thinking instead of hedgehog thinking. Normal folk intuition grasps at the most cognitively available and obvious variable to explain causes, and then our System 1 acts as if that variable explains most if not all of the variance. Looking at PCA results many times (and being surprised by them) makes me less likely to jump to conclusions about the causal structure of clusters of related events. So maybe I could characterize it as giving System 1 an intuition for not committing the post hoc ergo propter hoc fallacy.

Maybe part of the problem Jonah is running into explaining it is that having done many, many example problems with System 2 has loaded it into his System 1, and the System 1 knowledge is what he really wants to communicate?

Comment author: JonahSinick 26 June 2015 07:04:55PM *  0 points [-]

Yes, you seem to have a very clear understanding of where I'm coming from. Thanks.

Comment author: JonahSinick 26 June 2015 06:26:56PM *  1 point [-]

See Rationality is about pattern recognition, not reasoning.

Your tone is condescending, far outside of politeness norms. In the past I would have uncharitably written this off as you being depraved, but I've realized that I should be making a stronger effort to understand other people's perspectives. So can you help me understand where you're coming from on an emotional level?

Comment author: JonahSinick 26 June 2015 06:36:30PM *  0 points [-]

See my edit. Part of where I'm coming from is realizing how socially undeveloped people in our reference class tend to be, such that apparent malice often comes from misunderstandings.

Comment author: minusdash 26 June 2015 03:29:36PM *  5 points [-]

Qualitative day-to-day dimensionality reduction sounds like woo to me. Not a bit more convincing than quantum woo (Deepak Chopra et al.). Whatever you're doing, it's surely not like doing SVD on a data matrix or eigen-decomposition on the covariance matrix of your observations.

Of course, you can often identify motivations behind people's actions. A lot of psychology is basically trying to uncover these motivations. Basically an intentional interpretation and a theory of mind are examples of dimensionality reduction in some sense. Instead of explaining behavior by reasoning about receptors and neurons, you imagine a conscious agent with beliefs, desires and intentions. You could also link it to data compression (dimensionality reduction is a sort of lossy data compression). But I wouldn't say I'm using advanced data compression algorithms when playing with my dog. It just sounds pretentious and shows a desperate need to signal smartness.

So, what is the evidence that you are consciously doing something similar to PCA in social life? Do you write down variables and numbers? How should I imagine qualitative dimensionality reduction, and how is it different from somebody just forming an opinion intuitively and then justifying it afterwards?
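For concreteness, the mechanical operation in question can be shown in a few lines. This is a generic sketch on random, purely illustrative data; it demonstrates that the two computations mentioned above (SVD of the centered data matrix, and eigendecomposition of its covariance matrix) yield the same principal components:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # 200 observations, 5 variables
Xc = X - X.mean(axis=0)                # center the data

# Route 1: eigendecomposition of the covariance matrix
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]      # sort by variance explained, descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Route 2: SVD of the centered data matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
svd_vals = s**2 / (len(Xc) - 1)        # singular values -> variances

# The two routes agree (component vectors match up to sign)
assert np.allclose(eigvals, svd_vals)
```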

Comment author: minusdash 26 June 2015 02:43:42PM 13 points [-]

"impression that more advanced statistics is technical elaboration that doesn't offer major additional insights"

Why did you have this impression?

Sorry for the off-topic comment, but I see this a lot in LessWrong (as a casual reader). People seem to focus on textual, deep-sounding, wow-inducing expositions, but often dislike the technicalities: getting their hands dirty with actually understanding calculations, equations, formulas, details of algorithms, etc. (calculations that don't tickle those wow-receptors that we all have). As if these were merely minor additions to the really important big-picture view. As I see it, this movement seems to be trying to build up a new backbone of knowledge from scratch. But in doing this they repeat the mistakes of past philosophers. For example, going for the "deep", outlook-transforming texts that often give a delusional feeling of "oh, now I understand the whole world". It's easy to have wow-moments without actually having understood something new.

So yes, PCA is useful and most statistics and maths and computer science is useful for understanding stuff. But then you swing to the other extreme and say "ideas from advanced statistics are essential for reasoning about the world, even on a day-to-day level". Tell me how exactly you're planning to use PCA day-to-day? I think you may mean you want to use some "insight" that you gained from it. But I'm not sure what that would be. It seems to be a cartoonish distortion that makes it fit into an ideology.

Anyway, mainstream machine learning is very useful. And it's usually much more intricate and complicated than to be able to produce a deep everyday insight out of it. I think the sooner you lose the need for everything to resonate deeply or have a concise insightful summary, the better.

Comment author: JonahSinick 26 June 2015 03:02:10PM *  4 points [-]

Why did you have this impression?

Groupthink, I guess: other people whom I knew didn't think it was so important (despite being people who are very well educated by conventional standards, top ~1% of elite colleges).

Tell me how exactly you're planning to use PCA day-to-day?

Disclaimer: I know that I'm not giving enough evidence to convince you: I've thought about this for thousands of hours (including working through many quantitative examples) and it's taking me a long time to figure out how to organize what I've learned.

I already have been using dimensionality reduction (qualitatively) in my day-to-day life, and I've found that it's greatly improved my interpersonal relationships because it's made it much easier to guess where people are coming from (before, people's social behavior had seemed like a complicated blur because I saw so many variables without having started to correctly identify the latent ones).

I think the sooner you lose the need for everything to resonate deeply or have a concise insightful summary, the better.

You seem to be making overly strong assumptions with insufficient evidence: how would you know whether this was the case, never having met me? ;-)

Comment author: ChristianKl 26 June 2015 12:12:06PM 1 point [-]

The DSM is a mess, but I think the problem isn't that there aren't people who understand PCA. There are political reasons inside of the American Psychiatric Association that led it to use definitions that aren't data-driven.

It seems to me like to make major contributions to human knowledge you need to do a lot more than say "Hey, PCA is really great". You actually have to understand the reasons why people aren't using it and fix those reasons.

PCA is over a hundred years old, and there have been a lot of people thinking that it should be used more. I think I have argued for PCR at various times in the past. The last time was when talking about the design of http://www.omnilibrium.com/ and how it should find factors for political labels via PCR instead of just using the left-right framework. I think I made the same argument for the LW census political labels.

It's very suspicious that it maps onto a preexisting notion, and it's just not that predictive. I got lots of C's and D's in school, but worked 90 hours a week for 12 weeks on my speed dating project.

You say that it's not predictive of the preexisting notion. That doesn't mean that various things haven't been predicted with it. Big Five ratings have been predicted by analysing Facebook posts.

Comment author: JonahSinick 26 June 2015 02:51:46PM *  0 points [-]

It seems to me like to make major contributions to human knowledge you need to do a lot more than say "Hey, PCA is really great". You actually have to understand the reasons why people aren't using it and fix those reasons.

Have you read my speed dating project posts? I haven't yet written up the most important one, on demographics (I can do that soon, just many conflicting priorities), but the one on individual variation in revealed preferences for attractiveness vs. intelligence and sincerity starts to get at what I'm talking about.

My project gives a proof of concept for what I'm talking about in the context of social psychology. I've never seen such an application. So no, it's not just the realization that it could be applied, it's also giving a proof of concept: that's why it took ~1500 hours rather than ~10 hours.

As far as I can tell, the situation is simply that deep knowledge of the technique hasn't yet percolated into the social psychology community, and people who do have the relevant background knowledge haven't actually tried doing social psychology research. All you need is to notice something that's been missed. There are many such things (see Peter Thiel's discussion of how there are still secrets in his book "Zero to One").

If I recall correctly, Freeman Dyson has indicated that his demonstration of the equivalence of the two different formulations of quantum electrodynamics isn't as amazing as people believe, but was largely a function of him being one of the first people to learn both formulations! :-)

So I'd strongly encourage you to pursue your ideas more. I've been looking some at the General Social Survey data, where I haven't yet found something highly nontrivial (maybe I'm looking at the data the wrong way, or maybe it's just not a good dataset for this). I'd be happy to share my code with you / a cleaned form of the data, if you're interested in exploring factors for political labels.

Beyond Statistics 101

19 JonahSinick 26 June 2015 10:24AM

Is statistics beyond introductory statistics important for general reasoning?

Ideas such as regression to the mean, the fact that correlation does not imply causation, and the base rate fallacy are very important for reasoning about the world in general. One gets these from a deep understanding of statistics 101 and the basics of the Bayesian statistical paradigm. Up until one year ago, I was under the impression that more advanced statistics is technical elaboration that doesn't offer major additional insights into thinking about the world in general.

Nothing could be further from the truth: ideas from advanced statistics are essential for reasoning about the world, even on a day-to-day level. In hindsight my prior belief seems very naive – as far as I can tell, my only reason for holding it is that I hadn't heard anyone say otherwise. But I hadn't actually looked at advanced statistics to see whether or not my impression was justified :D.

Since then, I've learned some advanced statistics and machine learning, and the ideas that I've learned have radically altered my worldview. The "official" prerequisites for this material are calculus, multivariable calculus, and linear algebra. But one doesn't actually need detailed knowledge of these to understand ideas from advanced statistics well enough to benefit from them. The problem is pedagogical: I need to figure out how to communicate them in an accessible way.

Advanced statistics enables one to reach nonobvious conclusions

To give a bird's eye view of the perspective that I've arrived at: in practice, the ideas from "basic" statistics are useful primarily for disproving hypotheses. This pushes in the direction of a state of radical agnosticism: the idea that one can't really know anything for sure about lots of important questions. More advanced statistics enables one to become justifiably confident in nonobvious conclusions, often even in the absence of formal evidence coming from standard scientific practice.
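One small illustration of the kind of nonobvious-but-justified conclusion meant here (a generic textbook-style sketch, not the author's actual method; the prior parameters are invented for the example): Bayesian shrinkage implies that an item with a perfect record over 3 reviews should often be ranked below an item with a slightly worse record over 100 reviews, reversing the naive-average ordering.

```python
# A perfect 3-for-3 record vs. a 90-for-100 record: which item is more
# likely to actually be better? Treat each review as thumbs-up/down and
# shrink the observed rate toward a Beta prior (parameters made up here).
alpha, beta = 8, 2   # assumed prior: a typical item is ~80% positive

def adjusted(pos, total):
    """Posterior mean positive rate under a Beta(alpha, beta) prior."""
    return (alpha + pos) / (alpha + beta + total)

small = adjusted(3, 3)      # perfect record, tiny sample:  11/13 ~ 0.846
large = adjusted(90, 100)   # slightly worse, large sample: 98/110 ~ 0.891

# Naive averages say 1.00 > 0.90; the posterior reverses the ranking.
assert large > small
```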

IQ research and PCA as a case study

In the early 20th century, the psychologist and statistician Charles Spearman discovered the g-factor, which is what IQ tests are designed to measure. The g-factor is one of the most powerful constructs that's come out of psychology research. There are many factors that played a role in enabling Bill Gates's ability to save perhaps millions of lives, but one of the most salient is his IQ being in the top ~1% of his class at Harvard. IQ research helped the Gates Foundation to recognize iodine supplementation as a nutritional intervention that would improve socioeconomic prospects for children in the developing world.

The work of Spearman and his successors on IQ constitutes one of the pinnacles of achievement in the social sciences. But while Spearman's discovery of IQ was a great discovery, it wasn't his greatest discovery. His greatest discovery was a discovery about how to do social science research: he pioneered the use of factor analysis, a close relative of principal component analysis (PCA).
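A minimal simulation can illustrate what Spearman's method detects (purely synthetic data; the loadings and noise levels are invented for the example). Five "test" scores are generated from a single latent ability factor, and the leading eigenvalue of their covariance matrix then accounts for most of the total variance, mirroring the pattern Spearman observed in batteries of cognitive tests:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
g = rng.normal(size=n)                        # latent general ability
loadings = np.array([0.9, 0.8, 0.85, 0.7, 0.75])
noise = rng.normal(size=(n, 5)) * 0.4
scores = g[:, None] * loadings + noise        # five correlated "test" scores

Xc = scores - scores.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Xc.T))[::-1]   # descending eigenvalues
explained = eigvals / eigvals.sum()

# A single factor accounts for the bulk of the variance across all tests.
assert explained[0] > 0.7
```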

The philosophy of dimensionality reduction

PCA is a dimensionality reduction method. Real-world data often has a surprising property: a small number of latent variables explains a large fraction of the variance in the data.

This is related to the effectiveness of Occam's razor: it turns out to be possible to describe a surprisingly large amount of what we see around us in terms of a small number of variables. Only, the variables that explain a lot usually aren't the ones that are immediately visible; instead they're hidden from us, and in order to model reality, we need to discover them, which is the function that PCA serves. The small number of variables that drive a large fraction of the variance in the data can be thought of as a sort of "backbone" of the data. That enables one to understand the data at a "macro / big picture / structural" level.
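A small sketch of this "backbone" phenomenon, with synthetic data whose structure is known by construction (the dimensions and noise level are arbitrary choices for illustration): fifty observed variables secretly driven by three latent ones, where PCA recovers the fact that three directions carry nearly all the variance.

```python
import numpy as np

rng = np.random.default_rng(2)
# 50 observed variables secretly driven by only 3 latent ones
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 50))
X = latent @ mixing + 0.05 * rng.normal(size=(500, 50))

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (s**2) / (s**2).sum()   # fraction of variance per direction

# The "backbone": 3 of 50 directions carry nearly all the variance.
assert explained[:3].sum() > 0.99
```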

This is a very long story that will take a long time to flesh out, and doing so is one of my main goals. 

Comment author: Nornagest 26 June 2015 06:46:47AM *  0 points [-]

Consider the construct of conscientiousness. It's very suspicious that it maps onto a preexisting notion...

Is it? We've been modeling each other as long as language has existed. Conscientiousness might not correspond to a single well-defined causal system in the brain, but it would be no surprise to me at all to find common words in most languages for close empirical clusters in personality-space. And the Big 5 factors are very much empirical constructs, not causal.

Comment author: JonahSinick 26 June 2015 07:04:40AM 0 points [-]

Ok, I guess what I mean is that it's suspicious that it maps onto a preexisting notion held by the general population, in the same way that it would be suspicious for psychology research to apparently show the existence of demon possession (which humans have in fact believed in). I wouldn't find it suspicious if it mapped onto a notion of someone with demonstrated exceptional ability to read and connect with people (e.g. Bill Clinton).

The way scientific progress occurs is by developing progressively more refined understandings of what's going on: for example, passing from the Ptolemaic model of the stars and planets to the Copernican model to the Newtonian model to Einstein's theory of general relativity. One can't hope to understand reality if one isn't flexible enough to recognize that things might be very different from how they initially appear.
