
Comment author: query 15 December 2015 08:46:11PM 1 point [-]

Beautifully written; thank you for sharing this.

Comment author: Gleb_Tsipursky 23 November 2015 12:42:29AM *  7 points [-]

use every deviation from perfection as ammunition against even fully correct forms of good ideas.

As a professional educator and communicator, I have deep visceral experience with how "fully correct forms of good ideas" are inherently incompatible with bridging the inferential distance between the ordinary Lifehack reader and the kind of thinking space on Less Wrong. Believe me, I have tried to explain more complex ideas from rationality to students many times. Moreover, I have tried to get more complex articles into Lifehack and elsewhere many times. They have all been rejected.

This is why it's not possible for the lay audience to read scientific papers, or even the Sequences. This is why we have to digest the material for them, and present it in sugar-coated pills.

To be clear, I am not speaking of talking down to audiences. I like sugar-coated pills myself when I take medicine. To use an example related to knowledge, when I am offered information on a new subject, I first have to be motivated to want to engage with the topic, then learn the basic broad generalities, and only then go on to learn more complex things that represent the "fully correct forms of good ideas."

This is the way education works in general. This is especially the case for audiences who are not trapped in the classroom like my college students. They have to be motivated to invest their valuable time into learning about a new topic. They have to really feel it's worth their time and energy.

This is why the material has to be presented in an entertaining and engaging way, while also containing positive memes. Listicles are simply the most entertaining and engaging format that also deals with the inferential gap. The listicles offer breadcrumbs in the form of links for more interested readers to follow to get to the more complex material and develop their knowledge over time, slowly bridging that inferential gap. More on how we do this in my comment here.

I can't find any discussion in the linked article about why research is a key way of validating truth claims

The article doesn't discuss why research is a key way of validating truth claims. Instead of telling, it shows that research is a key way of validating truth claims. Here is a section from the article:

Smiling and other mood-lifting activities help improve willpower. In a recent study, scientists first drained the willpower of participants through having them resist temptation. Then, for one group, they took steps to lift people’s moods, such as giving them unexpected gifts or showing them a funny video. For another group, they just let them rest. Compared to people who just rested for a brief period, those whose moods were improved did significantly better in resisting temptation later! So next time you need to resist temptation, improve your mood!

This discussion of a study validating the truth claim "improving mood = higher willpower" demonstrates - not tells, but shows - the value of scientific studies as a way to validate truth claims. This is the first point in the article. In the rest of the article, I link to studies or articles linking to studies without going over the study itself, since I already discussed a study and demonstrated to Lifehack readers that studies are a powerful form of evidence for evaluating truth claims.

Now, I hear you when you say that while some people may benefit by trying to think like scientists more and consider how to study the world in order to validate claims, others will be simply content to rely on science as a source of truth. While I certainly prefer the former, I'll take the latter as well. How many global warming or evolution deniers are there, including among Lifehack readers? How many refuse to follow science-informed advice on not smoking and other matters? In general, if the lesson they learn is to follow the advice of scientists, instead of religious preachers or ideological politicians from any party, this will be a better outcome for the world, I would say.

what if the distribution of response is bimodal, with some readers liking it a little bit and some readers absolutely loathing it to the point of sharing their disgust with friends

I have an easy solution for that one. Lifehack editors carefully monitor the sentiment of social media reactions to their articles, and if there are negative reactions, they let writers know. They did not let me know of any significant negative reactions to my article above the baseline, which is an indication that the article has been highly positively received by their audience and by those they share it with.

I think I presented plenty of information in my two long comments in response to your concerns. So what are your probabilities of the worst-case scenario and horrific long-term impact now? Still at 20%? Is your impression of the net positive of my activities still at 30%? If so, what information would it take to shift your thinking?

EDIT: added link to my other comment

Comment author: query 23 November 2015 02:26:15PM *  2 points [-]

EDIT: On reflection, I want to tap out of this conversation. Thanks for the responses.

Comment author: Gleb_Tsipursky 20 November 2015 04:56:52AM 11 points [-]

I really appreciate you sharing your concerns. It helps me and others involved in the project learn what to avoid going forward and how to optimize our methods. Thank you for laying them out so clearly! I think this comment will be something that I will come back to in the future as I and others create content.

I want to see if I can address some of the concerns you expressed.

In my writing for venues like Lifehack, I do not speak of rationality explicitly as something we are promoting. As in this post, I talk about growing mentally stronger or being intentional - euphemisms that do not associate rationality as such with what we're doing. I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point of watering down rationality.

I would question the point about arguing from authority. One of the goals of Intentional Insights is to convey what science-based itself means. For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we are all suffering from the curse of knowledge on this point. How can we expect to teach people who do not know what science-based means without teaching it to them in the first place? Do you remember when you were at a stage when you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? This is what I'm doing in that article above. Hope this helps address some of the concerns about arguing from authority.

I hear you about the inauthentic feeling writing style. As I told Lumifer in my comment below, I cringed at that when I was learning how to write that way, too. You can't believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It's very ughy. This writing style is much more natural for me. So is this.

However, this inauthentic-feeling writing style is the writing style needed to get into Lifehack. I have been trying to change my writing style to get into venues like that for the last year and a half, and only succeeded in changing my writing style in the last couple of months sufficiently to be published in Lifehack. Unfortunately, when trying to spread good ideas to the kind of people who read Lifehack, it's necessary to use the language and genre and format that they want to read, and that the editors publish. Believe me, I also had my struggles with editors there who cut out more complex points and links to any scientific papers as too complex for their audience.

This gets at the broader point of who reads these articles. I want to quote a comment that Tem42 made in response to Lumifer:

Unless you mean simply the site that it is posted on smells of snake oil. In that case I agree, but at the same time, so what? The people that read articles on that site don't smell snake oil, whether they should or not. If the site provides its own filter for its audience, that only makes it easier for us to present more highly targeted cognitive altruism.

Indeed, the site itself provides a filter. The people who read that site are not like you and me. Don't fall for the typical mind fallacy here. They have complete cognitive ease with this content. They like to read it. They like to share it. This is the stuff they go for. My articles are meant to go higher than their average, such as this or this, conveying both research-based tactics applicable to daily life and frameworks of thinking conducive to moving toward rationality (without using the word, as I mentioned above). Hope this helps address the concerns about the writing style and the immunization of people to good ideas, since the readers of this content are specifically looking for this kind of writing style.

Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?

Comment author: query 21 November 2015 05:43:26AM 1 point [-]

Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?

Effectively no. I understand that you're aware of these risks and are able to list mitigating arguments, but the weight of those arguments does not resolve my worries. The things you've just said aren't different in gestalt from what I've read from you.

To be potentially more helpful, here's a few ways the arguments you just made fall flat for me:

I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point of watering down rationality.

Connectivity to the rationalist movement or "rationality" keyword isn't necessary to immunize people against the ideas. You're right that if you literally never use the word "bias" then it's unlikely my nightmare imaginary conversational partner will have a strong triggered response against the word "bias", but if they respond the same way to the phrase "thinking errors" or realize at some point that's the concept I'm talking about, it's the same pitfall. And in terms of catalyzing opposition, there is enough connectivity for motivated antagonists to make such connections and use every deviation from perfection as ammunition against even fully correct forms of good ideas.

For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we are all suffering from the curse of knowledge on this point. How can we expect to teach people who do not know what science-based means without teaching it to them in the first place? Do you remember when you were at a stage when you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? This is what I'm doing in that article above. Hope this helps address some of the concerns about arguing from authority.

I can't find any discussion in the linked article about why research is a key way of validating truth claims; did you link the correct article? I also don't know if I understand what you're trying to say; to reflect back, are you saying something like "People first need to be convinced that scientific studies are of value, before we can teach them why scientific studies are of value." ? I ... don't know about that, but I won't critique that position here since I may not be understanding.

(...) Hope this helps address the concerns about the writing style and the immunization of people to good ideas, since the readers of this content are specifically looking for this kind of writing style.

You seem to be saying that since the writing is of the form needed to get on Lifehack, and since in fact people are reading it on Lifehack, that they will then not suffer from any memetic immunization via the ideas. First, not all immunization is via negative reactions; many people think science is great, but have no idea how to do science. Such people can be in a sense immunized from learning to understand the process; their curiosity is already sated, and their decisions made. Second, as someone mentioned somewhere else on this comment stream, it's not obvious that the Lifehack readers who end up looking at your article will end up liking or agreeing with your article.

You're clearly getting some engagement, which is suggestive of positive responses, but what if the distribution of response is bimodal, with some readers liking it a little bit and some readers absolutely loathing it to the point of sharing their disgust with friends? Google searches reveal negative reactions to your materials as well. The net impact is not obviously positive.

Comment author: query 19 November 2015 08:14:38PM *  19 points [-]

I have not a clue whether this sort of marketing is a good idea. Let me be clear what I mean: I think there's maybe a 30-40% chance that Gleb is having a net positive impact through these outreach efforts. I also think there's maybe a 10-20% chance that he's having a horrific long-term negative impact through these outreach efforts. Thus the whole thing makes me uncomfortable.

So here's some of the concerns I see; I've gone to some effort to be fair to Gleb, and not assume anything about his thoughts or motivations:

  • By presenting these ideas in weakened forms (either by giving short or invalid argumentation, or putting it in venues or contexts with negative associations), he may be memetically immunizing people against the stronger forms of the ideas.
  • By teaching people using arguments from authority, he may be worsening the primary "sanity waterline" issues rather than improving them. The articles, materials, and comments I've seen make heavy use of language like "science-based", "research-based" and "expert". The people reading these articles in general have little or no skill at evaluating such claims, so that they effectively become arguments from authority. By rhetorically convincing them to adopt the techniques or thoughts, he's spreading quite possibly helpful ideas, but reinforcing bad habits around accepting ideas.
  • Gleb's writing style strikes me as very inauthentic-feeling. Let me be clear I don't mean to accuse him of anything negative; but I intuitively feel a very negative reaction to his writing. It triggers emotional signals in me of attempted deception and rhetorical tricks (whether or not this is his intent!). His writing risks associating "rationality" with such signals (should other people share my reactions) and again causing immunization, or even catalyzing opposition.

An illustration of the nightmare scenario from such an outreach effort would be that, 3 years from now when I attempt to talk to someone about biases, they respond by saying "Oh god don't give me that '6 weird tips' bullshit about 'rational thinking', and spare me your godawful rhetoric, gtfo."

Like I said at the start, I don't know which way it swings, but those are my thoughts and concerns. I imagine they're not new concerns to Gleb. I still have these concerns after reading all of the mitigating argumentation he has offered so far, and I'm not sure of a good way to collect evidence about this besides running absurdly large long-term "consumer" studies.

I do imagine he plans to continue his efforts, and thus we'll find out eventually how this turns out.

Comment author: query 13 November 2015 03:28:45PM 4 points [-]

I disagree with your conclusion. Specifically, I disagree that

This is, literally, infinitely more parsimonious than the many worlds theory

Your reasoning isn't tight enough to have confidence answering questions like these. Specifically,

  • What do you mean by "simpler"?
  • Specifically how does physics "take into account the entire state of the universe"?

In order to actually say anything like the second that's consistent with observations, I expect your physical laws to become much less simple (re: Bell's theorem implying non-locality, maybe; see Scott Aaronson's blog).

A basic error you're making is equating simplicity of physical laws with small ontology. For instance, Google just told me there are ~10^80 atoms in the observable universe (± a few orders of magnitude), but this is no blow against the atomic theory of matter. You can formalize this interplay via "minimum message length" for a finite, fully described system; check Wikipedia for details.
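To make the minimum message length point concrete (a rough sketch in standard MML notation, not part of the original comment): hypotheses H for data D are compared by the total two-part code length

L(H) + L(D | H),

where L(H) is the number of bits needed to state the hypothesis and L(D | H) is the number of bits needed to encode the observations given it. A compact set of physical laws pays only for its own description; the ~10^80 atoms it implies add nothing to L(H), which is why a simple theory can carry a very large ontology.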

Even though MWI implies a large ontology, it's just a certain naive interpretation of our current local description of quantum mechanics. It's hard to see how there could be a global description that is simpler, though I'd be interested to see one. (Here local/global mean "dependent on things nearby" vs "dependent on things far away", which of course is assuming that ontology.)

With all kindness, the strength of your conclusion is far out of proportion to the argument you've made. The linked paper looks like nonsense to me. I would recommend studying some basic textbook math and physics if you're truly interested in this subject, although be prepared for a long and humbling journey.

Comment author: Lumifer 10 August 2015 03:48:49PM 12 points [-]

if I could give them back just ten minutes of their lives, most of them wouldn’t be here.

He's wrong about that. He would need to give them back 10 minutes of their lives, and then keep on giving them back different 10 minutes on a very regular basis.

The remainder of the post actually argues that persistent, stable "reflexes" are the cause of bad decisions and those certainly are not going to be fixed by a one-time gift of 10 minutes.

Comment author: query 10 August 2015 08:50:55PM 3 points [-]

The model is that persistent reflexes interact with the environment to give black swans; singular events with extremely high legal consequence. To effectively avoid all of them preemptively requires training the stable reflexes, but it could be that "editing out" only a few 10 minute periods retroactively would still be enough (those few periods when reflexes and environment interact extremely negatively.) So I think the "very regular basis" claim isn't substantiated.

That said, we can't actually edit retroactively anyway.

Comment author: ChristianKl 20 July 2015 11:49:28AM *  0 points [-]

Being too vague to be wrong is bad, especially when you want to speak in favor of science.

I don't see any mention of formalism in the OP. There's no reason to say "well maybe the author meant to say X" when he didn't say X.

Comment author: query 20 July 2015 07:08:40PM 1 point [-]

Being too vague to be wrong is bad, especially when you want to speak in favor of science.

I agree, it's good to pump against entropy with things that could be "Go Science!" cheers. I think the author's topic is not too vague to discuss, but his argument isn't strong or specific enough that you should leap to action based solely on it. I think it's a fine thing to post to Discussion, though; maybe this indicates we have different ideal standards for Discussion posts?

There's no reason to say "well maybe the author meant to say X" when he didn't say X.

Sure there is! Principle of charity, interpreting what they said in different language to motivate further discussion, rephrasing for your own understanding (and opening yourself to being corrected). Sometimes someone waves their hands in a direction, and you say "Aha, you mean..."

Above the author says "I think query worded it better", which is the sort of thing I was aiming to accomplish.

Comment author: g_pepper 19 July 2015 03:35:11PM 3 points [-]

Good post.

A couple of nitpicks...

Similarly, a lot of modern medicine is rational, but not too scientific. A doctor sees something and it looks like a common ailment with similar symptoms they've seen often before, so they just assume that's what it is. They may run a test to verify their guess.

Actually, this illustrates scientific thinking; the doctor forms a hypothesis based on observation and then experimentally tests that hypothesis.

Even math curriculums are structured around calculus instead of the much more useful statistics and data science, placing ridiculous hurdles for the typical non-major that most won't surmount.

Actually, the natural sciences (physics in particular) are heavily dependent on calculus. Ditto for engineering. In fact, a solid understanding of Bayesian statistics requires a grounding in calculus. So, I don't think it is true that statistics and data science are "much more useful" than calculus.
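As a quick illustration of that last point (my own example, not g_pepper's): even the basic Bayesian update for a continuous parameter θ,

p(θ | D) = p(D | θ) p(θ) / ∫ p(D | θ') p(θ') dθ',

already needs an integral for the normalizing constant, so working seriously with continuous priors and posteriors presupposes calculus.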

Comment author: query 19 July 2015 05:50:32PM 0 points [-]

Actually, this illustrates scientific thinking; the doctor forms a hypothesis based on observation and then experimentally tests that hypothesis.

Most interactions in the world are of the form "I have an idea of what will happen, so I do X, and later I get some evidence about how correct I was". So, taking that as a binary categorization of scientific thinking is not so interesting, though I endorse promoting reflection on the fact that this is what is happening.

I think the author intends to point out some of the degrees of scientificism by which things vary: how formal is the hypothesis, how formal is the evidence gathering, are analytical techniques being applied, etc. Normal interactions with doctors are low on scientificism in this sense, though they are heavily utilizing the output of previous scientificism to generate a judgement.

Comment author: query 18 July 2015 10:33:12PM 6 points [-]

I think it would be good to separate the analysis into FGCAs that are always fallacious versus those that are only warning signs/rude. For instance, the fallacy of grey is indeed a fallacy, so using it as a counter-argument is a wrong move regardless of its generality.

However, it may in fact be that your opponent is a very clever arguer or that the evidence they present you has been highly filtered. Conversationally, using these as a counter-argument is considered rude (and rightly so), and the temptation to use them is often a good internal warning sign; however you don't want to drop consideration of them from your mental calculus. For instance, perhaps you should be motivated after the conversation to investigate alternative evidence if you're suspicious that the evidence presented to you was highly filtered.

Comment author: [deleted] 06 June 2015 10:15:42PM 1 point [-]

I don't think it's an active waste of time to explore the research that can be done with things like AIXI models. I do, however, think that, for instance, flaws of AIXI-like models should be taken as flaws of AIXI-like models, rather than generalized to all possible AI designs.

So for example, some people (on this site and elsewhere) have said we shouldn't presume that a real AGI or real FAI will necessarily use VNM utility theory to make decisions. For various reasons, I think that exploring that idea-space is a good idea, in that relaxing the VNM utility and rationality assumptions can both take us closer to how real, actually-existing minds work, and to how we normatively want an artificial agent to behave.

Comment author: query 06 June 2015 10:56:32PM 0 points [-]

Modulo nitpicking, agreed on both points.
