What is your opinion on rationality-promoting articles by Gleb Tsipursky / Intentional Insights? Here is what I think:

Trying to teach someone to think rationally is a long process -- maybe even impossible for some people. It means explaining the many biases people fall into naturally, and demonstrating the futility of "mysterious answers" at a gut level; meanwhile the student needs the desire to become stronger, the humility of admitting "I don't know" together with the courage to give a probabilistic answer anyway, and the discipline to resist the temptation to use the new skills to cleverly shoot themselves in the foot, keeping the focus on the "nameless virtue" instead of on signalling (even towards fellow rationalists). It is a LW lesson that being a half-rationalist can hurt you, and being a 3/4-rationalist can fuck you up horribly. And online clickbait articles seem like one of the worst choices of medium for teaching rationality. (The only worse choice that comes to my mind would be Twitter.)

On the other hand, imagine that you have a magical button, and if you press it, all not-sufficiently-correct-by-LW-standards mentions of rationality (or logic, or science) would disappear from the world -- not replaced by something more lesswrongish, but simply by whatever else usually appears in the given medium. Would pressing that button make the world a saner place? What would have happened if someone had pressed that button a hundred years ago? In other words, I'm trying to avoid the "nirvana fallacy" -- I am not asking whether those articles are the perfect vehicle for x-rationality, but rather whether they are a net benefit or a net harm. Because if they are a net benefit, then it's better to have them, isn't it?

Assuming that the articles are not merely ignored (where "ignoring" includes "thousands of people with microscopic attention spans read them and then forget them immediately"), the obvious failure mode is people getting wrong ideas, or adopting "rationality" as attire. Is that really so bad? Don't people already have absurdly wrong ideas about rationality? Remember all the "straw Vulcans" produced by the movie industry: Terminator, The Big Bang Theory... Rationality is already associated with being a sociopathic villain, or a pathetic nerd. This is where we are now; the "rationality" clickbait, however sketchy, cannot make it worse. Actually, it can make a few people interested in learning more. At least, it can show people that there is more than one possible meaning of the word.

To me it seems that Gleb is picking the low-hanging fruit that most rationalists wouldn't even touch for... let's admit it... status reasons. He talks to the outgroup, using the language of the outgroup. But if we look at the larger picture, that specific outgroup (people who procrastinate by reading clickbaity self-improvement articles) actually isn't that different from us. They may actually be our nearest neighbors in human intellectual space. So what some of us (including myself) feel here is the uncanny valley: looking at someone so similar to ourselves, yet so dramatically different in a few small details that matter strongly to us, feels creepy.

Yes, this whole idea of marketing rationality feels wrong. Marketing is almost the very opposite of epistemic rationality ("the bottom line" et cetera). On the other hand, any attempt to bring rationality to the masses will inevitably bring some distortion, which can hopefully be fixed later, once we already have people's attention. So why not accept the imperfection of the world and just do what we can?

As a sidenote, I don't believe we are at risk of having an "Eternal September" on LessWrong (more than we already have). More people interested in rationality (or "rationality") will also mean more places to debate it; not everyone will come here. People have their own blogs, social network accounts, et cetera. If rationality becomes the cool thing, they will prefer to debate it with their friends.

EDIT: See this comment for Gleb's description of his goals.

221 comments

I have not a clue whether this sort of marketing is a good idea. Let me be clear what I mean: I think there's maybe a 30-40% chance that Gleb is having a net positive impact through these outreach efforts. I also think there's maybe a 10-20% chance that he's having a horrific long-term negative impact through these outreach efforts. Thus the whole thing makes me uncomfortable.

So here are some of the concerns I see; I've gone to some effort to be fair to Gleb, and not to assume anything about his thoughts or motivations:

  • By presenting these ideas in weakened forms (either by giving short or invalid argumentation, or putting it in venues or contexts with negative associations), he may be memetically immunizing people against the stronger forms of the ideas.
  • By teaching people using arguments from authority, he may be worsening the primary "sanity waterline" issues rather than improving them. The articles, materials, and comments I've seen make heavy use of language like "science-based", "research-based" and "expert". The people reading these articles in general have little or no skill at evaluating such claims, so that they effectively become...

I really appreciate you sharing your concerns. It helps me and others involved in the project learn more about what to avoid going forward and optimize our methods. Thank you for laying them out so clearly! I think this comment will be something that I will come back to in the future as I and others create content.

I want to see if I can address some of the concerns you expressed.

In my writing for venues like Lifehack, I do not speak of rationality explicitly as something we are promoting. As in this post, I talk about growing mentally stronger or being intentional - euphemisms that do not associate rationality as such with what we're doing. I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point of watering down rationality.

I would question the point about arguing from authority. One of the goals of Intentional Insights is to convey what science-based itself means. For example, in this article, I specifically discuss research studies as a key way of validating truth c...

5 · John_Maxwell · 8y
One idea is to try to teach your audience about overconfidence first, e.g. the way this game does with the calibration questions up front. See also.
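For concreteness, a minimal sketch of the kind of up-front calibration check being suggested (the questions are invented; this is not the linked game's actual content):

    # A toy calibration check: ask for an answer plus a confidence level,
    # then compare average stated confidence with the actual hit rate.
    questions = [
        ("Is the Pacific Ocean larger than the Atlantic? (y/n)", "y"),
        ("Did World War I begin before World War II? (y/n)", "y"),
    ]

    correct = 0
    confidences = []
    for text, answer in questions:
        guess = input(text + " ").strip().lower()
        conf = float(input("Confidence in percent (50-100)? "))
        confidences.append(conf)
        if guess == answer:
            correct += 1

    print(f"Average confidence: {sum(confidences) / len(confidences):.0f}%")
    print(f"Actual hit rate:    {100 * correct / len(questions):.0f}%")
    # Overconfidence shows up as average confidence well above the hit rate.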
7 · Gleb_Tsipursky · 8y
Nice idea! Thanks for the suggestion. Maybe also a Caplan Test.
2 · PipFoweraker · 8y
I'll second the suggestion of introducing people to overconfidence early on, because (hopefully) it leads to a more questioning mindset. I would note that the otherwise-awesome Adventures in Cognitive Biases' calibration is heavily geared towards a particular geographic demographic, and several of my peers whom I've introduced it to were a little put off by it; so consider encouraging them to stick through the calibration into the meatier subject matter of the Adventure itself.
2 · Gleb_Tsipursky · 8y
Thanks!
2 · query · 8y
Effectively no. I understand that you're aware of these risks and are able to list mitigating arguments, but the weight of those arguments does not resolve my worries. The things you've just said aren't different in gestalt from what I've read from you. To be potentially more helpful, here are a few ways the arguments you just made fall flat for me:

Connectivity to the rationalist movement or "rationality" keyword isn't necessary to immunize people against the ideas. You're right that if you literally never use the word "bias" then it's unlikely my nightmare imaginary conversational partner will have a strong triggered response against the word "bias", but if they respond the same way to the phrase "thinking errors", or realize at some point that's the concept I'm talking about, it's the same pitfall. And in terms of catalyzing opposition, there is enough connectivity for motivated antagonists to make such connections and use every deviation from perfection as ammunition against even fully correct forms of good ideas.

I can't find any discussion in the linked article about why research is a key way of validating truth claims; did you link the correct article? I also don't know if I understand what you're trying to say; to reflect back, are you saying something like "People first need to be convinced that scientific studies are of value, before we can teach them why scientific studies are of value."? I ... don't know about that, but I won't critique that position here since I may not be understanding.

You seem to be saying that since the writing is of the form needed to get on Lifehack, and since in fact people are reading it on Lifehack, they will then not suffer from any memetic immunization via the ideas. First, not all immunization is via negative reactions; many people think science is great, but have no idea how to do science. Such people can be in a sense immunized from learning to understand the process; their curiosity is already sated, and their deci

use every deviation from perfection as ammunition against even fully correct forms of good ideas.

As a professional educator and communicator, I have deep visceral experience of how "fully correct forms of good ideas" are inherently incompatible with bridging the inferential distance between the ordinary Lifehack reader and the kind of thinking space found on Less Wrong. Believe me, I have tried to explain more complex ideas from rationality to students many times. Moreover, I have tried to get more complex articles into Lifehack and elsewhere many times. They have all been rejected.

This is why it's not possible for the lay audience to read scientific papers, or even the Sequences. This is why we have to digest the material for them, and present it in sugar-coated pills.

To be clear, I am not speaking of talking down to audiences. I like sugar-coated pills myself when I take medicine. To use an example related to knowledge, when I am offered information on a new subject, I first have to be motivated to want to engage with the topic, then learn the basic broad generalities, and only then go on to learn more complex things that represent the "fully correct forms...

3 · query · 8y
EDIT: On reflection, I want to tap out of this conversation. Thanks for the responses.
5 · Tem42 · 8y
I would argue that your first and third points are not very strong. I think that it is not useful to protect an idea so that it is only presented in its 'cool' form. A lot of harm is done by people presenting good ideas badly, and we don't want to do any active harm, but at the same time, the more ways and the more times that an idea is adequately expressed, the more likely that idea will be remembered and understood. People who are not used to thinking in strict terms are more likely to be receptive to intuition pumps and frequent reminders of the framework (evidence-based everything). Getting people into the right mindset is half the battle.

I do, however, agree with your second point, strongly. It is very hard to get people to actually care about evidence, and most people would not click through to formal studies; even fewer would read them. Those who would read them are probably motivated enough to Google for information themselves. But actually checking the evidence is so central to rationality that we should always remind new potential rationalists that claims are based on strong research. If clickbait sites are prone to edit out that sort of reference, we should link to articles that are more reader-friendly but do cite (and if possible, link to) supporting studies. This sort of link is triple plus good: it means that the reader can see the idea in another writer's words; it introduces them to a new, less clickbaity site that is likely to be good for future reading; and, of course, it gives access to sources.

I think that one function that future articles of this sort should focus on as a central goal is to subtly introduce readers to more and better sites for more and better reading. However, the primary goal should remain an intro-level introduction to useful concepts, and intro level means, unfortunately, presenting these ideas in weakened forms.
6 · Gleb_Tsipursky · 8y
Agreed on presenting them via intro-level means, so that there is less of an inferential gap. Good idea on subtly introducing readers to more and better sites for further and better reading; updating on this to do so more often in my articles. Thanks!
4 · Evan_Gaensbauer · 8y
This comment captures my intuitions well. Thanks for writing this.

It's weird for me, because when I wear my effective altruism hat, I think what Gleb is doing is great, because marketing effective altruism seems like it would only drive more donations to effective charities, while not depriving them of money or hurting their reputations if people become indifferent to the Intentional Insights project. This seems to be the consensus reaction to Gleb's work on the Effective Altruism Forum. Of course, effective altruism is sometimes more concerned with only the object-level impact that's easy to measure, e.g., donations, rather than subtler effects down the pipe, like cumulatively changing how people think over the course of multiple years. Whether that's a good or ill effect is a judgment I'll leave for you.

On the other hand, when I put on my rationality community hat, I feel the same way about Gleb's work as you do. It's uncomfortable for me because I realize I have perhaps contradicting motivations in assessing Intentional Insights.
7 · Gleb_Tsipursky · 8y
An important way I think about my work in the rationality sphere is cognitive altruism. In a way, it's not different from effective altruism. When promoting effective giving, I encourage people to think rationally about their giving. I pose to them the question of how (and whether) they currently think about their goals in giving, the impact of their giving, and the quality of the charities to which they give, encouraging them to use research-based evaluations from GiveWell, TLYCS, etc. The result is that they give to effective charities.

Similarly, I encourage people to think rationally about their life and goals in my promotion of rationality. The result is that they make better decisions about their lives and are more capable of meeting their goals, including being more long-term oriented and thus fighting the Moloch problem. For example, here is what one person got out of my book on finding meaning and purpose by orienting toward one's long-term goals. He is now dedicated to focusing his life on helping other people have a good life, in effect orienting toward altruism.

In both cases, I take the rational approach of using methods from content marketing that have been shown to work effectively in meeting the goals of spreading complex information to broad audiences. It's not different in principle. I'm curious whether this information helps you update one way or another in your assessment of Intentional Insights.
2 · [anonymous] · 8y
My immediate reaction was to disagree. I think most people don't listen to arguments from authority often enough, not too often. So I decided to search "arguments from authority" on LessWrong, and the first thing I came to was this article by Anna Salamon. She then suggests separating out knowledge you have personally verified from arguments-from-authority knowledge to avoid groupthink, but this doesn't seem to me to be a viable method for the majority of people. I'm not sure it matters if non-experts engage in groupthink if they're following the views of experts who don't engage in groupthink.

Skimming the comments, I find that the response to AnnaSalamon's article was very positive, but the response to your opposite argument in this instance also seems to be very positive. In particular, AnnaSalamon argues that the share of knowledge which most people can or should personally verify is tiny relative to what they should learn. I agree with her view. While I recognize that there are different people responding to AnnaSalamon's comments than the ones responding to your comments, I fear that this may be a case of many members of LessWrong interpreting arguments based on presentation or circumstance rather than on their individual merits.

My main update from this discussion has been a strong positive update about Gleb Tsipursky's character. I've been generally impressed by his ability to stay positive even in the face of criticism, and to continue seeking feedback for improving his approaches.

7 · Raelifin · 8y
I just wanted to interject a comment here as someone who is friends with Gleb in meatspace (we're both organizers of the local meetup). In my experience Gleb is kinda spooky in the way he actually updates his behavior and thoughts in response to information. Like, if he is genuinely convinced that the person who is criticizing him is doing so out of a desire to help make the world a more-sane place (a desire he shares) then he'll treat them like a friend instead of a foe. If he thinks that writing at a lower level than most rationality content is currently written will help make the world a better place, he'll actually go and do it, even if it feels weird or unpleasant to him.

I'm probably biased in that he's my friend. He certainly struggles with it sometimes, and fails too. Critical scrutiny is important, and I'm really glad that Viliam made this thread, but it kinda breaks my heart that this spirit of actually taking ideas seriously has led to Gleb getting as much hate as it has. If he'd done the status-quo thing and stuck to approved activities it would've been emotionally easier. (And yes, Gleb, I know that we're not optimizing for warm-fuzzies. It still sucks sometimes.)

Anyway, I guess I just wanted to put in my two (biased) cents that Gleb's a really cool guy, and any appearance of a status-hungry manipulator is just because he's being agent-y towards good ends and willing to get his hands dirty along the way.
4 · Gleb_Tsipursky · 8y
Yeah, we're not optimizing for warm-fuzzies from Less Wrongers, but for a broad impact. Thanks for the sympathetic words, my friend. This road of effective cognitive altruism is a hard one to travel, neither being really appreciated, at least at first, by the ones who we are trying to reach, nor by the ones among our peers whose ideas we are bringing to the masses. Well, if my liver gets consumed daily by vultures, this is the road I've chosen. Glad to have you by my side, and hope this doesn't rebound on you much.
-7 · Lumifer · 8y
0 · ChristianKl · 8y
I'm not sure whether that's a good idea. Writing that feels weird to the author is also going to transmit that vibe to the audience. We don't want rationality to be associated with feeling weird and unpleasant.
1 · Lumifer · 8y
/thinks about empty train tracks and open barn doors... :-/
-1 · Lumifer · 8y
I can't speak for other people, of course, but he never looked much like a manipulator. He looks like a guy who has no clue. He doesn't understand marketing (or propaganda), the fine-tuned practice of manipulating people's minds for fun and profit. He decided he needs to go downmarket to save the souls drowning in ignorance, but all he succeeded in doing -- and it's actually quite impressive, I don't think I'm capable of it -- is learning to write texts which cause visceral disgust. Notice the terms in which people speak of his attempts. It's not "has a lot of rough edges", it's slime and spiders in human skin and "painful" and all that. Gleb's writing does reach System I, but the effect has the wrong sign.
2 · Raelifin · 8y
Ah, perhaps I misunderstood the negative perception. It sounds like you see him as incompetent, and since he's working with a subject that you care about, that registers as disgusting? I can understand cringing at the content. Some of it registers that way to me, too. I think Gleb's admitted that he's still working to improve. I won't bother copy-pasting the argument that's been made elsewhere in the thread that the target audience has different tastes. It may be the case that InIn's content is garbage. I guess I just wanted to step in and second jsteinhardt's comment that Gleb is very growth-oriented and positive, regardless of whether his writing is good enough.
-3 · Lumifer · 8y
Not only that -- let me again stress the point that his texts cause the "Ewwww" reaction, not "Oh, this is dumb". The slime-and-snake-oil feeling would still be there even if he were writing in the same way about, say, the ballet in China. As to "positive", IlyaShpitser mentioned chutzpah which I think is a better description :-/
-3 · OrphanWilde · 8y
Yes. That's what the status quo is, and how it works. More, there are multiple levels of reasons for its existence, and the implicit suggestion that sticking to the status quo would be a tragedy neglects those reasons in favor of romantic notions of fixing the world.
1 · Tem42 · 8y
Call me a helpless romantic, but LessWrong is supposed to have a better status quo.
7 · Gleb_Tsipursky · 8y
Yeah, totally agreed. The point of LessWrong, to me at least, is to improve the status quo, and keep improving.
1 · OrphanWilde · 8y
Deep wisdom.
6 · Gleb_Tsipursky · 8y
Thank you, I really appreciate it! I try to stay positive and seek optimizing opportunities :-)

In writing this I considered the virtue of silence, and decided to voice something explicitly.

If rationality is ready for outreach, it should be doing it in as bulletproof a way as possible.

Before today I hadn't read deeply into the articles published by Gleb. Owing to this comment:

http://lesswrong.com/lw/mze/marketing_rationality/cwki

and

http://lesswrong.com/lw/mz4/link_lifehack_article_promoting_lesswrong/cw8n

I explicitly just read a handful of Gleb's articles. Prior to this I had just avoided getting in his way (virtue of silence - avoiding reading means avoiding being critical and avoiding judging someone who is trying to make progress).

These here (to be clear):

...

If rationality is ready for outreach, it should be doing it in as bulletproof a way as possible.

Why?

Now that we know that Newtonian physics was wrong, and Einstein was right, would you support my project to build a time machine, travel to the past, and assassinate Newton? I mean, it would prevent incorrect physics from being spread around. It would make Einstein's theory more acceptable later; no one would criticize him for being different from Newton.

Okay, I don't really know how to build a time machine. Maybe we could just go burn some elementary-school textbooks, because they often contain too simplified information. Sometimes with silly pictures!

Seems to me that I often see the sentiment that we should raise people from some imaginary level 1 directly to level 3, without going through level 2 first, because... well, because level 3 is better than level 2, obviously. And if those people perhaps can't make the jump, I guess they simply were not meant to be helped.

This is why I wrote about "the low-hanging fruit that most rationalists wouldn't even touch for... let's admit it... status reasons". We are (or imagine ourselves to be) at level 3, and all levels below us ar...

Let's start with a false statement from one of Gleb's articles:

Intuitively, we feel our mind to be a cohesive whole, and perceive ourselves as intentional and rational thinkers. Yet cognitive science research shows that in reality, the intentional part of our mind is like a little rider on top of a huge elephant of emotions and intuitions. This is why researchers frequently divide our mental processes into two different systems of dealing with information, the intentional system and the autopilot system.

What's false? Researchers don't use the terms "intentional system" and "autopilot system".

Why is that a problem? Aren't the terms near enough to System 1 and System 2? A person who's interested might want to read additional literature on the subject. The fact that the terms Gleb invented don't match the existing literature means that it's harder for a person to go from reading Gleb's articles to reading higher-level material.

If the person digs deeper they will sooner or later run into trouble. They might have a conversation with a genuine neuroscientist, talk about the "intentional system" and "autopilot system", and find that th...

3 · hairyfigment · 8y
I agree with much of this, but that quote isn't a false claim. It does not (quite) say that researchers use the terms "intentional system" and "autopilot system", which seem like sensible English descriptions if for some bizarre reason you can't use the shorter names. Now, I don't know why anyone would avoid the scholarly names when for once those make sense - but I've also never tried to write an article for Lifehack. What is your credence for the explanation you give, considering that e.g. the audience may remember reading about many poorly-supported systems with levels numbered I and II - seeing a difference between that and the recognition that humans evolved may be easier for some than evaluating journal citations.
4 · ChristianKl · 8y
Kahneman's motivation for using "System 1" and "System 2" isn't to have shorter names. It's that there are existing conceptions among people about words describing mental concepts, and he doesn't want to invoke them. Wikipedia lists, from Kahneman:

Emotional/logical is a different distinction than intentional/autopilot. Trained people can switch emotions on and off via their intentions, and the process has little to do with being logical or calculating.

But even giving the systems new names that scientists don't use might be a valid move. If you do that, then you should be open about the fact that you invented new names. Given science's public nature, I also think that you should be open about why you chose certain terms, and choosing new terms should come with an explanation of why you prefer them over the alternatives. The reason shouldn't be that your organisation is named "Intentional Insights" and that's why you call it the "intentional system". Again, that pattern leads to the "rationality is about using System 2 instead of System 1" position, which differs from the CFAR position.

In Gleb's own summary of Thinking, Fast and Slow he writes:

Given that in Kahneman's framework intentions are generated by System 1, calling System 2 the "intentional system" produces problems.

Explanations don't have credences, predictions do. If you specify a prediction I can give you my credence for it.
3 · gjm · 8y
It might be worth correcting "Greb" and "Greg" to "Gleb" in that, to forestall confusion.
0 · ChristianKl · 8y
Thanks.
5 · Vaniver · 8y
Did you ever read about Feynman's experience reading science textbooks for elementary school? (It's available online here.) There are good and bad ways to simplify. Sure, there are people I'd rather not join the LessWrong community for status reasons. But I don't think the resistance here is about status instead of methodology. Yes, it would be nice to have organizations devoted to helping people get from level 1 to level 2, but if you were closing your eyes and designing such an organization, would it look like this?
3 · OrphanWilde · 8y
(Both agreeing with and refining your position, and directed less to you than to the audience):

Personally, I'm at level 21, and I'm trying to raise the rest of you to my level. Now, before you take that as a serious statement, ask yourself how you feel about that proposition, and how inclined you would be to take anything I said seriously if I actually believed that. Think about to what extent I behave like I -do- believe that, and how that changes the way what I say is perceived.

http://lesswrong.com/lw/m70/visions_and_mirages_the_sunk_cost_dilemma/ <- This post, and pretty much all of my comments, had reasonably high upvotes before I revealed what I was up to. Now, I'm not going to say it didn't deserve to get downvoted - I learned a lot from that post that I should have known going into it - but I'd like to point out the fundamental similarities, but scaled up a level, between what I do there and typical rationalist "education": "Here's a thing. It was a trick! Look at how easily I tricked you! You should now listen to what I say about how to avoid getting tricked in the future." Worse, cognitive dissonance will make it harder to fix that weakness in the future.

As I said, I learned a -lot- in that post; I tried to shove at least four levels of plots and education into it, and instead turned people off with the first or second one. I hope I taught people something, but in retrospect, and far removed from it, I think it was probably a complete and total failure which mostly served to alienate people from the lessons I was attempting to impart.

The first step to making stupid people slightly less stupid is to make them realize the way in which they're stupid in the first place, so that they become willing to fix it. But you can't do that, because, obviously, people really dislike being told they're stupid. Because there are some issues inherent in approaching other people with the assumption that they're less than you, and that they should accept your help in ra
5 · Gleb_Tsipursky · 8y
This is a pretty confusing point. I have plenty of articles where I admit my failures and discuss how I learned to succeed. Secondly, I have only started publishing on Lifehack - published 3 so far - and my articles way outperform the average, which is under 1K shares. This is the average for experienced and non-experienced writers alike. My articles have all been shared over 1K times, and some twice as much if not more. The fact that they are shared so widely is demonstrable evidence that I understand my audience and engage it well. BTW, curious whether any of these discussions have caused you to update on any of your claims to any extent?
4 · OrphanWilde · 8y
I now assign negligible odds to the possibility that you're a sociopath (used as a shorthand for any of a number of hostile personality disorders) masquerading as a normal person masquerading as a sociopath, and somewhat lower odds on you being a sociopath outright, with the majority of assigned probability concentrating on "normal person masquerading as sociopath" now. (Whether that's how you would describe what you do or not, that's how I would describe it, because the way you write lights up my "Predator" alarm board like a nearby nuke lights up a "Check Engine" light.) Demonstrable evidence that you do so better than average isn't the same as demonstrable evidence that you do so well.
5 · Gleb_Tsipursky · 8y
Thanks for sharing about your updating! I am indeed a normal person, and have to put a lot of effort into this style of writing for the sake of what I perceive as a beneficial outcome. I personally have updated away from you trolling me, and see you as more engaged in a genuine debate and discussion. I see we have vastly different views on the methods of getting there, but we do seem to have broadly shared goals. Fair enough on different interpretations of the word "well." As I said, my articles have done twice as well as the average for Lifehack articles, so we can both agree that this is demonstrable evidence of a significant and above-average level of competency in an area where I am just starting - 3 articles so far - although the term "well" is more fuzzy.
5 · Vaniver · 8y
Mmm. I typically dislike framings where A teaches B, instead of framings where B learns from A. The Sequences certainly tried to teach humility, and some of us learned humility from The Sequences. I mean, it's right there in the name that one is trying to asymptotically remove wrongness. The main failing, if you want to put it that way, is that this is an online text and discussion forum, rather than a dojo. Eliezer doesn't give people gold stars that say "yep, you got the humility part down," and unsurprisingly people are not as good at determining that themselves as they'd like to be.
2 · OrphanWilde · 8y
Then perhaps you've framed the problem you're trying to solve in this thread wrong. [ETA: Whoops. Thought I was talking to Viliam. This makes less-than-sense directed to you.] I don't think that humility can be taught in this sense, only earned through making crucial mistakes, over and over again. Eliezer learned humility through making mistakes, mistakes he learned from; the practice of teaching rationality is the practice of having students skip those mistakes. He shouldn't, even if he could.
6 · Vaniver · 8y
Oh, I definitely agree with you that trying to teach rationality to others to fix them, instead of providing a resource for interested people to learn rationality, is deeply mistaken. Where I disagree with you is the (implicit?) claim that the Sequences were written to teach instead of being a resource for learning. Mmm. I favor Bismarck on this front. It certainly helps if the mistakes are yours, but they don't have to be. I also think it helps to emphasize the possibility of learning sooner rather than later; to abort mistakes as soon as they're noticed, rather than when it's no longer possible to maintain them.
4 · OrphanWilde · 8y
Ah! My apologies. Thought I was talking to Viliam. My responses may have made less than perfect sense. You can learn from mistakes, but you don't learn what it feels like to make mistakes (which is to say, exactly the same as making the right decision). That's where humility is important, and where the experience of having made mistakes helps. Making mistakes doesn't feel any different from not making mistakes. There's a sense that I wouldn't make that mistake, once warned about it - and thinking you won't make a mistake is itself a mistake, quite obviously. Less obviously, thinking you will make mistakes, but that you'll necessarily notice them, is also a mistake.
1 · Lumifer · 8y
The solution to the meta-level confusion (it's turtles all the way down, anyway) is to spend a few years building up an immunity to iocane powder.
8 · Gleb_Tsipursky · 8y
I address the concerns about the writing style and content in my just-written comment here. Let me know your thoughts about whether that helps address your concerns. Regarding clickbait and sharing, let's actually evaluate the baseline. I want to highlight that 2K is quite a bit higher than the average for a Lifehack article. A typical article does not rise above 1K, and that's considered pretty good. So my articles have done really well by comparison to other Lifehack articles. Since that's the baseline, I'm pretty happy with where the sharing is. Why would you be disheartened if I stopped what I was trying to do? EDIT: Also forgot to add that some of the articles you listed were not written by me but by another aspiring rationalist, so FYI.
3 · Elo · 8y
No, that does not answer the issues I raised. I am now going to take apart this article: www.intentionalinsights.org/7-surprising-science-based-hacks-to-build-your-willpower

7 Surprising Science-Based Hacks To Build Your Willpower

Tempted by that second doughnut? Struggling to resist checking your phone? Shopping impulsively on Amazon? Slacking off by reading BuzzFeed instead of doing work? What you need is more willpower! Recent research shows that strengthening willpower is the real secret to the kind of self-control that can help you resist temptations and achieve your goals. The great news is that scientists say strengthening your willpower is not as hard as you might think. Here are 7 research-based hacks to strengthen your willpower!

1. Smile :-)

Smiling and other mood-lifting activities help improve willpower. In a recent study, scientists first drained the willpower of participants through having them resist temptation. Then, for one group, they took steps to lift people's moods, such as giving them unexpected gifts or showing them a funny video. For another group, they just let them rest. Compared to people who just rested for a brief period, those whose moods were improved did significantly better in resisting temptation later! So next time you need to resist temptation, improve your mood! Smile or laugh, watch a funny video or two.

2. Clench Your Fist

Clench your fists or partake in another type of activity where you exercise self-control. Studies say that exercising self-control in any physical domain causes you to become more disciplined in other facets of life. So do whatever works for you to exercise self-control when you are trying to fight temptations: clench your fist, squeeze your eyes shut, or you can even hold in your pee, just like UK Prime Minister David Cameron.

3. Meditate

Photo Credit: Gleb Tsipursky meditating in the park

Meditation is great for a lot of things – reducing stress, increasing focus, managing emotions. Now re
8 · Kaj_Sotala · 8y
On what basis? It matches my experience, something similar has been discussed on LW before, and it would seem to match various theoretical considerations about human psychology.

This seems like a very strong and mostly unjustified claim. E.g. even something like the Getting Things Done system, which works very well for lots and lots of people and has been covered and promoted in numerous places, is still something that's relatively unknown outside the kinds of circles where people are interested in this kind of thing. A lot of people in the industrialized world could benefit from it, but most haven't even tried it. Ideas spread slowly, and often at a rate that's only weakly correlated with their usefulness.
5 · ChristianKl · 8y
Given that you didn't address the following, let me address it: to me, that raises a harmful-untruth flag. Roy Baumeister suggests that meals help with willpower through glucose. To me the claim that it's protein that builds willpower looks unsubstantiated. It's certainly not backed up by the Israeli judges. Where does the harm come into play? I understand the nutritional consensus to be that most people eat meals with too much protein. Nutrition science is often wrong, but that means one should be careful about advising people to raise the protein content of their meals.
0 · entirelyuseless · 8y
The nutritional consensus is also not about optimizing willpower. I would be somewhat skeptical of the claim that the willpower-optimizing meal just luckily happens to be identical to the health-optimizing meal.
1 · ChristianKl · 8y
I haven't made that claim.
2 · entirelyuseless · 8y
In the sense that you didn't make it, neither did I say that you did.
4 · ChristianKl · 8y
My argument is about two issues: 1) There's no reason to believe that protein increases willpower. 2) If you tell people a lie to make them improve their diet, it's at least defensible if they end up healthier as a result. If your lie instead makes them eat a less healthy diet, you really screwed up. Apart from that, I don't believe that eating glucose directly to increase your willpower is a good idea or healthy.
3 · Gleb_Tsipursky · 8y
Why not helpful? I speak in the tone of listicle articles reluctantly, as I wrote above. It's icky, but necessary to get past the editors at Lifehack and elsewhere. Actually, it is linked to. You can check out the article for the link, but here is the link itself if you're curious: www.albany.edu/~muraven/publications/promotion files/articles/tice et al, 2007.pdf This I just don't get. If experiments say you should watch a funny video, and they do, as the link above states, why is this not-even-wrong territory?

Thank you for bringing this up as a topic of discussion! I'm really interested to see what the Less Wrong community has to say about this.

Let me be clear that my goal, and that of Intentional Insights as a whole, is to raise the sanity waterline. We do not assume that all who engage with our content will get to the level of being aspiring rationalists who can participate actively on Less Wrong. This is not to say that it doesn't happen; in fact some members of our audience have already started to do so, such as Ella. Others are right now reading the Sequences and passively lurking without actively engaging.

I want to add a bit more about the Intentional Insights approach to raising the sanity waterline broadly.

The social media channel of raising the sanity waterline is only one area of our work. The goal of that channel is to use the strategies of online marketing and the language of self-improvement to get rationality spread broadly through engaging articles. To be concrete and specific, here is an example of one such article: "6 Science-Based Hacks for Growing Mentally Stronger." BTW, editors are usually the ones who write the headline, so I can't...

6 · MrMind · 8y
I'm curious: do you use unified software for tracking the impact of articles through the chain?
9 · Gleb_Tsipursky · 8y
For how many times the article itself was shared, Lifehack displays that prominently on their website. Then we use Google Analytics, which gives us information on how many people visited our website from Lifehack itself. We can't track them further than that. If you have ideas about how to track them further, especially using free software, I'd be interested in learning about that!
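A minimal sketch of one free option for the "track them further" problem -- tagging the links embedded in each guest article with UTM parameters, which Google Analytics attributes to campaigns automatically. The function and the example values are illustrative, not anything InIn actually uses:

    from urllib.parse import urlencode

    def tag_link(base_url, source, medium, campaign):
        # Standard UTM query parameters; Google Analytics groups incoming
        # visits by these, so each article's links can be told apart.
        params = urlencode({
            "utm_source": source,
            "utm_medium": medium,
            "utm_campaign": campaign,
        })
        return f"{base_url}?{params}"

    # Hypothetical example: a link to embed in one specific guest article.
    print(tag_link("http://www.intentionalinsights.org",
                   "lifehack", "guest-article", "willpower-hacks"))

This distinguishes visitors per article rather than per referring site, which is one step further than referrer data alone allows.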
2 · OrphanWilde · 8y
Ahem: It's quite rude to downvote Vaniver even as you respond to him. Especially -twice-.
4 · Gleb_Tsipursky · 8y
I thought his comments were not worth attention, following the general guidelines here.
5 · Vaniver · 8y
Are you encouraging or discouraging me to elaborate?
3 · Gleb_Tsipursky · 8y
I thought your original comments were not helpful for readers to gain useful information. I am encouraging you to elaborate and hope you will give a clear explanation of your position when you post.

I was insufficiently clear: that was a question about your model of my motivation, not what you want my motivation to be. You can say you want to hear more, but if you act against people saying things, which do you expect to have more impact?

But in the spirit of kindness I will write a longer response.


This subject is difficult to talk about because your support here is tepid and reluctant at best, and your detractors are polite.

Now, you might look at OrphanWilde or Clarity and say "you call that polite?"--no, I don't. Those are the only people willing to break politeness and voice their lack of approval in detail. This anecdote about people talking in the quiet car comes to mind; lots of people look at something and realize "this is a problem" but only a few decide it's worth the cost to speak up about it. Disproportionately, those are going to be people who feel the cost less strongly.

There's a related common knowledge point--I might think this is likely net negative, but I don't know how many other people think this is a likely net negative. Only if I know that lots of people think this is a likely net negative, and that they are also aware that this is the ...

4 · Gleb_Tsipursky · 8y
Thank you for actually engaging with the content.

The same effect works if people think this is a net positive. Furthermore, Less Wrong is a quite critical community, with people much more likely to provide criticism than support, as the latter wins fewer social status points. This is not to cast aspersions on the community at all - there's a reason I participate actively. I like being challenged and updating my beliefs. But let's be honest, this is a community of challenge and debate, not warm fuzzies and kumbayah.

Now let's get to the meat of the matter. I agree that it would not be nice if more of the broader population came to LW; the inferential gap would be way too big, and Endless September sucks. I discuss more in my comment here how that is not the goal I am pursuing, together with other InIn participants. The goal is simply to convey clearer-thinking techniques effectively to the broad audience and raise the sanity waterline. A select few, likely those with a significantly high IQ but a lack of sufficient education about how their minds work, can go on to LW, as that comment describes.

I am confused by this comment. If I didn't understand my audience, how come my articles are so successful with them? Believe me, I have extensively researched the audiences there, and how to engage them well. You fail at my mind if you think my writing would be only engaging to college professors.

And please consider who you are talking to when you discuss writing advice. I have read many books about writing, and taught writing as part of my college teaching. As proof, here is evidence. I have only started publishing on Lifehack - published 3 so far - and my articles way outperform the average, which is under 1K shares. This is the average for experienced and non-experienced writers alike. My articles have all been shared over 1K times, and some twice as much if not more. The fact that they are shared so widely is demonstrable evidence that I understand my aud
2 · Vaniver · 8y
You're welcome! Thank you for continuing to be polite. I was already aware of how many times your articles have been shared. I would not base my judgment of a painter's skill with the brush on how many books on painting they had read.
4 · Gleb_Tsipursky · 8y
I guess the metaphor I would take for the painter is how many of her paintings have sold. That's the appropriate metaphor for how many times the articles were shared. If the painter's goal is to sell paintings with specific content - as my goal is to have articles with specific content, not typically read by an ordinary person, shared widely - then wide sharing of the articles indicates success.
3 · Lumifer · 8y
+1
3 · Gleb_Tsipursky · 8y
I thought upvotes were used for that purpose.
0 · MalcolmOcean · 8y
By design, upvotes don't show public approval. Commenting +1 does.
4 · Gleb_Tsipursky · 8y
Ah, good point
-5 · Lumifer · 8y
-5 · OrphanWilde · 8y

I'll talk about marketing, actually, because part of the problem is that, bluntly, most of you are kind of inept in this department. By "kind of" I mean "have no idea what you're talking about but are smarter than marketers and it can't be nearly that complex so you're going to talk about it anyways".

Clickbait has come up a few times. The problem is that that isn't marketing, at least not in the sense that people here seem to think. If you're all for promoting marketing, quit promoting shit marketing because your ego is entangled in complex ways with the idea and you feel you have to defend that clickbait.

GEICO has good marketing, which doesn't sell you on their product at all. Indeed, the most prominent "marketing" element of their marketing - the "Saves you 15% or more" bit - mostly serves to distract you from the real marketing, which utilizes the halo effect, among other things, to get you to feel positively about them. (Name recognition, too.) The best elements of their marketing don't get noticed as marketing, indeed don't get noticed at all.

The issue with this entire conversation is that everybody seems to think marketing is noti...

Not sure if it makes any difference, but instead of "stupid people" I think of the people reading articles about 'life hacking' as "people who will probably get little benefit from the advice, because they will most likely immediately read a hundred more articles and never apply the advice"; and also the format of the advice completely ignores inferential distances, so pretty much the only useful thing such an article can give you is a link to a place that provides the real value. And if you are really really lucky, you will notice the link, follow the link, stay there, and get some of the value.

If I believed the readers were literally stupid, then of course I wouldn't see much value in advertising LW to them. LW is not useful for stupid people, but can be useful to people... uhm... like I used to be before I found LW.

Which means, I used to spend a lot of time browsing random internet pages, a few times I found a link to some LW article that I read and moved on, and only after some time I realized: "Oh, I have already found a few interesting articles on the same website. Maybe instead of randomly browsing the web, reading this one website systematica...

8 · Gleb_Tsipursky · 8y
Indeed, the people who read one of our articles, for example the Lifehack article, are not inherently stupid. They have that urge for self-improvement that all of us here on Less Wrong have. They just have way less education and access to information, and also of course different tastes, preferences, and skills. Moreover, the inferential gap is huge, as you correctly note. The question is what people will do: will they actually follow the links to get more deep engagement?

Let's take the Lifehack article as an example to describe our broader model, which assumes that once people check out our content on other websites and venues, some will then visit the Intentional Insights website to engage with its content. After the Lifehack article on 6 Science-Based Hacks for Growing Mentally Stronger appeared, it was shared over 2K times on social media, so it probably had views in the tens of thousands if not hundreds of thousands. Then, over 1K people visited the Intentional Insights website directly from the Lifehack website. In other words, they were interested enough to not only skim the article, but also follow the links to Intentional Insights, which was listed in my bio and elsewhere.

Of those, some will want to engage with our content further. As an example, we had a large wave of new people follow us on Facebook and other social media and subscribe to our newsletter in the week after the article came out. I can't say how many did so as a result of seeing the article rather than other factors, but there was a large bump. So there is evidence of people wanting to get more thoroughly engaged. The articles are meant to provide a gateway, in other words. And there is evidence of people following the breadcrumbs. Eventually, after they receive enough education, we will introduce them to ClearerThinking, CFAR, and LW. We are careful to avoid Endless September scenarios by not explicitly promoting Less Wrong heavily. For more on our strategy, see my comment below.
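As a back-of-the-envelope check on that funnel (a sketch: the view count is a hypothetical midpoint of "tens of thousands"; only the share and visit figures come from the comment above):

    shares = 2_000        # social media shares of the Lifehack article
    views = 30_000        # hypothetical midpoint of "tens of thousands"
    site_visits = 1_000   # "over 1K people visited" the InIn site from Lifehack
    print(f"Implied views per share:       {views / shares:.0f}")
    print(f"Article-to-site click-through: {site_visits / views:.1%}")

Under that assumed view count, the article-to-site click-through is on the order of a few percent, which is the step the gateway model depends on.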
2 · Lumifer · 8y
Not that I belong to his target demographic, but his articles would make me cringe and rapidly run in the other direction.
8 · Gleb_Tsipursky · 8y
They are intended not to appeal to you, and that's the point :-) If something feels cognitively easy to you and does not make you cringe at how low-level it is, then you are not the target audience. Similarly, you are not the target audience if something is overwhelming for you to read. Try to read them from the perspective of someone who does not know about rationality. A sample of evidence: this article was shared over 2K times by its readers, which means that tens of thousands of people, if not more, read it.
-1 · Lumifer · 8y
I don't cringe at the level. I cringe at the slimy feel and the strong smell of snake oil.
6 · Tem42 · 8y
It might be useful to identify what exactly trips your snake-oil sensors here. Mine were tripped when it claimed to be science-based but referenced no research papers, but other than that it looked okay to me. Unless you mean simply that the site it is posted on smells of snake oil. In that case I agree, but at the same time, so what? The people who read articles on that site don't smell snake oil, whether they should or not. If the site provides its own filter for its audience, that only makes it easier for us to present more highly targeted cognitive altruism.
4 · Gleb_Tsipursky · 8y
To clarify about the science-based point, I tried to put in links to research papers, but unfortunately the editors cut most of them out. I was able to link to one peer-reviewed book, but the rest of the links had to be to other articles that contained research, such as this one from Intentional Insights itself. Yup, very much agreed on the point of the site smelling like snake oil, and this enabling highly targeted cognitive altruism.
-3 · Lumifer · 8y
The overwhelming stench trips them. This stuff can't be edited to make it better, it can only be dumped and completely rewritten from scratch. Fisking it is useless.
4 · Gleb_Tsipursky · 8y
Yup, I hear you. I cringed at that when I was learning how to write that way, too. You can't believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It's very ughy. However, having calculated the trade-offs and done a Bayesian-style analysis combined with a MAUT, it seems that the negative feelings we at InIn get, and mostly me at this point as others are not yet writing these types of articles for fear of this kind of backlash, are worth the rewards of raising the sanity waterline of people who read those types of websites.
4 · Lumifer · 8y
So, why do you think this is necessary? Do you believe that proles have an unyielding "tits or GTFO" mindset, so you have to provide tits in order to be heard? That ideas won't go down their throats unless liberally coated in slime? It may look to you like you're raising the waterline, but from the outside it looks like all you're doing is contributing to the shit tsunami. I think "revulsion" is a better word. Wasn't there a Russian intellectual fad, around the end of the XIX century, about "going to the people" and "becoming of the people" and "teaching the people"? I don't think it ended well. How do you know? What do you measure that tells you you are actually raising the sanity waterline?
3 · Gleb_Tsipursky · 8y
Look, we can choose to wall ourselves off from the shit tsunami out there, and stay in our safe Less Wrong corner. Or we can try to go into the shit tsunami, provide stuff that's less shitty than what people are used to consuming, and then slowly build them up. That's the purpose of Intentional Insights - to reach out and build people up to grow more rational over time. You don't have to be the one doing it, of course. I'm doing it. Others are doing it. But do you think it's better to improve the shit tsunami, or to stick our fingers in our ears and pretend it's not there and do nothing about it? I think it's better to improve the shit tsunami of Lifehack and other such sites. The measures we use, the methods we decided on, and our reasoning behind them are described in my comment here.
1 · Lumifer · 8y
Well, first of all I can perfectly well stay out of the shit tsunami even without hiding in the LW corner. The world does not consist of two parts only: LW and shit. Second, you contribute to the shit tsunami, the stuff you provide is not less shitty. It is exactly what the tsunami consists of. The problem is not with the purpose. The problem is with what you are doing. Contributing your personal shit to the tsunami does not improve it. You measure, basically, impressions -- clicks and eyeballs. That tells you whether the stuff you put out gets noticed. It does not tell you whether that stuff raises the sanity waterline. So I repeat: how do you know?
0 · Gleb_Tsipursky · 8y
Do you truly believe the article I wrote was no less shitty than the typical Lifehack article, for example this article currently on their front page? Is this what a reasonable outside observer would say? I'm willing to take a $1000 bet that more than 5 out of 10 neutral reasonable outside observers would evaluate my article as higher quality. Are you up for that bet? If not, please withdraw your claims. Thanks!
-2 · Lumifer · 8y
I am not terribly interested in distinguishing the shades of brown or the nuances of aroma. To answer your question: yes, I do believe you wrote a typical Lifehack article of the typical degree of shittiness. In fact, I think you mentioned on LW your struggles in producing something sufficiently shitty for Lifehack to accept and, clearly, you have succeeded in achieving the necessary level. As to the bet, please specify what a "neutral reasonable" observer is and how you define "quality" in this context. Also, do I take it you are offering 1:1 odds? That implies you believe the probability you will lose is just under 50%, y'know...
2 · gjm · 8y
Only if $1000 is an insignificant fraction of Gleb's wealth, or his utility-from-dollars function doesn't show the sort of decreasing marginal returns most people's do.
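To spell out the arithmetic behind both comments (a sketch in generic symbols, not anyone's actual numbers): for a 1:1 bet of stake $s$ on a claim you assign probability $p$, the expected dollar change is

$$\mathbb{E}[\Delta] = p\,s - (1-p)\,s = (2p-1)\,s,$$

which is positive exactly when $p > 1/2$ -- that is the sense in which offering even odds suggests a belief only just above 50%. But with wealth $w$ and a concave utility function $u$ (decreasing marginal returns), the bet is worth taking only if

$$p\,u(w+s) + (1-p)\,u(w-s) \ge u(w),$$

and by Jensen's inequality this requires $p$ strictly above $1/2$, by more the larger $s$ is relative to $w$. So willingness to bet $1000 at even odds is consistent with a win probability well above 50%.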
0 · Gleb_Tsipursky · 8y
Indeed, $1000 is a quite significant portion of my wealth.
0 · Gleb_Tsipursky · 8y
$1000 is not an insignificant portion of my wealth, as gjm notes. I certainly do not want to lose it. We can take 10 LessWrongers who are not friends with you or me, have not participated in this thread, and do not know about this debate as neutral observers. They should be relatively easy to gather through posting in the open thread or elsewhere. We can have gjm or another external observer recruit people, just in case one of us doing it might bias the results. So, going through with it?
0Lumifer8y
Sorry, I don't enjoy gambling. I am still curious about "quality" which you say your article has and the typical Lifehacker swill doesn't. How do you define that "quality"?
0Gleb_Tsipursky8y
As an example, this article, like my others, links to and describes studies, gives advice informed by research, and conveys frames of thinking likely to lead to positive outcomes beyond building willpower, such as self-forgiveness, commitment, goal setting, etc.
-2Gleb_Tsipursky8y
And I imagine that based on your response, you take your words back. Thanks!
0Lumifer8y
I am sorry to disappoint you. I do not.
-2Gleb_Tsipursky8y
Well, what kind of odds would you give me to take the bet?
0Lumifer8y
As I said, I'm not interested in gambling. Your bet, from my point of view, is on whether a random selection of people will find one piece of shit to be slightly better or slightly worse than another piece of shit. I am not particularly interested in shades of brown, this establishes no objective facts, and will not change my position. So why bother? Four out of five dentists recommend... X-)
-2Gleb_Tsipursky8y
Ah, alright, thanks for clarifying. So it sounds like you acknowledge that there are different shades. Now, how do you bring people who like the darkest shade across the inference gap into lighter shades? That's the project of raising the sanity waterline.
0Lumifer8y
I am not interested in crossing the inference gap to people who like the darkest shade. They can have it. I don't think that raising the sanity waterline involves producing shit, even of particular colours.
-1Gleb_Tsipursky8y
You seem to have made two contradictory statements, or maybe we're miscommunicating. 1) Do you believe that raising the sanity waterline of those in the murk - those who like the dark shade because of their current circumstances and knowledge, but are capable of learning and improving - is still raising the sanity waterline? 2) If it is, how do you raise their sanity waterline if you do not intentionally produce slightly less shitty content in order to cross the inference gap?
0Lumifer8y
I don't think you can raise their sanity waterline by writing slightly lighter-shade articles on Lifehacker and such. I think you're deluding yourself.
0Gleb_Tsipursky8y
Ok, I will agree to disagree on this one.
-2OrphanWilde8y
Is it worth introducing one reader by poisoning nine, however? First impressions do matter, and if the first impression rationalism gives people is that of a cult making pseudoscientific pop-self-help-ish promises about improving their lives, you're trading short-term gains for long-term difficulties overcoming that reputation (which, I'll note, the rationalist community already struggles with).
6Gleb_Tsipursky8y
Please avoid using terms like "poisoning" and other vague claims. That's argument-style Dark Arts -- which you previously acknowledged is your skill set -- used to attack Intentional Insights through pattern-matching and vague claims. Instead, please consider using rational communication. For example, be specific and concrete about how our articles, for example this one, poison nine readers out of ten while introducing one reader to rationality. Thanks!
2Gleb_Tsipursky8y
Please avoid abusive/trollish claims, as you have previously explicitly acknowledged trolling to be your intention. Don't use argument-style Dark Arts -- which you previously acknowledged is your skill set -- to attack Intentional Insights through pattern-matching and making vague claims. Instead, please consider using rational communication. For example, be specific and concrete about how our articles, for example this one, are problematic. Thanks!
-10OrphanWilde8y
1bogus8y
That's a good strategy when you have GEICO's name recognition. If you don't, maybe getting noticed isn't such a bad thing. And maybe "One Weird Trick" is a gimmick, but then so is GEICO's caveman series - which is also associated with a stereotype of someone being stupid. Does the gimmick really matter once folks have clicked on your stuff and want to see what it's about? That's your chance to build some positive name recognition.
8Gleb_Tsipursky8y
Just wanted to clarify that people who are reading Lifehack are very much used to the kind of material there - it's cognitively easy for them and they don't perceive it as a gimmick. So their first impression of rationality is not of a gimmick but of something they might be interested in. After that, they don't go to the Less Wrong website, but to the Intentional Insights website. There, they get more high-level material that slowly takes them up the ladder of complexity. Only some choose to go up this ladder, and most do not. Then, after they are sufficiently advanced, we introduce them to more complex content on ClearerThinking, CFAR, and LW itself. This is to avoid the problem of Endless September and other challenges. More about our strategy is in my comment.
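(The ladder Gleb describes is a standard conversion funnel, and its arithmetic is easy to sketch. A minimal sketch; every conversion rate below is a made-up placeholder, not an Intentional Insights figure:)

```python
# Hypothetical outreach funnel; all rates are illustrative guesses.
FUNNEL = [
    ("read a Lifehack article", 10_000),           # assumed reach
    ("visit the Intentional Insights site", 0.05),
    ("engage with the higher-level material", 0.20),
    ("move on to ClearerThinking / CFAR / LW", 0.10),
]

def run_funnel(funnel):
    label, count = funnel[0]
    print(f"{count:>8.0f}  {label}")
    for label, rate in funnel[1:]:
        count *= rate               # apply this stage's conversion rate
        print(f"{count:>8.0f}  {label}")

run_funnel(FUNNEL)  # 10000 -> 500 -> 100 -> 10
```

(With placeholder rates like these, 10,000 readers yield on the order of 10 people reaching LW-level material; the thread's dispute is essentially whether those few are worth whatever impression is left on the other ~9,990.)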
3Lumifer8y
The useful words are "first impression" and "anchoring".
2Gleb_Tsipursky8y
I answered this point below, so I don't want to retype my comment, but just FYI.
1OrphanWilde8y
How many people had heard of the Government Employees Insurance Company prior to that advertising campaign? The important part of "GEICO can save you 15% or more on car insurance" is repeating the name. They started with a Gecko so they could repeat their name at you, over and over, in a way that wasn't tiring. It was, bluntly, a genius advertising campaign. Your goal isn't to get noticed; your goal is to become familiar. You don't notice any other elements to the caveman series? You don't notice the fact that the caveman isn't stupid? That the commercials are a mockery of their own insensitivity? That the series about a picked-upon identity suffering from a stereotype was so insanely popular that a commercial nearly spawned its own TV show? Yes, the gimmick matters. The gimmick determines people's attitude coming in. Are they coming to laugh and mock you, or to see what you have to say? And if you don't have the social competency to develop their as-yet-unformed attitude coming in, you sure as hell don't have the social competency to take control of it once they've already committed to how they see you. Which is to say: Yes. First impressions matter.
2Gleb_Tsipursky8y
I answered this point earlier in this thread, so I don't want to retype my comment, but just FYI.

/writes post

/reads link to The Virtue of Silence

/deletes post

My overall updating from this thread has been:

Learning a lot more about the diversity of opinions and concerns among Less Wrongers.

  • 1) Learned that there are a lot more risk-averse people on LW than I had previously thought - people opposed to experimenting with new things, learning from experience, improving going forward, and optimizing the world.

  • 2) Learned a lot about Less Wrongers' "ew" reactions to and flinching away from [modern marketing], despite some getting it.

  • 3) Learned that many Less Wrongers are strongly oriented toward perfectionism and bulletproof arguments at the expense of clarity and bridging inference gaps.
  • 4) Surprised to see positive updates on my character (1, 2) as a result of this discussion; I will pay more attention to issues of character in the future - I think I previously paid too much attention to content and insufficient attention to character.

Updated toward some different strategies with Intentional Insights:

  • 1) Orienting Intentional Insights content more toward providing breadcrumbs of links to higher-quality materials than the people on Lifehack and The Huffington Post are currently reading.
  • 2) Teaching our audience abou...

Okay, well, it seems like I'm a bit late to the discussion party. Hopefully my opinion is worth something. Heads up: I live in Columbus, Ohio, and am one of the organizers of the local LW meetup. I've been friends with Gleb since before he started InIn. I volunteer with Intentional Insights in a bunch of different ways and used to be on the board of directors. I am very likely biased, and while I'm trying to be as fair as possible here, you may want to adjust my opinion in light of the obvious factors.

So yeah. This has been the big question about Intentional Insights for its entire existence. In my head I call it "the purity argument". Should "rationality" try to stay pure by avoiding things like listicles or the phrase "science shows"? Or is it better to create a bridge of content that will move people along the path stochastically even if the content that's nearest them is only marginally better than swill? (<-- That's me trying not to be biased. I don't like everything we've made, but when I'm not trying to counteract my likely biases I do think a lot of it is pretty good.)

Here's my take on it: I don't know. Like query, I don't pretend to be confident...

3Vaniver8y
This strikes me as a weird statement, because 7 Habits is wildly successful and seems very solid. What about it bothers you? (My impression is that "a word to the wise is sufficient," and so most clever people find it aggravating when someone expounds on simple principles for hundreds of pages, because of the implication that they didn't get it the first time around. Or they assume it's less principled than it is.)
3Raelifin8y
I picked 7 Habits because it's pretty clearly rationality in my eyes, but is distinctly not LW style Rationality. Perhaps I should have picked something worse to make my point more clear.
4Vaniver8y
I suspect the point will be clearer if stated without examples? I think you're pointing towards something like "most self-help does not materially improve the lives of most self-help readers," which seems fairly ambiguous to me. Most self-help, if measured by titles, is probably terrible simply by Sturgeon's Law. But is most self-help, as measured by sales? I haven't looked at sales figures, but I imagine it's not that unlikely that half of all self-help books actually consumed are the ones that are genuinely helpful.

It also seems to me that the information content of useful self-help is about pointing to places where applying effort will improve outcomes. (Every one of the 7 Habits is effortful!) Part of scientific self-help is getting an accurate handle on how much improvement in outcomes comes from expenditure of effort for various techniques / determining narrowly specialized versions. But if someone doesn't actually expend the effort, the knowledge of how they could have doesn't lead to any improvements in outcomes. Which is why the other arm of self-help is all about motivation / the emotional content.

It's not clear to me that LW-style rationality improves on the informational or emotional content of self-help for most of the populace. (I think it's better at the emotional content mostly for people in the LW-sphere.) Most of the content of LW-style rationality is philosophical, which is very indirectly related to self-help.
8Richard_Kennaway8y
Another complication is that Sturgeon's Law applies as much to the readers. The dropout rate on free MOOCs is astronomical. (Gated link, may not be accessible to all.) "When the first MOOC came out, 100,000 people signed up, but not even half went to the first lecture, let alone completed all the lectures." "Only 4-5 per cent of the people who sign up for a course at Coursera ... get to the end." Picking up a self-help book is as easy as signing up for a MOOC. How many buyers read even the first chapter, let alone get to the end, and do all the work on the way?
2Vaniver8y
Agreed; that's where I was going with my paragraph 3 but decided to emphasize it less.
2ChristianKl8y
"genuinely helpful" is a complicated term. A lot of books bring people to shift their attention to different priorities and get better at one thing while sacrificing other things. New Agey literature about being in the moment has advantages but it can also hold people back from more long-term thinking.
0Lumifer8y
That does not follow at all. The road to hell is in excellent condition and has no need of maintenance. Having a good goal in no way guarantees that what you do has net benefit and should be supported.
2Raelifin8y
I agree! Having good intentions does not imply the action has net benefit. I tried to communicate in my post that I see this as a situation where failure isn't likely to cause harm. Given that it isn't likely to hurt, and it might help, I think it makes sense to support in general. (To be clear: Just because something is a net positive (in expectation) clearly doesn't imply one ought to invest resources in supporting it. Marginal utility is a thing, and I personally think there are other projects which have higher total expected-utility.)
0Lumifer8y
A failure isn't likely to cause major harm, but by similar reasoning success is not likely to lead to major benefits either. In simpler terms, InIn isn't likely to have a large impact of any kind. Given this, I still see no reason why minor benefits are more likely than minor harm.

The short version of my reaction is that when it comes to PR, it's better to be right than to be quick.

I expect II's effect is small, but seems more likely to be negative than positive.

3MrMind8y
I don't understand how this could possibly be true, for any common notion of "better". Imagine that British Petroleum responded only now to the leak disaster with a long, detailed technical report about how it was not their fault. Public opinion would already have set catastrophically against them.
5Vaniver8y
Movement-building is categorically different from incident response. I agree that transparency and speed are the important features when it comes to incidents. But when it comes to movement building, a bad first impression is difficult to remove, and the question of where to draw new adherents from has a significant impact on community quality. One also has to deal with the fact that many people's identities are defined at least in part negatively -- if something is a thing that those people like, then they'll dislike it because of the anticorrelation of their preferences with their enemies'.
1Elo8y
I think you might be treating the given premise uncharitably. If, however, you are suggesting the opposite -- that, more often than not, "it's better to be quick than to be right" -- then carry on. Which is it?
2Gleb_Tsipursky8y
I'm curious why you think the effect will be small and negative. Here is the broad strategy that we have. I'd like your thoughts on it and how to optimize it - always looking for better ways to do things.

Do you believe that the "one weird trick to effortlessly lose fat" articles promote healthy eating and are likely to lead people to approach nutrition scientifically?

9MrMind8y
Beware of other-modeling! The average Lumifer is most definitely not a good model of the average person. Does "one weird trick" promote improvement? I don't know, but I do know that your gut reaction is not a good model for the answer.
3Lumifer8y
Oh, boy, am I not :-D I do know some "more average" people, though, and they don't seem to be that easily taken in by cheap tricks, at least after the first dozen times :-/ And as OrphanWilde pointed out, the aim of clickbait is not to convince you of anything; it is solely to generate ad impressions. I would be surprised if "one weird trick" diets promoted any improvement, in part because almost any diet requires some willpower and the willingness to stick with it for a while -- and the weird tricks are firmly aimed at people who have, on a good day, the attention span of a goldfish...
3Gleb_Tsipursky8y
Yes, if the "one weird trick" is a science-based approach, such as "be intentional about your diet and follow scientific guidelines," and leads people to other science-based strategies. Here's how I did it in this article. Do you think the first "weird trick" will not result in people having greater mental strength?
-1bogus8y
If you think that the Shangri-La diet "promotes healthy eating" and is scientifically-based, what's wrong with promoting it as 'one weird trick to effortlessly lose fat'? It has the latter as an express goal, and is certainly, erm, weird enough.
-3Lumifer8y
What's wrong is that you are reinforcing the "grab the shiniest thing which promises you the most" mentality and as soon as the Stuff-Your-Face-With-Cookies diet promises you losing fat TWICE AS FAST!!eleven! the Shangri-La diet will get defenestrated as not good enough.
-1bogus8y
See, the difference is that the Shangri-La diet has some scientific backing, which the Stuff-Your-Face-With-Cookies diet conspicuously lacks. So, the former will win in any real contest, at least among people who are sufficiently rationally-minded[1]. Except that it won't, if you can't promote your message effectively. This is where your initial pitch matters. [1] (People who aren't rationally-minded won't care about 'rationality', of course, so there's little hope for them anyway.)
0ChristianKl8y
I do believe that it works, but "scientific backing"? Did I miss some new study on the Shangri-La diet, or what are you talking about?
1Vaniver8y
People often use "scientific backing" to mean "this extrapolates reasonably from evidence" rather than "this has been tested directly."
3ChristianKl8y
If you use the word "scientific" that way, I think you lose quite a valuable word. I consider NLP to be extrapolated from evidence. I have even seen it tested directly a variety of times. At the same time, I don't consider it to be scientific in the popular usage of "scientific". For discussion on LW, I think Keith Stanovich's criteria for science are good:
4Gleb_Tsipursky8y
Agreed, good definition of science-backed.

On the other hand, imagine that you have a magical button, and if you press it, all not-sufficiently-correct-by-LW-standards mentions of rationality (or logic, or science) would disappear from the world.

To me it seems like you conflate the brand of rationality and a body of ideas with rationality as defined in our wiki: "Rationality is the characteristic of thinking and acting optimally. An agent is rational if it wields its intelligence in such a way as to maximize the convergence between its beliefs and reality".

To me it seems that Gleb is

...
6Gleb_Tsipursky8y
As OrphanWilde correctly pointed out in his post, if something feels cognitively easy to you and does not make you cringe at how low-level it is, then you are not the target audience. Similarly, you are not the target audience if something is overwhelming for you to read. You are the target audience for the Fact Checking article. It's written for your level, and that of other rationalists.

That leads me to a broader point. ClearerThinking is a great site! In fact, I just had a great conversation with Spencer Greenberg about collaborating. As he told me, ClearerThinking targets people who are already pretty interested in improving their decision-making and want to take the time to do quizzes and online courses. Intentional Insights hits a couple of levels below that. It goes for people who are not aware that the human mind is suboptimal in its decision-making, and helps make them aware of it. Then it gives them easy tools and resources to improve their thinking. After sufficient improvement, we aim to provide them with tools from ClearerThinking. Spencer and I specifically talked about ways we could collaborate to set up a good channel for sending people on to ClearerThinking in an organized and cohesive manner, and we'll be working on setting that up. For more on our strategy, see my comment below.

Assuming that the articles are not merely ignored (where "ignoring" includes "thousands of people with microscopic attention spans read them and then forget them immediately), the obvious failure mode is people getting wrong ideas, or adopting "rationality" as an attire.

I don't think that a few articles like those will make someone pick up rationality as attire who wasn't already in that area beforehand.

Yes, this whole idea of marketing rationality feels wrong. Marketing is like almost the very opposite of epistemic rationali

...
3Gleb_Tsipursky8y
But how will people find out your house is in order unless you reach out to them and tell them? That's the whole point of the Intentional Insights endeavor - to show people how they can have a better life and have their house be more in order through engaging with science-backed rational thinking strategies. In the language of houses, it's the Gryffindor arm of Hufflepuff, reaching out to others and welcoming them into rational thinking.
3ChristianKl8y
Because happy people talk to their friends about their experiences. Personal recommendations carry a lot more weight than popular mainstream articles. There are people who believe in scientism and will adopt a thinking strategy because someone says it's science-based. That's not what rationality is about. I think part of what this community is about is not simply deferring to authority but wanting to hear the chain of reasoning for why a certain strategy is science-based.
2Gleb_Tsipursky8y
Yes, personal recommendations carry more weight. But mainstream articles have a lot more reach. As I described here, the Lifehack article was viewed by many thousands of people. This is the point of writing for a broad audience. Moreover, as you can see from the discussion you and I had about a previous article, the articles are based on research.
1ChristianKl8y
My comments are based on my experience with doing media interviews that have 2 orders of magnitude more reach.
1Gleb_Tsipursky8y
Have you done them on a consistent basis, as I am able to do Lifehack articles every couple of weeks? I have also just published an article in the Sunday edition of a newspaper, described here, with a paper edition reaching 420K readers and monthly visits of 5 million.
2ChristianKl8y
In 2012 I talked to roughly one journalist per month. Okay, that's more than thousands. As for that article: there's no deep analysis of what ISIS wants, but that's okay for a mainstream publication, and recruiting is a factor. In case you write another article about ISIS, I would recommend as background reading: http://www.theglobeandmail.com/globe-debate/the-strategic-value-of-compassion-welcoming-refugees-is-devastating-to-is/article27373931/ http://www.theatlantic.com/magazine/archive/2015/03/what-isis-really-wants/384980/
1Gleb_Tsipursky8y
Cool, thanks for the links, much appreciated! Separately, I'd be curious about your experience talking to journalists about rationality. Think you can do a discussion post about that?
0ChristianKl8y
My topic was Quantified Self -- adjacent, but not directly about rationality.

Somehow in this context the notion of "picking the low-hanging fruit" keeps coming up. This phrasing prejudges the issue -- one would have a hard time disagreeing with such an action. Intentional Insights marketing is also discussed on Facebook. I definitely second the opinion stated there that the suggested T-shirts and rings are counterproductive and, honestly, ridiculous. Judging the articles seems more difficult. If the monthly newsletter generates significant readership, it might be useful in the future. However, LW and rationality FB groups already have their fair share of borderline self-help questions. I would not choose to push further in this direction.

3Viliam8y
I was also unimpressed by the T-shirts. It's just... I think it's easier to move from "bad shirts" to "good shirts" than from "no shirts" to "good shirts". It's just a different bitmap to print. (My personal preference about shirts is "less is better". I would like a T-shirt that only says "LessWrong.com", and even that in smaller letters, not across the whole body. And preferably not a cheap-looking shirt; not being white would probably be a good start.) Generally, what I would really like is something between the Intentional Insights approach and what we are doing now. Something between "hey, I'm selling something! look here! look here! gimme your money and I will teach you the secret!" and "uhm, I'm sitting here in the corner, mumbling something quietly, please continue to ignore me, we are just a small group of nerds". And no, the difference is not between "taking money" and "not taking money"; CFAR lessons aren't free either. Seems to me that nerds have the well-known bias of "too much talking, no action". That's not a reason to go exactly the opposite way. It's just... admirable what a single dedicated person can do.
3Gleb_Tsipursky8y
Thanks for the positive sentiment about the single dedicated person! Just FYI, there's much more that Intentional Insights does than the clickbait stuff on Lifehack. We try to cover the whole range between CFAR's targeting of the top 5% and ClearerThinking's targeting of techy young people in coastal cities who are already interested in decision-making (the latter is from my conversations with Spencer Greenberg). We've been heavily orienting toward the skeptic/secular market as a start, and right now are going into the self-improvement sector and also policy/politics commentary. We offer a wide variety of content, much of it higher-level than the self-improvement articles. I talk more about this in my comment about our strategy. To be clear about taking money: Intentional Insights is a 501(c)(3) nonprofit organization, not a for-profit company. The vast majority of our content is free, and we make our way mainly on donations. P.S. Will keep in mind your preferences for a shirt. We currently have one that looks a lot like what you describe, here. Can you let me know how that looks compared to your ideal?
1Viliam8y
Colors: great. (The grey-brown and pink versions are also okay. I guess any version other than white is okay.) Font size: still too large.
2Gleb_Tsipursky8y
How much smaller would be good?
1Viliam8y
Two inches high at most. However, feel free to ignore me; I almost surely won't buy the shirt, and other people may have different preferences. Anyway, this is off-topic, so I won't comment here about the shirts anymore.
1Lumifer8y
I think quite the reverse. Inertia is a thing, and bad shirts become "we already have them". Making some shirts is a low-effort endeavour -- just throw the design at CafePress or Zazzle and you're done.
4Gleb_Tsipursky8y
I prefer the experimental approach: try things, then figure out better ways to do them. This is how the most successful startups work. Besides, we are doing new t-shirts now based on the feedback. Your thoughts on these two options, 1 and 2, would be helpful.
0Lumifer8y
For this you need a way to measure and assess outcomes. What is the metric that you are using to figure out what's "better"?
-1Gleb_Tsipursky8y
Feedback from aspiring rationalists :-)
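(Gleb's "feedback" could be turned into the kind of metric Lumifer is asking for. A minimal sketch, assuming a made-up survey in which each shirt design is shown to a separate group of 60 people who answer "would you wear this?"; all counts are invented for illustration:)

```python
# Hypothetical survey: each design shown to an independent group of 60.
# Compare "would wear it" rates with a two-sided two-proportion z-test.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(yes_a: int, n_a: int, yes_b: int, n_b: int) -> float:
    """Two-sided p-value for H0: both designs have the same approval rate."""
    p_pool = (yes_a + yes_b) / (n_a + n_b)          # pooled approval rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (yes_a / n_a - yes_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Invented counts: 34/60 would wear design 1, 21/60 would wear design 2.
print(round(two_proportion_p_value(34, 60, 21, 60), 3))  # ~0.017
```

(Nothing here measures the sanity waterline, of course; it only makes "which shirt is better received" a testable question rather than an impression.)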
2Gleb_Tsipursky8y
I hear you about the t-shirts and rings, and we are trying to optimize those. Here are two options of t-shirts we think are better: 1 and 2. What do you think?

I find your chutzpah impressive.

6Gleb_Tsipursky8y
Thanks, I try to not be knocked down by negative feedback, and instead welcome bad news as good news and optimize :-)
2signal8y
They are, but I still would not wear them. (And no rings for men unless you are married or have been a champion in basketball or wrestling.) Let's distinguish between two audiences we may want to address: 1) Aspiring rationalists: that's the easy case. Take an awesome shirt, sneak in "LW" or "pi" somewhere, and try to fly below the radar of anybody who would not like it. A Möbius strip might do the same; a drawing of a cat in a box may work but could also be misunderstood. 2) The not-yet-aspiring rationalist: I assume this is the main target group of InIn. I consider this way more difficult, because you have to keep the weirdness points below the gain, and you have to convey interest in a difficult-to-grasp concept on a small area. And nerds are still less "cool" than sex, drugs, and sports. A SpaceX T-shirt may do the job (rockets are cool), but LW concepts? I haven't seen a convincing solution, but will ask around. Until then, the best solution seems to me to dress as your tribe expects and to find other ways of spreading the knowledge.
1Gleb_Tsipursky8y
1) For actual aspiring rationalists, we do want to encourage those who want to promote rationality to be able to do so through shirts they would enjoy. For example, how does this one strike you? 2) For the not-yet-aspiring rationalist, do you think the shirts above, 1 and 2, do the job?

Trying to teach someone to think rationally is a long process -- maybe even impossible for some people.

This is not incompatible with marketing per se - marketing is about advocacy, not teaching. And pretty much all effective advocacy has to be targeted at System 1 - the "heart" or the "gut" - in fairly direct terms.

To me, it seems that CFAR was supposed to be working on this sort of stuff, and they have not accomplished all that much. So I think, in a way, we should be welcoming the fact that Gleb T. and Intentional Insights are now trying to fill this void. Maybe they aren't doing it very well at this time, but that's a separate matter.

7ChristianKl8y
CFAR's mission wasn't marketing but actually teaching people to be more rational, in a way that genuinely helps them. As far as I understand, they are making progress on that goal, and the workshops they have now are better than the ones at the beginning. As rationalists, we have a responsibility to give advice that's actually useful. CFAR's approach of first figuring out what useful advice is, instead of first focusing on marketing, is good.
3Viliam8y
Sounds like a false dilemma. How about splitting CFAR into two groups? The first group would keep inventing better and better advice (more or less what CFAR is doing now). The second group would take the current results of their research and try to deliver them to as many people as possible. The second group would also do the marketing. (Actually, the whole current CFAR could continue as the first group; the only necessary thing would be to cooperate with the second one.) You should multiply the benefit from the advice by the number of people who will receive it. Yeah, it's not really a linear function. Making one person so super rational that they would build a Friendly AI and save the world may be more useful than teaching thousands of people how to organize their study time better. But I still suspect that the CFAR approach is to a large degree influenced by "how we expect people in academia to behave".
7Gleb_Tsipursky8y
I actually spoke to Anna Salamon about this, and she shared that CFAR started by trying a broad outreach approach and found it was not something they could make work. That's when they decided to focus on workshops targeting a select group of social elites who would be able to afford their high-quality, high-priced workshops. And I really appreciate what CFAR is doing - I'm a monthly donor. I think their targeting of founders, hackers, and other techy social elites is great! They can really improve the world through doing so. I also like their summer camps for super-smart kids, and their training for Effective Altruists, too. However, CFAR is not set up to do mass marketing, as you rightly point out. That's part of the reason we set up Intentional Insights in the first place. Anna said she looks forward to learning from what we figure out and to collaborating. I'm also working with ClearerThinking, which I described in my comment here.
3ChristianKl8y
Given the amount of akrasia in this community, I'm not sure we are at a point where we have a good basis for lecturing other people about this. Given the current urge propagation exercise, a lot of people who were taught it in person and who have the CFAR texts can't do it successfully. Iterating on it till it reaches a form that people can take and use would be good.

From my understanding, CFAR doesn't want to convince academia directly and isn't planning on running any trials themselves at the moment that they will publish. I would appreciate it if CFAR would publish their theories publicly in writing sooner, but I hope they will publish in the next year. I don't have access to the CFAR mailing list, and I understand that they do get feedback on their writing via the mailing list at the moment.

CFAR very recently renamed implementation intentions to Trigger Action Plans (TAPs). If we had already widely marketed "implementation intentions" as vocabulary, it would be harder to change the vocabulary.

Landmark reaches quite a lot of people, and most of their core ideas aren't written down in short articles. Scientology would be another organisation that tries to do most idea communication in person. It still reached a lot of people. When doing Quantified Self community building in Germany, the people who came to our meetups mostly didn't come because of mainstream media but through other sources. It got to the point of another person telling me that giving media interviews is just for fun, not community building.
0bogus8y
And this makes a lot of sense, if you assume that advocacy is completely irrelevant to teaching people to be more rational. ('Marketing' being just another word for advocacy.) But what if both are necessary and helpful? Then it makes sense for someone to work on marketing rationality - either CFAR itself, Intentional Insights or someone else entirely.
0ChristianKl8y
CFAR certainly needs to do some form of marketing to get people to come to its workshops. As far as I understand, CFAR succeeds at doing enough marketing to sell its workshops. As we as a community get better at actually having techniques that make people more rational, we can step up marketing.
0bogus8y
Well...cracked/buzzfeed-style articles vs. niche workshops. I wonder which strategy has a broader impact?
1ChristianKl8y
I don't think the cracked/buzzfeed-style articles are going to change much about the day-to-day decision-making of the people who read them. CFAR workshops, on the other hand, do. But that isn't even the whole story. CFAR manages to learn about what works and what doesn't through its approach. It allows them to gather empirical data that will, in the future, also be transmissible through a medium less intensive than a workshop. This rationalist community isn't like New Atheism, where they know the truth and the problem is mainly that outsiders don't know the truth and it has to be brought to them.
2Gleb_Tsipursky8y
Actually, you'd be surprised at what kind of impact can be had through Lifehacker-type articles. Dust specks, if sufficiently numerous, are impactful, after all. And the Lifehack articles reach many thousands. Moreover, it's a question of continuous engagement. Are people engaging with this content more than just passing on to another Lifehack article? We have evidence that they are, as described in my comment here.
2ChristianKl8y
Dust specks don't cost a person time to consume. They also have no opportunity cost. Your articles, on the other hand, might have opportunity costs. Furthermore, it's not clear that the articles have a positive effect. As an aside, the articles do have SEO advantages through their links, which are worth appreciating even if the people who read them aren't affected. I don't see evidence that you succeed in getting people to another site via your articles. Do you have numbers?
1Gleb_Tsipursky8y
I describe the numbers in my comment here about the only website where I have access to the backend. Compare the article I put out on Lifehack to other articles on Lifehack. Do you think my article on Lifehack has better return on investment than a typical Lifehack article?
-1Gleb_Tsipursky8y
My take is that the goal is to give more useful advice than what people are currently getting. For example, giving science-based advice on relationships, as I do in this article, is more useful than the simple experience-based advice that the vast majority of self-improvement articles offer. If it is better, then why not do it? Remember, the primary goal of Intentional Insights is not to market rationality per se, but to raise the sanity waterline as such. Promoting rationality comes only after people's "level of sanity" has been raised sufficiently to engage with things like Less Wrong and CFAR. More in my comment on this topic.
5Gleb_Tsipursky8y
Thanks for the support! We are trying to fill a pretty big void. However, just to clarify, marketing is only one aspect of what we are doing. We have a much broader agenda, which I describe in my comment here. And I'm always looking for ideas on how to do things better!
2Vaniver8y
This is not at all obvious to me. If someone tries something, and botches it, then when someone else goes to try that thing they may hear "wait, didn't that fail the last time around?"
4[anonymous]8y
This seems like a fully general counterargument against trying uncertain things...
1Vaniver8y
Agreed that it's a fully general counterargument. I endorse the underlying point, though, of "evaluate second order effects of success and failure as well as first order effects," and whether or not that point carries the day will depend on the numbers involved.
3Gleb_Tsipursky8y
I'd be curious to hear why you think Intentional Insights is botching it, if you think that is the case - it's not clear from your comment. However, I disagree with the premise that someone botching something means other people won't do it. If that were the case, we would never have had airplanes, for example. If anything, people will be more likely to try it in order to do it better, because they can see it has been attempted before and know the kinds of mistakes that were made.

Ohh, rationalist drama... is that gold I smell?

LW is a fairly mature site, and I'm sure somebody has done this already, in one variation or another - both the marketing and the discussion of said marketing. Can any veteran confirm or deny my speculation?

(I have a longer post saved, but in the middle of it I just thought that I'm re-inventing the wheel.)

5Gleb_Tsipursky8y
I've had conversations about this with many rationalists, including CFAR's people, and there haven't been many such efforts, or much discussion of them. Here's probably the most widely-known example, and Liron is currently on our Advisory Board, though he'll be stepping off soon due to time limitations while still providing informal advice. However, I'd love to learn about stuff I don't know about, so if someone has something to share, please let me know!

Ah. My response to you was in error. You approve.

The issue isn't that he's marketing rationality. The issue is that what he's doing has nothing to do with rationality; it's pseudo-rationalist babble. He markets rationality the way homeopathy markets healthcare. His marketing doesn't offer an easily-corrected, flawed version of rationality; it is a ritual designed to exorcise the demons of bias and human suffering, and he literally promises to help you find a purpose in life through science. Which is to say, what he's doing isn't marketing, it's religion.

2Viliam8y
Some people could say the same thing about CFAR. So let's focus on the specific details of how these two are different.
-6OrphanWilde8y
1Gleb_Tsipursky8y
Please avoid abusive/trollish claims, as you have previously explicitly acknowledged trolling to be your intention. Instead, I would suggest you clarify how a science-based book written by a scholar of meaning and purpose, which I happen to be, and endorsed by numerous prominent people, is not helpful for people finding a personal sense of purpose in life via science-backed methods. Thanks!