What is your opinion on rationality-promoting articles by Gleb Tsipursky / Intentional Insights? Here is what I think:
Trying to teach someone to think rationally is a long process -- maybe even impossible for some people. It involves explaining the many biases people exhibit naturally, and demonstrating, on a gut level, the futility of "mysterious answers"; meanwhile the student needs the desire to become stronger, the humility to admit "I don't know" together with the courage to give a probabilistic answer anyway, the resistance to the temptation to use the new skills to cleverly shoot themselves in the foot, and a focus on the "nameless virtue" instead of on signalling (even towards fellow rationalists). It is a LW lesson that being a half-rationalist can hurt you, and being a 3/4-rationalist can fuck you up horribly. And online clickbait articles seem like one of the worst choices of medium for teaching rationality. (The only worse choice that comes to my mind would be Twitter.)
On the other hand, imagine that you have a magical button, and if you pressed it, all not-sufficiently-correct-by-LW-standards mentions of rationality (or logic, or science) would disappear from the world -- not replaced by something more lesswrongish, but simply by whatever else usually appears in the given medium. Would pressing that button make the world a saner place? What would have happened if someone had pressed that button a hundred years ago? In other words, I'm trying to avoid the "nirvana fallacy" -- I am not asking whether those articles are the perfect vehicle for x-rationality, but whether they are a net benefit or a net harm. Because if they are a net benefit, then it's better to have them, isn't it?
Assuming that the articles are not merely ignored (where "ignoring" includes "thousands of people with microscopic attention spans read them and then forget them immediately"), the obvious failure mode is people getting wrong ideas, or adopting "rationality" as attire. Is that really so bad? Don't people already have absurdly wrong ideas about rationality? Remember all the "straw Vulcans" produced by the movie industry: Terminator, The Big Bang Theory... Rationality is already associated with being a sociopathic villain, or a pathetic nerd. This is where we are now; and the "rationality" clickbait, however sketchy, cannot make it worse. Actually, it can make a few people interested enough to learn more. At the very least, it can show people that there is more than one possible meaning of the word.
To me it seems that Gleb is picking the low-hanging fruit that most rationalists wouldn't even touch for... let's admit it... status reasons. He talks to the outgroup, using the language of the outgroup. But if we look at the larger picture, that specific outgroup (people who procrastinate by reading clickbaity self-improvement articles) isn't actually that different from us. They may be our nearest neighbors in human intellectual space. So what some of us (including myself) feel here is the uncanny valley: looking at someone so similar to ourselves, and yet so dramatically different in a few small details that matter strongly to us, feels creepy.
Yes, this whole idea of marketing rationality feels wrong. Marketing is almost the very opposite of epistemic rationality ("the bottom line" et cetera). On the other hand, any attempt to bring rationality to the masses will inevitably introduce some distortion, which can hopefully be fixed later, once we have people's attention. So why not accept the imperfection of the world, and just do what we can?
As a sidenote, I don't believe we are at risk of having an "Eternal September" on LessWrong (more than we already have). More people interested in rationality (or "rationality") will also mean more places to debate it; not everyone will come here. People have their own blogs, social network accounts, et cetera. If rationality becomes the cool thing, they will prefer to debate it with their friends.
EDIT: See this comment for Gleb's description of his goals.
Okay, well, it seems like I'm a bit late to the discussion party. Hopefully my opinion is worth something. Heads up: I live in Columbus, Ohio, and am one of the organizers of the local LW meetup. I've been friends with Gleb since before he started InIn. I volunteer with Intentional Insights in a bunch of different ways and used to be on the board of directors. I am very likely biased, and while I'm trying to be as fair as possible here, you may want to adjust my opinion in light of the obvious factors.
So yeah. This has been the big question about Intentional Insights for its entire existence. In my head I call it "the purity argument". Should "rationality" try to stay pure by avoiding things like listicles or the phrase "science shows"? Or is it better to create a bridge of content that will move people along the path stochastically even if the content that's nearest them is only marginally better than swill? (<-- That's me trying not to be biased. I don't like everything we've made, but when I'm not trying to counteract my likely biases I do think a lot of it is pretty good.)
Here's my take on it: I don't know. Like query, I don't pretend to be confident one way or the other. I'm not as scared of a "horrific long-term negative impact", however. Probably the biggest reason is that rationality is already tainted! If we back off from the sacred word, I think we can see that the practice of improving-how-we-think already exists more broadly: in academia, self-help, and religion. LessWrong is but a single school (so to speak) of a practice which is at least as old as philosophy.
Now, I think that LW-style rationality is superior to other attempts at flailing at rationality. I think the epistemology here is cleaner than most academic stuff and is at least as helpful as general self-help (again: probably biased; YMMV). But if the fear is that Intentional Insights is going to spoil the broth, you should be aware that things like https://www.stephencovey.com/7habits/7habits.php already exist. As Gleb has mentioned elsewhere in the thread, InIn doesn't even use the "rationality" label. I'd argue that the worst thing InIn does to pollute the LW meme-pool is that there are links and references to LW (and plenty of other sources, too).
In other words, I think that at worst* InIn is basically just another lame self-help thing that tells people what they want to hear and doesn't actually improve their cognition (a.k.a. the majority of self-help). At best, InIn will out-compete similar things and serve as a funnel which pulls people along the path of rationality, ultimately making the world a nicer, saner place. Most of my work with InIn has been for personal gain; I'm not a strong believer that it will succeed. What I do think, though, is that there's enough space in the world for the attempt, that the goal of raising the sanity waterline is a good one, and that rationalists should support the attempt, even if they aren't confident it will succeed, instead of getting swept up in the typical-mind fallacy and in ingroup/outgroup and purity biases.
* - Okay, it's not the worst-case scenario. The worst-case scenario is that the presence of InIn aggravates the lords of the matrix into torturing infinite copies of all possible minds for eternity outside of time. :P
(EDIT: If you want more evidence that rationality is already a polluted activity, consider the way in which so many people pattern-match LW as a phyg.)
This strikes me as a weird statement, because 7 Habits is wildly successful and seems very solid. What about it bothers you?
(My impression is that "a word to the wise is sufficient," and so most clever people find it...