Error comments on Leaving LessWrong for a more rational life - Less Wrong Discussion
Comments (268)
I understand “politics is the mind-killer” well enough not to treat the LW community as a tribe I have to belong to, and I could easily turn away from LW and say “the Sequences and FAI are nonsense”, just as I turned away from various gurus and ideologies before. But I disagree with what you're saying: not with your criticism of the Sequences or MIRI, but with your evaluation of the LW community and your unwillingness to engage anymore. Honestly, I'm upset that you suddenly stopped the reading group.
Despite Yudkowsky's obvious leanings, the Sequences are not about FAI, nor are they about Many-Worlds, the Tegmark Mathematical Universe, Roko's Basilisk, or whatever. They are first and foremost about how not to end up an idiot. They are about how not to become immune to criticism; they are about the Human's Guide to Words; they are about System 1 and System 2.
I don't care about Many-Worlds, FAI, Fun Theory, or the Jeffreyssai stuff, but LW was the thing that stopped me from being a complete and utter idiot. Now I see that people I care about, because they never internalized LW's simple truths, are being complete and utter idiots, with their death spirals, and tribal affiliations, and meaningless use of words, and theories that don't predict shit, and it breaks my heart.
If you want to criticize LW for its lack of actual instrumental rationality, you're not the first: Yvain did that in 2009, and he understood the problem correctly, though he couldn't provide a solution either. I personally believe that combating akrasia, not FAI, is the most important task in the world, because if a cure for akrasia could be found, we could train armies of superhuman scientists, who would then solve cancer, nanotechnology, and AI risk. That's why reading modern cognitive science, CBT, and neuroscience is probably more important than anything else, or at least that's what I think.
And here I am, somebody who wishes to be part of the LW community while disagreeing, either conceptually or politically, with many of the LW memes. Yet you don't want to engage with me anymore. LW is not a monolith where everybody follows Yudkowsky; it's the most contrarian (and thus mentally healthy) place I've ever seen on the Web.
LW is not the be-all and end-all, but the Sequences are the bare minimum that people need to be sane. Sure, some people can develop a correct epistemology through sheer study of maths and physics, so they don't need the Sequences, but I couldn't, and many people can't.
It's not about tribal things. If you had your own forum with lots of people who share similar criticisms of LW, I'd go there and leave LW. But you don't have such a forum, so by leaving LW you just leave people like me alone. What's the point of that? Do you really believe that leaving LW like this has more utility than trying to create an island within it?
Honestly, I even started thinking that the only reason you wrote this post is that you realized you're too lazy to continue the reading group, so you needed a good excuse. But that's ridiculous, and I assign very low probability to it.
The sole point of my comment is this. I'm not upset because of your fundamental disagreement with Yudkowsky and LW's ideology and memes. I'm upset because you stopped the reading group, which matters because, as I said, the Sequences are about basic rational thinking, not the deep philosophy on which Yudkowsky might indeed be completely wrong. I'm upset because your departure would mean you think LW is completely lost, and that there isn't even a sizable minority who'd say “you know what, you're right, let's do something about it”. That's sad.
(I'll update this post with more thoughts)
I've always had the impression that Eliezer intended them to lead a person from zero to FAI. So I'm not sure you're correct here.
...but that being said, the big Less Wrong takeaways for me were all from Politics is the Mind-Killer and the Human's Guide to Words -- in that those are the ones that have actually changed my behavior and thought processes in everyday life. They've changed the way I think to such an extent that I actually find it difficult to have substantive discussions with people who don't (for example) distinguish between truth and tribal identifiers, distinguish between politics and policy, avoid arguments over definitions, and invoke ADBOC when necessary. Being able to have discussions without running over such roadblocks is a large part of why I'm still here, even though my favorite posters all seem to have moved on. Threads like this one basically don't happen anywhere else that I'm aware of.
Someone recently had a blog post summarizing the most useful bits of LW's lore, but I can't for the life of me find the link right now.
I'm not sure if this is what you were thinking of (seeing as how it's about a year old now), but "blog post summarizing the most useful bits of LW's lore" makes me think of Yvain's Five Years and One Week of Less Wrong.
Eliezer states explicitly on numerous occasions that his reason for writing the blog posts was to motivate people to work with him on FAI. I'm having trouble coming up with exact citations, however, since it's not very googleable.
My prior perception of the Sequences was that EY started from a firm base of generally good advice about thinking. Sequences like the Human's Guide to Words and How to Actually Change Your Mind stand on their own. He then, however, went off the deep end trying to extend and apply these concepts to questions in the philosophy of mind, ethics, and decision theory in order to motivate an interest in friendly AI theory.
I thought that perhaps the mistakes made in those sequences were correctable one-off errors. Now I am of the opinion that the way that philosophical inquiry was carried out doomed the project to failure from the start, even if the details of the failure are shaped by Yudkowsky's own biases. Reasoning by thought experiment alone, over questions that are not subject to experimental validation, basically does nothing more than expose one's priors. And either you agree with the priors or you don't. For example, does quantum physics support the assertion that identity is the instance of computation, or the information being computed? Neither. But you could construct a thought experiment that validates either view based on the priors you bring to the discussion, and I wasted much time countering his thought experiments with ones of my own creation before I understood the Sisyphean task I was undertaking :\
As another person who thinks that the Sequences and FAI are nonsense (more accurately, the novel elements in the Sequences are nonsense; most of them are not novel), I have my own theory: LW works by accidentally being counterproductive. You have people with questionable beliefs who think that any rational person would just have to believe them. So they try to get everyone to become rational, thinking it would increase belief in those things. Unfortunately for them, when they try this, they succeed too well: people listen to them and actually become more rational, and actually becoming rational doesn't lead to belief in those things at all. Sometimes it even provides more reasons to oppose those things. I hadn't heard of Pascal's Mugging before I came here, and it certainly wasn't intended to be used as an argument against cryonics or AI risk, but it's pretty useful for that purpose anyway.
How is Pascal's Mugging an argument against cryonics?
It's an argument against "even if you think the chance of cryonics working is low, you should do it because if it works, it's a very big benefit".
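To make the shape of that argument concrete, here is a minimal expected-value sketch (the numbers are purely illustrative, not anyone's actual estimates):

\[
\mathbb{E}[\text{sign up}] \approx p \cdot B - C,
\qquad \text{e.g. } p = 10^{-3},\ B = 10^{6},\ C = 10^{2}
\ \Rightarrow\ \mathbb{E} \approx 900 > 0.
\]

The pitch works because a sufficiently huge posited benefit B swamps any low probability p and any cost C. Pascal's Mugging is the observation that this style of reasoning lets an arbitrarily implausible claim dominate your decision so long as its promised payoff is made large enough, which is why it cuts against the low-chance, huge-payoff case for cryonics.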
OK, it's an argument against a specific argument for cryonics. I'm fine with that (it was a bad argument for cryonics to start with). Cryonics does have a lot of problems, not least of which is cost. The money spent annually on life-insurance premiums for cryopreservation of a ridiculously tiny segment of the population is comparable to the research budget of SENS, which would benefit everybody. What is up with that?
That said, I'm still signing up for Alcor. But I'm aware of the issues :\
Clarification: I don't think they're nonsense, even though I don't agree with all of them. Most of them just haven't had the impact of PMK and HGW.