Jiro comments on Leaving LessWrong for a more rational life

Post author: [deleted] 21 May 2015 07:24PM 33 points

Comment author: Error 22 May 2015 02:56:13AM 8 points

"Despite Yudkowsky's obvious leanings, the Sequences are not about FAI, nor [etc]... they are first and foremost about how to not end up an idiot. They are about how to not become immune to criticism, they are about Human's Guide to Words, they are about System 1 and System 2."

I've always had the impression that Eliezer intended them to lead a person from zero to FAI. So I'm not sure you're correct here.

...but that being said, the big Less Wrong takeaways for me were all from Politics is the Mind-Killer and the Human's Guide to Words -- in that those are the ones that have actually changed my behavior and thought processes in everyday life. They've changed the way I think to such an extent that I actually find it difficult to have substantive discussions with people who don't (for example) distinguish between truth and tribal identifiers, distinguish between politics and policy, avoid arguments over definitions, and invoke ADBOC when necessary. Being able to have discussions without running over such roadblocks is a large part of why I'm still here, even though my favorite posters all seem to have moved on. Threads like this one basically don't happen anywhere else that I'm aware of.

Someone recently had a blog post summarizing the most useful bits of LW's lore, but I can't for the life of me find the link right now.

Comment author: Jiro 23 May 2015 02:51:44AM 4 points

As another person who thinks that the Sequences and FAI are nonsense (more accurately, the novel elements in the Sequences are nonsense; most of them are not novel), I have my own theory: LW works by being counterproductive to its founders' own aims. You have people with questionable beliefs who think that any rational person would just have to share them. So they try to get everyone to become rational, expecting that to increase belief in those things. Unfortunately for them, they succeed too well--people listen and actually become more rational, and actually becoming rational doesn't lead to belief in those things at all. Sometimes it even provides more reasons to oppose them: I hadn't heard of Pascal's Mugging before I came here, and it certainly wasn't intended as an argument against cryonics or AI risk, but it's pretty useful for that purpose anyway.

Comment author: [deleted] 23 May 2015 01:56:48PM 2 points

How is Pascal's Mugging an argument against cryonics?

Comment author: Jiro 23 May 2015 02:41:37PM 1 point

It's an argument against "even if you think the chance of cryonics working is low, you should do it because if it works, it's a very big benefit".
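For concreteness, here is a minimal sketch of the expected-value reasoning that argument rests on; the symbols are illustrative and appear in neither comment:

\[
\mathbb{E}[U(\text{sign up})] = p \cdot B - C, \qquad \text{worthwhile whenever } p > \frac{C}{B},
\]

where \(p\) is the probability that cryonics works, \(B\) the benefit of revival, and \(C\) the cost of signing up. If \(B\) is taken to be astronomically large, the threshold \(C/B\) shrinks toward zero and virtually any nonzero \(p\) appears to justify signing up. Pascal's Mugging points out that this schema proves too much: one can always postulate a payoff large enough to swamp any skepticism about \(p\), so without bounded utilities or some way of discounting tiny probabilities, the same reasoning endorses absurd trades.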

Comment author: [deleted] 23 May 2015 02:53:57PM 1 point

Ok, it's an argument against a specific argument for cryonics. I'm fine with that (it was a bad argument for cryonics to start with). Cryonics does have a lot of problems, not least of which is cost: the money spent annually on life insurance premiums for cryopreserving a ridiculously tiny segment of the population is comparable to the research budget of SENS, which would benefit everybody. What is up with that?

That said, I'm still signing up for Alcor. But I'm aware of the issues :\

Comment author: Error 23 May 2015 01:42:57PM 2 points

"As another person who thinks that the Sequences and FAI are nonsense"

Clarification: I don't think they're nonsense, even though I don't agree with all of them. Most of them just haven't had the impact of PMK and HGW.