I agree. The difficult thing about introducing others to Less Wrong has always been that even if the new person remembers to say "It's my first time, be gentle", Less Wrong still has the girth of a rather large horse. You can't make it smaller without losing much of its necessary function.
Updated link to Piers Steel's meta-analysis on procrastination research (at least I think it's the correct paper): http://studiemetro.au.dk/fileadmin/www.studiemetro.au.dk/Procrastination_2.pdf
I think we're getting some word confusion. Groups that "make a big point of being anti-rational" are against the things that carry the label "rational". However, they do tend to think of their own beliefs as being well thought out (i.e. rational).
"rationality" branding isn't as good for keeping that front and center, especially compared to, say the effective altruism meme
Perhaps a better branding would be "effective decision making", or "effective thought"?
As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.
I think this is the core of what you are disliking. Almost all of my reading on LW is in the Sequences rather than the discussion areas, so I haven't been in a position to notice anyone's arrogance. But I'm a little sadly surprised by your experience, because for me the result of reading the Sequences has been less trust that my own level of sanity is high. I'm significantly less certain of my correctness in any argument.
We know that knowing about biases doesn't remove them, so instead of increasing our estimate of our own rationality, learning about them should correct our estimate downwards. This shouldn't even cost us any pride, since we're also adjusting our estimates of everyone else's sanity down by a similar amount. As a check that we're doing things right, the result should be less time spent arguing and more time spent thinking about how we might be wrong and how to check our answers. Basically, it should remind us to use Type 2 thinking more whenever possible, and to seek effectiveness training for our Type 1 thinking whenever it's available.
This was enjoyable to me because "saving the world", as you put it, is completely unmotivating for me. (Luckily, I have other sources of motivation.) It's interesting to see what drives other people and how the source of their drive changes their trajectory.
I'm definitely curious to see a sequence, or at least a short feature list, describing your model of a government that structurally ratchets toward better rather than worse. That's something that has never been achieved consistently in practice.
I think he means "create a functional human you, while primarily sourcing the matter from your old body". He's commenting that slicing the brain makes this more difficult, but it sounds like the alterations caused by current vitrification techniques make it impossible either way.
The problem here seems to be that the theories don't take everything we value into account, so it's less certain whether their functions actually match our morals. If you calculate utility using only some of your values, you're not going to get the correct result. If you're trying to sum the set {1, 2, 3, 4} but you only use 1, 2 and 4 in the calculation, you're going to get the wrong answer. Outside of special cases like "multiply each item by zero", it doesn't matter whether you add, subtract or divide; the answer will still be wrong. For example, the calculations given for total utilitarianism fail to include any value for continuity of experience.
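To make the arithmetic concrete, here is a minimal toy sketch; the value names and weights are purely hypothetical stand-ins, not taken from any actual utilitarian calculus. Leaving one valued term out of the total gives a wrong answer no matter how the remaining terms are combined:

```python
# Toy illustration: a "total utility" computed over only some of the values
# we actually hold comes out wrong. The names and weights below are made up.
values_we_hold = {"pleasure": 1, "fairness": 2, "continuity_of_experience": 3, "autonomy": 4}

def total_utility(values, omit=()):
    """Sum the value weights, optionally leaving some of them out."""
    return sum(weight for name, weight in values.items() if name not in omit)

full = total_utility(values_we_hold)                                   # 1 + 2 + 3 + 4 = 10
impoverished = total_utility(values_we_hold, omit={"continuity_of_experience"})  # 1 + 2 + 4 = 7

print(full, impoverished)  # 10 7 -- the impoverished total is simply wrong
```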
This isn't to say that ethics are easy, but we're going to have a devil of a time testing them with impoverished input.
If the primary motivation for attending is the emotional reward of meeting others with an interest in rationality and feeling that you've learned how to be more rational, then yes, a Christian brainwashing retreat would make you glad you attended it in the same way, if and only if you are (or became) Christian, since non-Christians likely wouldn't enjoy a Christian brainwashing retreat.
That said, since many of us have little or no data on changes (if any) in the rationality of attendees, attending is the only real way you have of testing whether it helps. Confirmation bias would make a positive result weak evidence, but it would be relatively important given the lack of other evidence. Luckily, even if the retreat doesn't benefit your objective level of rationality, it sounds worthwhile on the undisputed emotional merits.
I think what SilasBarta is trying to ask is: do we have any objective measurements yet from the previous minicamp that add weight to the hypothesis that this camp does in fact improve rationality or life achievement over either the short or long term?
If not, then I'm still curious: are there any plans to study the rationality of attendees and non-attendees to establish such evidence?
Anecdotally: I'm not diabetic, as far as I know, but my mood is highly dependent on how well and how recently I've eaten. I get very irritable and can break down into tears easily if I'm more than four hours overdue for a meal.
I recently watched this Coursera course on learning how to learn, and your post uses different words for some of the same things.
The course described what you call "shower-thoughts" as "diffuse mode" thinking, with an opposite called "focused mode" thinking; the brain can only do one at a time. Focused mode uses ideas that are already clustered together to solve familiar problems, while diffuse mode attempts to find useful connections between unclustered ideas to solve new problems in new ways. I'm not sure whether these are the formal terms from the literature behind the class, but if so it might be worth using them instead of making up our own jargon.
As for the class, it definitely had some material that I still try to keep in mind, but it also had some things that I haven't quite figured out how to incorporate (chunking) or didn't find useful (some of the interviews). There is some overlap with what CFAR seems to be trying to teach. Overall, I'd recommend taking a look if you can spare an hour or so per week for a month.