Related to: Goals for which Less Wrong does (and doesn’t) help
I've been compiling a list of the top things I've learned from Less Wrong in the past few months. If you're new here, or haven't been here since the beginning of this blog, perhaps my personal experience of reading the backlog of articles known as the Sequences can introduce you to some of the more useful insights you might get from reading and using Less Wrong.
1. Things can be correct - Seriously, I forgot. For the past ten years or so, I politely agreed with the “deeply wise” convention that truth could never really be determined, that it might not really exist, or that if it existed anywhere at all, it was only in the consensus of human opinion. I think I went this route because being sloppy here helped me “fit in” better with society. It’s much easier to be egalitarian and respect everyone when you can always say, “Well, I suppose that might be right -- you never know!”
2. Beliefs are for controlling anticipation (not for being interesting) - In the past, I looked for surprising, interesting things to believe whenever I could get away with the results not mattering too much. Also, in a desire to be exceptional, I naïvely reasoned that believing the same things as other smart people would probably get me the same boring life outcomes that many of them seemed to be getting... so I mostly tried to hold extra random beliefs to give myself a better shot at being the most amazingly successful and awesome person I could be.
3. Most people's beliefs aren’t worth considering - Since I’m no longer interested in collecting interesting “beliefs” to show off how fascinating I am, or in giving myself better odds of outdoing others, it no longer makes sense to be the meme-collecting universal egalitarian I was before. That includes dropping the habit of seriously considering other people's improper beliefs -- the ones that don’t tell me what to anticipate and exist only to sound interesting or smart.
4. Most of science is actually done by induction - Real scientists don’t get their hypotheses by sitting in bathtubs and screaming “Eureka!” To come up with something worth testing, a scientist needs to do lots of sound induction first, or borrow an idea from someone who already has. Induction is the only way to reliably find candidate hypotheses that deserve attention. A bad way to find a hypothesis is to pick something interesting or surprising to believe and then pin all your hopes on it turning out to be true.
5. I have free will - Not only is the free will problem solved, but it turns out it was easy. I have the kind of free will worth caring about, which is actually comforting, since I had been unconsciously avoiding the question out of fear that the evidence was going against what I wanted to believe. Looking back, I think this was quietly depressing me and probably feeding my attitude that having interesting rather than correct beliefs was fine, since it looked like it might not matter what I did or believed anyway. Also, philosophers failing to uniformly mark this as “settled” and move on is not because it's a questionable result... they’re just in a world where most philosophers are still having trouble figuring out whether God exists. It's hard to make progress on anything when there is more noise than signal in the “philosophical community.” Come to think of it, the AI community and most other scientific communities have the same problem... which is why I no longer read breaking science news -- it's almost all noise.
6. Probability / Uncertainty isn’t in objects or events - It’s only in minds. It sounds simple once you understand it, but I feel this one insight lets me sustain much longer trains of thought without going completely wrong.
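To make this concrete, here is a minimal sketch in Python (my own illustration, not from the post): a coin that has already been flipped has one definite outcome, yet two observers with different information justifiably assign different probabilities to the very same flip, and each is calibrated given what they know.

```python
import random

random.seed(0)  # reproducible illustration

def peeker_probability(outcome):
    # An observer who peeked at the coin assigns P(heads) of 1 or 0.
    return 1.0 if outcome == "heads" else 0.0

def ignorant_probability(outcome):
    # An observer who only knows "it's a fair coin" assigns 0.5,
    # no matter how the coin actually landed.
    return 0.5

# The same physical events, already determined:
flips = [random.choice(["heads", "tails"]) for _ in range(100_000)]

for name, prob_fn in [("peeker", peeker_probability),
                      ("ignorant", ignorant_probability)]:
    mean_p = sum(prob_fn(f) for f in flips) / len(flips)
    freq = sum(f == "heads" for f in flips) / len(flips)
    print(f"{name:>8}: mean assigned P(heads) = {mean_p:.3f}, "
          f"actual frequency = {freq:.3f}")
```

The coin's outcome is fixed either way; only the observers' states of knowledge differ, which is the sense in which the probability lives in the map rather than in the territory.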
7. Cryonics is reasonable - After reading and understanding the quantum physics sequence, I contacted Rudi Hoffman for a life insurance quote to fund cryonics. It’s only a few hundred dollars a year for me, well within my budget for caring about myself and others... such as my future selves in forward-branching multiverses.
There are countless other important things I've learned but haven't documented yet. I find it pretty amazing what this site has taught me in only 8 months of sporadic reading. To be fair, though, it didn't happen by accident or by reading the recent comments and promoted posts, but almost exclusively by reading all the core sequences and then participating more after that.
And as a personal aside (possibly some others can relate): I still love-hate Less Wrong and find reading and participating on this blog to be one of the most frustrating and challenging things I do. And many of the people in this community rub me the wrong way. But in the final analysis, the astounding benefits gained make the annoying bits more than worth it.
So if you've been thinking about reading the Sequences but haven't been making the time to do it, I second Anna’s suggestion that you get around to it. And the rationality exercise she linked to was easily the single most effective hour of personal growth I've had this year, so I highly recommend that as well if you're game.
So, what have you learned from Less Wrong? I'm interested in hearing others' experiences too.
My gains from LessWrong have come in several roughly distinct steps, all of which have come as I've been working my way through the Sequences. (Taking notes has really helped me digest and cement the information.)
1) Internalizing that there is a real world out there and, like Louie said, that ideas can be right or wrong. Making beliefs pay rent, referents and references, etc. A perspective that beliefs should accurately reflect the world and be used to achieve things I care about; all else is fluff. That every correct map should agree with every other, so that life no longer seems a disconnected jumble of separate domains. Overall, these insights really helped focus my thoughts and clear out the clutter in my mind.
2) Now that I have a conception of what beliefs should do, LessWrong helps me notice and combat the various biases that interfere with forming accurate beliefs and with taking coherent action based on them. I've made large gains here, though of course I'm not finished.
3) Forming a coherent, productive, happy me. Bootstrapping and snowballing effects: as I learn more, I get better at seeking out good information. On this point, see Anna Salamon's posts, going back to "Humans are not automatically strategic." The book "The Art of Learning" by Josh Waitzkin has been immensely helpful. Learning about Cognitive Behavioral Therapy (this book is good) has also been very helpful for being empirical and rational about the self. I believe this is basically the material of the Luminosity sequence, though I read those posts some time ago and should probably review them.
There's far too much to go into specifically, but the transformation has been huge, and it continues. When conversing with non-rationalists, arguments feel like a match between Bruce Lee and some guy off the street. It's not that I have any more raw intellectual power than I had before, but my set of tools and training has improved tremendously. Unfortunately, a non-rationalist doesn't much realize the extent to which they're outmatched, and indeed there is seldom a point to "beating someone." Instead, you remember that even a poorly-argued position can be correct, look out for points you may have missed, and perhaps try to introduce a few concepts. It feels like I'm working at a level above most people, that the conversation is a different thing to me than to them; it's not as if I can just tell them all this. I discovered LessWrong through an interest in existential risk, and at first it seemed like a boring, not-very-useful, weird academic exercise. I wish I could convey to more people how helpful it's been, and the extent to which I didn't know what I didn't know.
(A note on the community: I think it's great that it's here, and that some really great material has been produced, and continues to be produced, beyond the "core" material by Eliezer. That said, I almost never read comments, and I only read front-page, promoted posts; the return on time for reading anything else doesn't seem great enough right now, compared to my other work and studies. Just to give an idea of how I'm using LessWrong.)