Related to: Goals for which Less Wrong does (and doesn’t) help
I've been compiling a list of the top things I’ve learned from Less Wrong in the past few months. If you’re new here or haven’t been here since the beginning of this blog, perhaps my personal experience from reading the backlog of articles known as the sequences can introduce you to some of the more useful insights you might get from reading and using Less Wrong.
1. Things can be correct - Seriously, I forgot. For the past ten years or so, I politely agreed with the “deeply wise” convention that truth could never really be determined, that it might not really exist, or that if it existed anywhere at all, it was only in the consensus of human opinion. I think I went this route because being sloppy here helped me “fit in” better with society. It’s much easier to be egalitarian and respect everyone when you can always say “Well, I suppose that might be right -- you never know!”
2. Beliefs are for controlling anticipation (Not for being interesting) - I think in the past, I looked to believe surprising, interesting things whenever I could get away with the results not mattering too much. Also, in a desire to be exceptional, I naïvely reasoned that believing the same things as other smart people would probably get me the same boring life outcomes that many of them seemed to be getting... so I mostly tried to hold extra random beliefs in order to give myself a better shot at being the most amazingly successful and awesome person I could be.
3. Most people’s beliefs aren’t worth considering - Since I’m no longer interested in collecting interesting “beliefs” to show off how fascinating I am or to give myself better odds of out-doing others, it no longer makes sense to be a meme-collecting, universal egalitarian the same way I was before. This includes dropping the habit of seriously considering other people’s improper beliefs that don’t tell me what to anticipate and are only there to sound interesting or smart.
4. Most of science is actually done by induction - Real scientists don’t get their hypotheses by sitting in bathtubs and screaming “Eureka!” To come up with something worth testing, a scientist needs to do lots of sound induction first or borrow an idea from someone who already used induction. This is because induction is the only way to reliably find candidate hypotheses that deserve attention. A bad way to find hypotheses, by contrast, is to pick something interesting or surprising to believe and then pin all your hopes on it turning out to be true.
5. I have free will - Not only is the free will problem solved, but it turns out it was easy. I have the kind of free will worth caring about, and that’s actually comforting, since I had been unconsciously avoiding the question out of fear that the evidence was going against what I wanted to believe. Looking back, I think this was actually kind of depressing me and probably contributing to my attitude that having interesting rather than correct beliefs was fine, since it looked like it might not matter what I did or believed anyway. Also, philosophers failing to uniformly mark this as “settled” and move on is not because it’s a questionable result... they’re just in a world where most philosophers are still having trouble figuring out whether God exists. So it’s not easy to make progress on anything when there is more noise than signal in the “philosophical community”. Come to think of it, the AI community and most other scientific communities have the same problem... which is why I no longer read breaking science news -- it's almost all noise.
6. Probability / Uncertainty isn’t in objects or events - It’s only in minds. Sounds simple after you understand it, but I feel like this one insight often allows me to have longer trains of thought now without going completely wrong.
7. Cryonics is reasonable - Due to reading and understanding the quantum physics sequence, I ended up contacting Rudi Hoffman for a life insurance quote to fund cryonics. It’s only a few hundred dollars a year for me. It’s well within my budget for caring about myself and others... such as my future selves in forward-branching multiverses.
There are countless other important things that I've learned but haven't documented yet. I find it pretty amazing what this site has taught me in only 8 months of sporadic reading. To be fair, though, it didn't happen by accident or by reading the recent comments and promoted posts, but almost exclusively by reading all the core sequences and then participating more after that.
And as a personal aside (possibly some others can relate): I still love-hate Less Wrong and find reading and participating on this blog to be one of the most frustrating and challenging things I do. And many of the people in this community rub me the wrong way. But in the final analysis, the astounding benefits gained make the annoying bits more than worth it.
So if you've been thinking about reading the sequences but haven't been making the time to do it, I second Anna’s suggestion that you get around to that. And the rationality exercise she linked to was easily the single most effective hour of personal growth I had this year, so I highly recommend it as well if you're game.
So, what have you learned from Less Wrong? I'm interested in hearing others' experiences too.
I've been looking around the site for a while, as several people I know go here. What I've learned, unfortunately, is that I'm unlikely to be able to learn from this site unless something changes, which is too bad because I don't think I'm unable to learn in general.
I have no academic background whatsoever, and no expertise in science or philosophy. I am not an intellectual. I am good at noticing jargon, but terrible at picking it up and being able to use and understand it. I have no particular skill in abstract thinking. While tests aren't everything, I score in the range of borderline intellectual functioning on IQ tests and I do so for a reason: I am quite lacking in several standard cognitive abilities.
I also have obvious cognitive strengths, writing among them, but they don't match up with the ones necessary to navigate this site. From my perspective, reading this site is like trying to read a book with several words per sentence chopped out, and the words that remain being used in ways that don't match well with my ability to comprehend.
Normally I would just turn around and walk away. I don't think anyone here has any particular desire to see someone like me shut out. I find it saddening, though, that a site dedicated to helping people think more accurately is mostly dominated by people who already have a good deal of intellectual skill. I would be curious to see how the ideas here could be modified to assist people who are not typical users: people who can't read mountains of text in order to prepare themselves for the conversations that are taking place, and who need things explained in ways that are understandable even if you're average or even a slow learner.
This isn't meant as an attack, just a suggestion for new directions the site could take in order to benefit people who aren't all that intellectual. You don't have to have all the traditional cognitive abilities to appreciate the importance of thinking clearly about reality. I even bet that the techniques would have to be modified for some of us who can't hold complex ideas in our heads. But modifying them would be a good way to show there's more than a single set of cognitive techniques to get to the same goal of understanding the world as accurately as possible.
Please excuse me if you've already had someone suggest this to you (and you probably have), but: have you looked through the sequences? They're the closest thing to a tutorial this site has, and many of them are (a) written in everyday language and (b) pretty darn useful, and not just for sounding informed while participating in discussions on this website. :-)