So after reading SarahC's latest post I noticed that she's gotten a lot out of rationality.
More importantly, she got different things out of it than I have.
Off the top of my head, I've learned...
- that other people see themselves differently, and should be understood on their terms (mostly from here)
- that I can pay attention to what I'm doing, and try to notice patterns to make intervention more effective.
- the whole utilitarian structure of having a goal that you take actions to achieve, coupled with the idea of an optimization process. It was really helpful to me to realize that you can do whatever it takes to achieve something, not just what has been suggested.
- the importance/usefulness of dissolving the question/how words work (especially great when combined with the previous point)
- that an event is evidence for whatever it actually supports, not just for what I think it can support
- to pull people in, not force them. Seriously, that one is ridiculously useful. Thanks David Gerard.
- that things don't happen unless something makes them happen.
- that other people are smart and cool, and often have good advice
Where she got...
- a habit of learning new skills
- better time-management habits
- an awesome community
- more initiative
- the idea that she can change the world
I've only recently started making a habit of trying new things, and that's been going really well for me. Is there other low-hanging fruit that I'm missing?
What cool/important/useful things has rationality gotten you?
I fear that the mugger is often our own imagination. If you calculate the expected utility of various outcomes, you imagine impossible alternative actions. The alternatives are impossible because you already precommitted to choosing the outcome with the largest expected utility. There are three main problems with that:
All this can cause any insignificant inference to exhibit hyperbolic growth in utility.
I don't trust my brain's claims of massive utility enough to let it dominate every second of my life. I don't even know what, this second, would do the most to help achieve a positive singularity.
I'm also pretty sure that my utility function is bounded, or at least hits diminishing returns really fast.
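To make that concrete, here's a toy sketch (all the numbers are made up purely for illustration, not taken from anywhere) of why an unbounded utility function lets an imagined mugger dominate the calculation, and how a bound or fast diminishing returns shuts that down:

```python
# Toy numbers, invented for illustration only.
def expected_utility(probability, utility):
    """Naive expected utility of a single outcome."""
    return probability * utility

# A mundane, well-understood action: likely, modest payoff.
mundane = expected_utility(0.9, 100)        # 90

# The imagined mugger's offer: absurdly unlikely, absurdly large payoff.
# With an unbounded utility function it dominates everything else.
mugger = expected_utility(1e-12, 1e30)      # 1e18

# Bounding utility (or hitting diminishing returns fast) caps the blow-up.
CAP = 1_000.0
bounded_mugger = expected_utility(1e-12, min(1e30, CAP))  # 1e-9

print(mundane, mugger, bounded_mugger)
```

The particular cap doesn't matter; the point is just that once utilities can't grow without limit, a tiny probability can't buy an arbitrarily large expected value.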
I know that thinking my head off about every possible high-utility counterfactual will make me sad, depressed, and indecisive, on top of ruining my ability to make progress towards gaining utility.
So I don't worry about it that much. I try to think about these problems in doses that I can handle, and focus on what I can actually do to help out.