So after reading SarahC's latest post I noticed that she's gotten a lot out of rationality.
More importantly, she got different things out of it than I have.
Off the top of my head, I've learned...
- that other people see themselves differently than I see them, and should be understood on their own terms (mostly from here)
- that I can pay attention to what I'm doing and try to notice patterns, which makes intervention more effective.
- the whole utilitarian structure of having a goal that you take actions to achieve, coupled with the idea of an optimization process. It was really helpful to me to realize that you can do whatever it takes to achieve something, not just what has been suggested.
- the importance/usefulness of dissolving the question/how words work (especially great when combined with the previous point)
- that an event is evidence for whatever it actually supports, not just for what I think it supports
- to pull people in, not force them. Seriously, that one is ridiculously useful. Thanks, David Gerard.
- that things don't happen unless something makes them happen.
- that other people are smart and cool, and often have good advice
Where she got...
- a habit of learning new skills
- better time-management habits
- an awesome community
- more initiative
- the idea that she can change the world
I've only recently started making a habit of trying new things, and that's been going really well for me. Is there other low-hanging fruit that I'm missing?
What cool/important/useful things has rationality gotten you?
Most of all it just made me sad and depressed. The whole "expected utility" thing is the worst part. If you take it seriously you'll forever procrastinate having fun, because you can always imagine that postponing some terminal goal and instead doing something instrumental will yield even more utility in the future. So if you enjoy mountain climbing you'll postpone it until it is safer, or until after the Singularity when you can have much safer mountain climbing. And then after the Singularity you won't be able to do it because the resources of a galactic civilization are better used to fight hostile aliens and afterwards fix the heat death of the universe. There's always more expected utility in fixing problems; it is always about expected utility, never about gathering or experiencing utility. And if you don't believe in risks from AI, then there is some other existential risk, and if there is no risk, then it is poverty in Obscureistan. And if there is nothing at all, then you should try to update your estimates, because if you're wrong you'll lose more than by trying to figure out if you're wrong. You never hit diminishing returns. And in the end all your complex values are replaced by the tools and heuristics that were originally meant to help you achieve them. It's like becoming one of those people who work all their life to save money for a retirement they only reach when they are old and have lost most of their interests.
That presumes no time discounting.
Time discounting is neither rational nor irrational. It's part of the way one's utility function is defined, and judgements of instrumental rationality can only be made by reference to a utility function. So there's not necessarily any conflict between expected utility maximization and having fun now: indeed, one could even have a utility function that only cared about things...
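To make that concrete, here's a minimal sketch of exponential time discounting; the 5% annual discount rate and the payoff numbers are my own made-up illustration, not anything from the discussion above:

```python
# Illustrative sketch: exponential time discounting with an assumed 5% annual
# discount rate. All numbers are made up for the example.

def discounted_utility(utility: float, delay_years: float, discount_rate: float = 0.05) -> float:
    """Present value of a payoff received `delay_years` from now."""
    return utility * (1 - discount_rate) ** delay_years

# Climb the mountain today vs. wait 30 years for a somewhat safer climb.
fun_now = discounted_utility(10, delay_years=0)
fun_later = discounted_utility(15, delay_years=30)

print(f"now: {fun_now:.1f}, later: {fun_later:.1f}")  # now: 10.0, later: ~3.2
```

With any nonzero discount rate, postponed fun has to be much larger to come out ahead, so the "always postpone" argument only goes through for a utility function that doesn't discount at all.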