So after reading SarahC's latest post, I noticed that she's gotten a lot out of rationality.
More importantly, she got different things out of it than I have.
Off the top of my head, I've learned...
- that other people see themselves differently, and should be understood on their terms (mostly from here)
- that I can pay attention to what I'm doing, and try to notice patterns to make intervention more effective.
- the whole utilitarian structure of having a goal that you take actions to achieve, coupled with the idea of an optimization process. It was really helpful to realize that you can do whatever it takes to achieve something, not just what's already been suggested.
- the importance/usefulness of dissolving the question/how words work (especially great when combined with the previous point)
- that an event is evidence for whatever it actually makes more likely, not just for what I think it supports
- to pull people in, don't force them. Seriously, that one is ridiculously useful. Thanks, David Gerard.
- that things don't happen unless something makes them happen.
- that other people are smart and cool, and often have good advice
Where she got...
- a habit of learning new skills
- better time-management habits
- an awesome community
- more initiative
- the idea that she can change the world
I've only recently started making a habit out of trying new things, and that's been going really well for me. Is there other low-hanging fruit that I'm missing?
What cool/important/useful things has rationality gotten you?
Yet you trust your brain enough to turn down claims of massive utility. Given that our brains could not have evolved to yield reliable intuitions about such scenarios, and given that the parts of rationality we do understand well in principle tell us to maximize expected utility, what does it mean not to trust your brain? In all of the scenarios in question that involve massive amounts of utility, your uncertainty is already included and is simply being outweighed. It seems that what you're saying is that you don't trust your higher-order thinking skills and instead trust your gut feelings. You could argue that you're simply risk averse, but that would require you to set some upper bound on bargains with uncertain payoffs. How are you going to define and justify such a limit if you don't trust your brain?
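To make that concrete, here's a toy calculation (a minimal sketch; every number in it is made up purely for illustration) of how a naive expected-utility computation lets an astronomically large payoff swamp an astronomically small credence:

```python
# Toy numbers, invented for illustration only.
p_claim_true = 1e-20     # how much you trust the claim of massive utility
utility_if_true = 1e30   # the payoff on offer if the claim is true
cost_of_acting = 100.0   # utility lost by acting when the claim is false

expected_gain = (p_claim_true * utility_if_true
                 - (1 - p_claim_true) * cost_of_acting)
print(expected_gain)  # ~1e10: the calculation says "act", however little
                      # credence you give the claim
```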
Anyway, I did some quick searches today and found out that the kinds of problems I talked about are nothing new and come up in various places and contexts (a quick numerical sketch of the first follows the list):
The St. Petersburg Paradox
The Infinitarian Challenge to Aggregative Ethics
Omohundro's "Basic AI Drives" and Catastrophic Risks
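To see the first of these concretely: the St. Petersburg game flips a fair coin until it lands heads and pays 2^k if the first heads comes on flip k, which happens with probability 1/2^k. Every term of the expected-value sum therefore contributes exactly 1, so a version capped at n flips has expected payoff n, and the uncapped sum diverges. A minimal sketch (the function name is mine, for illustration):

```python
def truncated_expected_value(max_flips: int) -> float:
    """Expected payoff of the St. Petersburg game cut off after max_flips flips."""
    # The first heads on flip k has probability 0.5**k and pays 2**k,
    # so every term contributes exactly 1 to the sum.
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_flips + 1))

for cap in (10, 100, 1000):
    print(cap, truncated_expected_value(cap))
# Prints roughly 10.0, 100.0, 1000.0: the expected value grows without
# bound as the cap rises, even though almost every single game pays little.
```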
I take risks when I actually have a grasp of what they are. Right now I'm trying to organize a DC meetup group, finish up my robotics team's season, do all of my homework for the next two weeks so that I can go college touring, and combine college visits with LW meetups.
After April, I plan to start capoeira, work on PyMC, actually have DC meetups, work on a scriptable real-time strategy game, start contra dancing a…