atucker comments on Q: What has Rationality Done for You? - Less Wrong Discussion
I don't trust my brain's claims of massive utility enough to let it dominate every second of my life. I don't even think I know what, this second, would be doing the most to help achieve a positive singularity.
I'm also pretty sure that my utility function is bounded, or at least hits diminishing returns really fast.
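A quick way to see what a diminishing-returns utility function buys you (a minimal sketch; the choice of log as the utility function is purely illustrative, not anything claimed above):

```python
import math

u = math.log  # a concave (diminishing-returns) utility, chosen only for illustration

# Under log utility, doubling wealth always adds the same log(2) of utility,
# no matter how wealthy you already are, so enormous payoffs stop
# dominating the expected-utility calculation.
gain_small = u(200) - u(100)
gain_large = u(400) - u(200)
print(gain_small, gain_large)  # both equal log(2) ≈ 0.693
```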
I know that thinking my head off about every possible high-utility counterfactual will make me sad, depressed, and indecisive, on top of ruining my ability to make progress towards gaining utility.
So I don't worry about it that much. I try to think about these problems in doses that I can handle, and focus on what I can actually do to help out.
Yet you trust your brain enough to turn down claims of massive utility. Given that our brains could not have evolved to yield reliable intuitions about such scenarios, and given that the parts of rationality we do understand very well in principle tell us to maximize expected utility, what does it mean not to trust your brain? In all of the scenarios in question that involve massive amounts of utility, your uncertainty is already included and outweighed. It seems that what you are saying is that you don't trust your higher-order thinking skills and instead trust your gut feelings? You could argue that you are simply risk averse, but that would require you to set some upper bound on bargains with uncertain payoffs. How are you going to define and justify such a limit if you don't trust your brain?
Anyway, I did some quick searches today and found out that the kinds of problems I talked about are nothing new and are mentioned in various places and contexts:
The St. Petersburg Paradox
The Infinitarian Challenge to Aggregative Ethics
Omohundro's "Basic AI Drives" and Catastrophic Risks
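For concreteness, the divergence at the heart of the St. Petersburg Paradox can be checked numerically. This is a minimal sketch assuming the standard formulation, where a fair coin is flipped until it lands tails and the payoff doubles with every head:

```python
import random

def st_petersburg_payoff(rng):
    """One round: flip a fair coin until tails; the payoff starts at 2
    and doubles with every head, so it is 2**k after k flips."""
    payoff = 2
    while rng.random() < 0.5:
        payoff *= 2
    return payoff

# Expected value: outcome k has probability 1/2**k and payoff 2**k,
# so every term of the sum contributes exactly 1 and the total diverges.
terms = [(0.5 ** k) * (2 ** k) for k in range(1, 11)]
print(terms[:3], sum(terms))  # each term is 1.0; partial sums grow without bound

# A finite sample nevertheless tends to have a modest average,
# which is part of why the "paradox" bites.
rng = random.Random(0)
avg = sum(st_petersburg_payoff(rng) for _ in range(100_000)) / 100_000
print(avg)
```

The infinite expected value says a rational expected-utility maximizer should pay any finite price to play, which clashes with the intuition (and the simulated averages) that the game is worth only a few dollars.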
I take risks when I actually have a grasp of what they are. Right now I'm trying to organize a DC meetup group, finish up my robotics team's season, do all of my homework for the next 2 weeks so that I can go college touring, and combine college visits with LW meetups.
After April, I plan to start capoeira, work on PyMC, actually have DC meetups, work on a scriptable real-time strategy game, start contra dancing again, start writing a sequence based on Heuristics and Biases, improve my dietary and exercise habits, and visit Serbia.
All of these things I have a pretty solid grasp of what they entail, and how they impact the world.
I still want to do high-utility things, but I just choose not to live in constant dread of lost opportunity. My general strategy for acquiring utility is to help other people get more utility too, and to multiply the effects of picking the low-hanging fruit.
The issue with long-shots like this is that I don't know where to look for them. Seriously. And since they're such long-shots, I'm not sure how to go about getting them. I know that trying to do so isn't particularly likely to work.
Sorry, I said that badly. If I knew how to get massive utility, I would try to. It's just that the planning is the hard part. The best that I know to do now (note: I am carving out time to think about this harder in the foreseeable future) is to get money and build communities. And give some of the money to SIAI. But in the meantime, I'm not going to be agonizing over everything I could have possibly done better.