S Tier:
Timeless decision theory for humans.
Understanding Newcomb's problem (http://acritch.com/deserving-trust/) in particular made me see a bunch of ways I had been 2-boxing: I often tried to approximate the best individual input instead of maximizing the expectation of my algorithm, or kept trying to recompute the best option (with the output varying depending on my mood). For example:
1. Juggling lists of rules for socializing better in various contexts; 1-boxing here is just trying to maximize my own fun.
2. Feeling guilty/unhappy about sticking with things because there are or will be other options, and I probably didn't pick the best one: stuff like what math to learn, what self-improvement to do, which girl to date. Now that making a choice...