As rationalists, we should be able to consistently and accurately make predictions that enable us to act effectively.
As humans, we don't. At least not perfectly.
We need to improve. Many of us have, or at least believe we have. However, the improvement has been ad hoc. PredictionBook is an excellent source of feedback on how well we're doing, but there is more detailed information, not easily available right now, that I think could be incredibly useful. Questions I would like to see answered are:
- What kinds of predictions are we least successful at? (weakest calibration, lowest accuracy; see the sketch after this list)
- What kinds of predictions have the most low-hanging fruit? What's the easiest to improve on right now?
- What kinds of predictions are the most useful to us? (accurately predicting a close friend's behavior > predicting an obscure political decision)
- Where aren't we making quantitative predictions? Where does our behavior involve predictions that are underrepresented on PredictionBook?
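
To make the first question concrete, here's a rough sketch of what I mean by measuring calibration and accuracy per category. This assumes predictions can be exported as (category, stated probability, outcome) records; the category tags and record format here are my own invention for illustration, not PredictionBook's actual export schema.

```python
# Sketch: per-category calibration and accuracy from a list of
# (category, stated probability, outcome) records.
# The data below is made up purely for illustration.
from collections import defaultdict

predictions = [
    # (category, probability assigned to "yes", did it happen?)
    ("friends",  0.90, True),
    ("friends",  0.70, True),
    ("politics", 0.80, False),
    ("politics", 0.60, True),
    ("personal", 0.95, True),
]

by_category = defaultdict(list)
for category, p, outcome in predictions:
    by_category[category].append((p, outcome))

for category, records in by_category.items():
    n = len(records)
    # Accuracy: fraction of predictions on the right side of 50%.
    accuracy = sum((p >= 0.5) == outcome for p, outcome in records) / n
    # Crude calibration gap: mean stated probability vs. observed frequency.
    mean_p = sum(p for p, _ in records) / n
    observed = sum(outcome for _, outcome in records) / n
    print(f"{category}: accuracy={accuracy:.2f}, "
          f"stated={mean_p:.2f}, observed={observed:.2f}, "
          f"gap={abs(mean_p - observed):.2f}")
```

A real analysis would bin predictions by stated probability (or use a Brier score) rather than comparing means, but the point is the grouping step: once predictions carry category tags, the per-category breakdowns these questions ask for fall out almost for free.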
Perhaps a good place to start would be the literature on life satisfaction and happiness. Statistically speaking, which voluntary life changes lead to the greatest increase in life satisfaction at the least cost in effort, money, and trouble?