16 types of useful predictions
How often do you make predictions (either about future events, or about information that you don't yet have)? If you're a regular Less Wrong reader you're probably familiar with the idea that you should make your beliefs pay rent by saying, "Here's what I expect to see if my belief is correct, and here's how confident I am," and that you should then update your beliefs accordingly, depending on how your predictions turn out.

And yet… my impression is that few of us actually make predictions on a regular basis. Certainly, for me, there has always been a gap between how useful I think predictions are, in theory, and how often I make them.

I don't think this is just laziness. I think it's simply not a trivial task to find predictions to make that will help you improve your models of a domain you care about.

At this point I should clarify that there are two main goals predictions can help with:

1. Improved Calibration (e.g., realizing that I'm only correct about Domain X 70% of the time, not 90% of the time as I had mistakenly thought).
2. Improved Accuracy (e.g., going from being correct in Domain X 70% of the time to being correct 90% of the time).

If your goal is just to become better calibrated in general, it doesn't much matter what kinds of predictions you make. So calibration exercises typically grab questions with easily obtainable answers, like "How tall is Mount Everest?" or "Will Don Draper die before the end of Mad Men?" See, for example, the Credence Game, Prediction Book, and this recent post. And calibration training really does work.

But even though making predictions about trivia will improve my general calibration skill, it won't help me improve my models of the world. That is, it won't help me become more accurate, at least not in any domains I care about. If I answer a lot of questions about the heights of mountains, I might become more accurate about that topic, but that's not very helpful to me.

So I think the difficulty in prediction-making…
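To make the calibration/accuracy distinction above concrete, here is a minimal sketch (my own illustration, not something from the post) of how you might score a hypothetical log of predictions, each recorded as a stated confidence plus whether it came true:

```python
from collections import defaultdict

# Hypothetical prediction log: (stated confidence, did it come true?)
predictions = [
    (0.9, True), (0.9, False), (0.9, True),   # things you felt 90% sure about
    (0.7, True), (0.7, True), (0.7, False),   # things you felt 70% sure about
]

# Group outcomes by the confidence you stated at prediction time.
buckets = defaultdict(list)
for confidence, correct in predictions:
    buckets[confidence].append(correct)

for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> actually right {hit_rate:.0%} "
          f"({len(outcomes)} predictions)")
```

In these terms, being well calibrated means the "actually right" rate in each row roughly matches the stated confidence; becoming more accurate means those rates (and the confidences you can honestly claim) climb toward 100% in the domains you care about.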
... By the way, you might've misunderstood the point of the Elon Musk examples. The point wasn't that he's some exemplar of honesty. It was that he was motivated to try to make his companies succeed despite believing that the most likely outcome was failure. (I.e., he is a counterexample to the common claim, "Entrepreneurs have to believe they are going to succeed, or else they won't be motivated to try.")