Inner and outer alignment failures in current forecasting systems
Inner alignment problems (failures of not aligning forecasts closely enough with the objective "optimize your comparative predictive accuracy"):
- Discrete prizes for top positions, whether in money or in prestige, incentivize more extreme forecasts (see the sketch after this list)
- High fees and capped position sizes on PredictIt mean it's often not worth it to correct its markets when they are wrong
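As a quick illustration of the first point, here is a minimal Monte Carlo sketch, a toy model of my own rather than anything a real platform runs. It assumes numpy, and the number of forecasters, the number of questions, the belief noise, and the extremization exponent `d` are all arbitrary choices. One forecaster extremizes their probabilities while everyone else reports honestly, and a single prize goes to the best Brier score; under these assumptions the extremizer tends to take the top prize more often than the uniform 1/N baseline, even though their average Brier score is worse.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_forecasters, n_questions, d = 2000, 100, 40, 2.0  # illustrative parameters

def extremize(p, d):
    # Push probabilities toward 0 or 1: p^d / (p^d + (1 - p)^d).
    return p**d / (p**d + (1 - p)**d)

extremizer_wins = 0
honest_briers, extreme_briers = [], []
for _ in range(n_sims):
    true_p = rng.uniform(0.05, 0.95, n_questions)
    outcomes = (rng.random(n_questions) < true_p).astype(float)
    # Every forecaster holds a noisy but unbiased estimate of the true probabilities.
    beliefs = np.clip(true_p + rng.normal(0, 0.05, (n_forecasters, n_questions)), 0.01, 0.99)
    honest_scores = ((beliefs - outcomes) ** 2).mean(axis=1)  # Brier score, lower is better
    # Forecaster 0 reports extremized beliefs instead of honest ones.
    extreme_score = ((extremize(beliefs[0], d) - outcomes) ** 2).mean()
    # Winner-take-all prize: the extremizer only collects by beating every honest rival.
    extremizer_wins += extreme_score < honest_scores[1:].min()
    honest_briers.append(honest_scores[0])  # what forecaster 0 would have scored honestly
    extreme_briers.append(extreme_score)

print(f"P(extremizer takes the top prize) = {extremizer_wins / n_sims:.3f} "
      f"(uniform baseline = {1 / n_forecasters:.3f})")
print(f"average Brier: honest {np.mean(honest_briers):.3f}, "
      f"extremized {np.mean(extreme_briers):.3f}")
```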
Outer alignment failures (failures of aligning forecasts with "optimize your comparative predictive accuracy" too well, instead of with a better objective):
- Forecasters mislead other system participants with, e.g., fake polls
- Forecasters selectively pick easier questions, since taking on harder questions reduces their measured predictive accuracy (see the first sketch after this list)
- Forecasting systems run into tricky fixed-point problems (e.g., forecasts about elections affect elections, forecasts about Ebola spread affect the measures taken to contain Ebola; see the second sketch after this list)
- Forecasters are incentivized to copy the community, rather than add new information
- Tricky problems with self-fulfilling prophecies (if Warren Buffett predicts that a stock will go up, it probably will)
- Forecasting systems are incentivized to make a prediction and then make it happen (e.g., predicting that a company will become more valuable because one intends to buy it and dispose of its non-productive assets, or predicting a terrorist attack and then carrying it out)
- Forecasting systems are incentivized to make the world more uniform so that it is easier to predict (e.g., pushing the population to adopt fixed surnames and identification numbers instead of local nicknames, so that it's easier to conscript and tax them)
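On question selection: for a perfectly calibrated forecast p, the expected Brier score is p·(1−p), so a forecaster who only answers near-certain questions will look far more accurate than an equally calibrated one who takes on genuinely uncertain questions. A short sketch with made-up question portfolios:

```python
# Illustrative numbers only: two equally calibrated forecasters, different question choices.
def expected_brier(p: float) -> float:
    # E[(p - outcome)^2] for outcome ~ Bernoulli(p), i.e., a perfectly calibrated forecast.
    return p * (1 - p)

def avg_expected_brier(ps):
    return sum(expected_brier(p) for p in ps) / len(ps)

easy = [0.95, 0.97, 0.99]  # near-certain questions
hard = [0.55, 0.60, 0.65]  # genuinely uncertain questions

print(f"avg expected Brier, easy-only portfolio: {avg_expected_brier(easy):.3f}")  # ~0.03
print(f"avg expected Brier, hard questions:      {avg_expected_brier(hard):.3f}")  # ~0.24
# Identical calibration, wildly different leaderboard scores, purely from question choice.
```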
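On the fixed-point problem: once the published forecast changes the behavior being forecast, the only forecast that remains correct after people act on it is a fixed point of the response. A toy sketch, where the response curve and its parameters are arbitrary assumptions rather than anything estimated from a real system:

```python
# Toy model: the true probability of a large outbreak depends on the published forecast,
# because alarming forecasts trigger containment. `baseline` and `mitigation` are made up.
def outbreak_prob(published_forecast: float) -> float:
    baseline, mitigation = 0.6, 0.5
    return baseline * (1 - mitigation * published_forecast)

naive = outbreak_prob(0.0)  # a forecast that ignores its own effect on the world: 0.60

# Self-consistent forecast: iterate f <- outbreak_prob(f) until it stabilizes.
# This converges here because the response is a contraction (slope 0.3 < 1).
f = 0.5
for _ in range(100):
    f = outbreak_prob(f)

print(f"naive forecast: {naive:.2f}")        # 0.60
print(f"self-consistent forecast: {f:.2f}")  # ~0.46
# Publishing 0.60 makes 0.60 wrong; only ~0.46 stays accurate once people act on it.
```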