A fun game you can play on LessWrong is to stop just as you are about to click "comment" and make a prediction for how much karma your comment will receive within the next week. This gives you quick feedback on how well your karma predictors are working and lets you know if something is broken. A simpler version is to pick from three distinct outcomes: positive karma, zero karma, or negative karma.
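If you want something stickier than memory, below is a minimal sketch of a prediction log for the three-outcome version. It's purely illustrative: the file name, the function names, and the CSV format are my own choices rather than an existing tool, and a spreadsheet would work just as well.

```python
import csv
from pathlib import Path

# Illustrative file name; any local path works.
LOG = Path("karma_predictions.csv")

def record_prediction(comment_url: str, predicted: str) -> None:
    """Log a guess ('positive', 'zero', or 'negative') just before posting."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["comment_url", "predicted", "actual_karma"])
        writer.writerow([comment_url, predicted, ""])

def record_outcome(comment_url: str, actual_karma: int) -> None:
    """A week later, fill in the karma the comment actually received."""
    rows = list(csv.reader(LOG.read_text().splitlines()))
    for row in rows[1:]:
        if row[0] == comment_url:
            row[2] = str(actual_karma)
    with LOG.open("w", newline="") as f:
        csv.writer(f).writerows(rows)

def bucket(karma: int) -> str:
    """Map a karma score to one of the three outcomes."""
    return "positive" if karma > 0 else ("zero" if karma == 0 else "negative")

def hit_rate() -> float:
    """Fraction of scored predictions that landed in the right bucket."""
    scored = [r for r in list(csv.reader(LOG.read_text().splitlines()))[1:] if r[2] != ""]
    if not scored:
        return 0.0
    hits = sum(1 for _, predicted, actual in scored if predicted == bucket(int(actual)))
    return hits / len(scored)
```

Call record_prediction right before you post, record_outcome a week later, and hit_rate whenever you want a rough read on how your karma predictors are doing.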
What other predictors are this easy to test? Likely candidates match one or more of the following criteria:
- Something we do on a regular (probably daily) basis
- An action that has a clear starting point
- Produces quick, quantifiable feedback (e.g. karma, which is a basic number)
- An action that is extremely malleable so we can take our feedback, make quick adjustments, and run through the whole process again
- An ulterior goal beyond merely testing our predictors, so we don't get bored (e.g. commenting at LessWrong, which offers communication and learning as ulterior goals)
- Something with a "sticky" history so we can get a clear view of our progress over time
I already have a simple, accurate method for predicting the karma values of my comments: agree with EY and get positive karma, or disagree with EY and get negative karma. More generally, support the status quo and get positive karma, or reject the status quo and get negative karma.
You'd be surprised at how much positive karma you can get from a well-phrased criticism of Eliezer, even one that is relatively content-free. "I'll believe you know what you're talking about when you actually build an AI" is generally well-received.