Comment author: [deleted] 22 July 2009 09:55:27PM *  4 points

(If this post is too long, read only the last paragraph.)

"Evidence that regards statements": I suppose the "regarding statements" bit was redundant. Anyway, let me try to give some examples.

First, let me postulate a guy named Delta. Delta is an extremely rational robot who, given the evidence, always comes up with the best possible conclusion.

Andy the Apathetic is presented with a court case. Before he ever looks at the case, he decides that the probability the defendant is guilty is 50%. In fact, he never looks at the case; he tosses it aside and gives that 50% as his final judgement. Andy is rational-neutral, as he discarded evidence regardless of its direction; his probability is useless, but if I told Delta how Andy works and Andy's final judgement, Delta would agree with it.

Barney the Biased is presented with the same court case. Before he ever looks at the case, he decides that the probability that the defendant is guilty is 50%. Looking through the evidence, he decides to discard everything suggesting that the defendant is innocent; he concludes that the defendant has a 99.99% chance of being guilty and gives that as his final judgement. Barney is not rational-neutral, as he discarded evidence with regard to its direction; his probability is almost useless (but not as useless as Andy's), and if I told Delta how Barney works and Barney's final judgement, Delta might give a probability of only 45%.

Finally, Charlie the Careful is presented with the same court case. Before he ever looks at the case, he decides that the probability that the defendant is guilty is 50%. Looking through the evidence, he takes absolutely everything into account, running the numbers and keeping Bayes' law between his eyes at all times; eventually, after running a complete analysis, he decides that the probability that the defendant is guilty is 23.14159265%. Charlie is rational-neutral, as he discarded evidence regardless of its direction (in fact, he discarded no evidence); if I told Delta how Charlie works and Charlie's final judgement, Delta would agree with it.
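To make "running the numbers" concrete, here's a minimal sketch of the odds-form updates Charlie might run; the likelihood ratios are made up for illustration:

```python
# Odds-form Bayes: posterior odds = prior odds × product of likelihood ratios.
# The ratios below are hypothetical numbers, one per piece of evidence.

prior_odds = 0.5 / 0.5                      # Charlie's 50% prior
likelihood_ratios = [3.0, 0.25, 2.0, 0.1]   # P(evidence | guilty) / P(evidence | innocent)

odds = prior_odds
for lr in likelihood_ratios:
    odds *= lr          # every item counts, whichever direction it points

posterior = odds / (1 + odds)
print(posterior)        # 3 * 0.25 * 2 * 0.1 = 0.15, so 0.15 / 1.15 ≈ 0.1304
```

Ratios above 1 push toward guilt, ratios below 1 push toward innocence; Barney's mistake, in this notation, is skipping every ratio below 1.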

So, here's another definition of rational neutrality that occurred to me while writing this: you are rational-neutral if, given only your source code, it's impossible to come up with a function that takes one of your probability estimates and returns a better probability estimate.
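This definition can be checked by simulation. Here's a rough sketch (all numbers and the evidence model are invented for illustration): Charlie's output is the true posterior, so no function of it does better, while Delta, knowing Barney's source code, can build a correction function that beats Barney's raw output.

```python
import random
from collections import defaultdict

random.seed(0)

LR = 0.7 / 0.3  # likelihood ratio of a single guilt-pointing evidence item

def charlie(ratios):
    """Uses every likelihood ratio: his output is the true posterior."""
    odds = 1.0                      # 50% prior
    for r in ratios:
        odds *= r
    return odds / (1 + odds)

def barney(ratios):
    """Discards every item that points toward innocence."""
    odds = 1.0
    for r in ratios:
        if r > 1:
            odds *= r
    return odds / (1 + odds)

def andy(ratios):
    """Never looks at the case."""
    return 0.5

def simulate(n_cases=20000, n_items=5):
    """Each defendant is guilty with probability 0.5; each evidence item
    points toward guilt more often when the defendant really is guilty."""
    cases = []
    for _ in range(n_cases):
        guilty = random.random() < 0.5
        ratios = [LR if random.random() < (0.7 if guilty else 0.3) else 1 / LR
                  for _ in range(n_items)]
        cases.append((guilty, ratios))
    return cases

def brier(estimator, cases):
    """Mean squared error of the stated probabilities (lower is better)."""
    return sum((estimator(r) - g) ** 2 for g, r in cases) / len(cases)

train_cases, eval_cases = simulate(), simulate()

# Delta's move: from Barney's source code (i.e., by simulating his procedure),
# map each probability Barney states to the actual frequency of guilt among
# cases where he states it.
buckets = defaultdict(list)
for g, r in train_cases:
    buckets[round(barney(r), 6)].append(g)
correction = {p: sum(gs) / len(gs) for p, gs in buckets.items()}

def corrected_barney(ratios):
    p = barney(ratios)
    return correction.get(round(p, 6), p)

brier_andy = brier(andy, eval_cases)            # 0.25 exactly
brier_barney = brier(barney, eval_cases)
brier_charlie = brier(charlie, eval_cases)
brier_corrected = brier(corrected_barney, eval_cases)
```

In this toy world the correction function improves Barney substantially, so Barney is not rational-neutral; Charlie already outputs the true posterior, so no function of his estimate can beat it, matching the definition.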

In response to comment by [deleted] on Deciding on our rationality focus
Comment author: ilyas 22 July 2009 11:55:12PM 1 point

It might be useful to revise this concept to account for computational resources (see AI work on 'limited rationality', e.g. Russell and Wefald's "Do the Right Thing" book).