Johnicholas comments on What a reduction of "could" could look like - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The following is a generalization that takes logical uncertainty into account (the inability to prove difficult statements, which can in particular lead to an inability to prove any statements of the required kind).
Instead of proving statements of the form "agent()==A implies world()==U", let's prove statements of the form "(X and agent()==A) implies world()==U", where X is an unsubstantiated assumption. Statements of this new form are obviously provable. Now, instead of looking for the statement with the highest U, for each A consider the set of pairs (X,U) from proved statements. We can then weight the corresponding utilities by the inverse of the length of X, and estimate the "expected utility" of performing action A by summing over all explored X, even if we are unable to complete a single proof of a statement of the original form. For the same reason as in the post, this "expected utility" calculation is valid for the action that we choose.
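A minimal sketch of this scoring rule, assuming the proof search has already produced, for each action A, a set of pairs (X, U) such that "(X and agent()==A) implies world()==U" is proved. The function names, the representation of assumptions as strings, and the exact weighting (1/len(X), one reading of "inverse of length of X") are all illustrative assumptions, not specified in the comment:

```python
def expected_utility(proved_pairs):
    """Weight each proved utility U by 1/len(X) and normalize.

    proved_pairs: list of (assumption_text, utility) for one action,
    where each pair stands for a proved statement
    "(X and agent()==A) implies world()==U".
    """
    total_weight = 0.0
    weighted_sum = 0.0
    for assumption, utility in proved_pairs:
        w = 1.0 / max(len(assumption), 1)  # "inverse of length of X"
        total_weight += w
        weighted_sum += w * utility
    # An action with no explored proofs gets the worst possible score.
    return weighted_sum / total_weight if total_weight else float("-inf")

def choose_action(proofs_by_action):
    """Pick the action whose explored (X, U) pairs score highest."""
    return max(proofs_by_action,
               key=lambda a: expected_utility(proofs_by_action[a]))

# Toy run, also showing cousin_it's objection: taking X to be "false"
# lets us prove the statement with an arbitrarily high U, so that
# action dominates the comparison.
proofs = {
    "A1": [("X1", 5.0), ("X1 and X2", 10.0)],
    "A2": [("false", 100.0)],  # from "false", anything follows
}
```

Running `choose_action(proofs)` selects "A2" purely on the strength of the vacuous assumption, which is exactly the collapse described in the edit below.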
(This looks like a potentially dangerous algorithm.)
Edit: As cousin_it points out, it doesn't quite work as intended, because X could just be "false", which would enable arbitrarily high U and collapse the "expected utility" calculation.
If you sum over all explored X (and implicitly treat unexplored X as if they have zero probability), then you can be arbitrarily far from optimal Bayesian behavior.
See Fitelson's "Bayesians sometimes cannot ignore even very implausible theories."
http://fitelson.org/hiti.pdf
Thanks for that link. Good stuff.