Johnicholas comments on What a reduction of "could" could look like - Less Wrong

53 Post author: cousin_it 12 August 2010 05:41PM




Comment author: Vladimir_Nesov 12 August 2010 06:51:15PM 0 points

The following is a generalization that takes logical uncertainty into account (the inability to prove difficult statements, which can in particular lead to the inability to prove any statements of the necessary kind).

Instead of proving statements of the form "agent()==A implies world()==U", let's prove statements of the form "(X and agent()==A) implies world()==U", where X is an unsubstantiated assumption. Statements of the new form are obviously provable. Now, instead of looking for the statement with the highest U, for each action A we collect the set of pairs (X,U) from the proved statements. We can then weight each utility U by the inverse of the length of its assumption X and estimate the "expected utility" of performing action A by summing over all explored X, even if we are unable to complete a single proof of a statement of the original form. For the same reason as in the post, this "expected utility" calculation is valid for the action that we actually choose.
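A minimal sketch of this scoring scheme (all names here are hypothetical, and the proved-statement set is hard-coded rather than produced by an actual proof search):

```python
# Each entry (A, X, U) records a proved statement of the form
# "(X and agent()==A) implies world()==U". Assumptions X are plain
# strings here; their length stands in for proof-theoretic complexity.

def expected_utility(action, proved):
    """Sum the utilities proved for `action`, each weighted by the
    inverse length of its assumption X (an ad hoc probability proxy)."""
    return sum(u / len(x) for a, x, u in proved if a == action)

def choose(actions, proved):
    """Pick the action with the highest weighted 'expected utility'."""
    return max(actions, key=lambda a: expected_utility(a, proved))

# Hypothetical proof-search output: shorter assumptions weigh more.
proved = [
    ("A1", "P", 10),          # (P and agent()==A1) implies world()==10
    ("A1", "not P", 0),
    ("A2", "Q and R", 100),
]
```

Here A2 wins: 100 weighted by 1/len("Q and R") exceeds A1's total of 10.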

(This looks like a potentially dangerous algorithm.)

Edit: As cousin_it points out, it doesn't quite work as intended, because X could just be "false", which would enable arbitrarily high U and collapse the "expected utility" calculation.
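A toy illustration of the collapse, continuing the same hypothetical encoding of assumptions as strings weighted by inverse length:

```python
# With X = "false", "(X and agent()==A) implies world()==U" is vacuously
# provable for EVERY U, so the fixed nonzero weight 1/len("false") lets
# the weighted utility U / len("false") grow without bound.
weight = 1 / len("false")            # 1/5, fixed and nonzero
utilities = [10, 10**6, 10**12]      # all "provable" under X = "false"
weighted = [U * weight for U in utilities]
```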

Comment author: Johnicholas 13 August 2010 03:48:39AM 1 point

If you sum over all explored X (and implicitly treat unexplored X as if they have zero probability), then you can be arbitrarily far from optimal Bayesian behavior.

See Fitelson's "Bayesians sometimes cannot ignore even very implausible theories."

http://fitelson.org/hiti.pdf

Comment author: cousin_it 13 August 2010 05:37:25AM 0 points

Thanks for that link. Good stuff.