Wei_Dai comments on Re-understanding Robin Hanson’s “Pre-Rationality” - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
That doesn't seem to work in the specific example I gave. If the "optimistic" AI updates its prior to be whatever the pre-rationality condition says it should be, it will just get back the same prior O, because according to its pre-prior (denoted r in my example), its actual prior O is just fine; the reason it's not pre-rational is that in the counterfactual case where the B coin had landed tails, it would have been assigned the prior P.
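To make the fixed-point claim concrete, here's a minimal toy model (the joint probabilities, event names, and the two-outcome structure are all my own illustrative assumptions, not from the original post): the pre-prior r is a joint distribution over the B coin and some event E, the AI's actual prior O is r conditioned on B = heads, and the pre-rationality update just conditions r on the fact of which prior the AI was assigned, so it returns O unchanged.

```python
# Hypothetical toy model (all numbers and names assumed for illustration).
# Coin B decides which prior the AI gets: heads -> optimistic prior O,
# tails -> prior P. E is some object-level event.

# Pre-prior r: a joint distribution over (B outcome, E outcome).
r = {
    ('heads', 'E'):     0.45,   # optimistic branch: E likely
    ('heads', 'not-E'): 0.05,
    ('tails', 'E'):     0.10,   # pessimistic branch: E unlikely
    ('tails', 'not-E'): 0.40,
}

def condition_on_coin(joint, b):
    """Condition the pre-prior on the coin outcome b."""
    z = sum(p for (bb, _), p in joint.items() if bb == b)
    return {e: p / z for (bb, e), p in joint.items() if bb == b}

# The prior the AI was actually assigned (B landed heads).
O = condition_on_coin(r, 'heads')

# Pre-rationality update: condition r on "my prior is O", which in this
# model is the same information as "B landed heads".
updated = condition_on_coin(r, 'heads')

assert updated == O  # the update is a fixed point: it returns O itself
```

The counterfactual failure lives in the tails branch (`condition_on_coin(r, 'tails')`), which the update never touches, matching the point that the actual prior O looks fine to r.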
Or am I misinterpreting your proposed solution? (ETA: Can you make your solution formal?)