StRev comments on A summary of Savage's foundations for probability and utility. - Less Wrong
Well, it's clearly pathological in some sense, but the space of actions to be (pre)ordered is astronomically big and reflective endorsement is slow, so you can't usefully error-check the space that way. cf. Lovecraft's comment about "the inability of the human mind to correlate all its contents".
I don't think it will do to simply assume that an actually instantiated agent will have a transitive set of expressed preferences. It's a bit like assuming your code is bug-free.
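To make the "can't usefully error-check the space" point concrete, here's a minimal sketch of a brute-force transitivity check. Everything here is hypothetical: `prefers(a, b)` stands in for querying an agent's expressed preference, and the point is just that verifying transitivity takes on the order of n³ queries, which is hopeless over an astronomically large action space.

```python
from itertools import permutations

def is_transitive(actions, prefers):
    """Return True iff `prefers` is transitive on `actions`.

    This needs O(n^3) preference queries, which is why exhaustively
    error-checking a huge action space this way is a non-starter.
    """
    for a, b, c in permutations(actions, 3):
        if prefers(a, b) and prefers(b, c) and not prefers(a, c):
            return False
    return True

# Toy example: a cyclic (hence intransitive) preference among three actions.
cycle = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
print(is_transitive(["rock", "paper", "scissors"],
                    lambda a, b: (a, b) in cycle))  # prints False
```

Even this toy check only works because the whole relation is in hand; an instantiated agent's preferences can only be sampled, so a cycle can lurk undetected.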