Less Wrong is a community blog devoted to refining the art of human rationality.
Nick Tarleton: I'm not sure I've entirely understood your suggestion correctly; I need to think about it more.
However, my initial thought is that it may require/assume logical omniscience.
I.e., what about updating based on "subjective guesses" about which worlds are consistent or inconsistent with the data? That is, worlds that are consistent as far as you can tell, given bounded computational resources. I'm not sure, but at first glance your model may not be able to say anything useful about agents that are not logically omniscient.
Also, could you clarify what you'd be using a multiset for? Do you mean "increase measure only by increasing the number of copies of a world in the multiset, with no other means allowed," or did you intend something else?
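To make the question concrete: if the multiset suggestion means that all measure is carried by the number of copies of each world, a toy sketch of that reading (the worlds, propositions, and helper names here are all hypothetical, just for illustration) might look like:

```python
from collections import Counter

# Toy model: a "world" is a tuple of propositions that hold in it.
# On the multiset reading, prior measure comes only from copy counts.
worlds = Counter({
    ("rain", "wet"): 2,    # two copies: twice the prior measure
    ("dry", "wet"): 1,
    ("dry", "notwet"): 1,
})

def update(worlds, datum):
    """Discard (all copies of) worlds inconsistent with the observed datum."""
    return Counter({w: n for w, n in worlds.items() if datum in w})

def prob(worlds, prop):
    """Probability of prop = fraction of surviving copies in which it holds."""
    total = sum(worlds.values())
    return sum(n for w, n in worlds.items() if prop in w) / total

posterior = update(worlds, "wet")   # observe: the ground is wet
print(prob(posterior, "rain"))      # 2 of the 3 surviving copies -> 2/3
```

Under this reading, conditionalization is just filtering the multiset, and every probability is a ratio of copy counts; the logical-omniscience worry above shows up in the `update` step, which assumes we can always tell whether a world is consistent with the datum.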
(Incidentally, I think I do prefer coherence/Dutch book/vulnerability-style constructions of epistemic probability, especially the ones that build up decision theory along the way, so that one ends up almost starting with utilities. Such constructions have very much of a "mathematical karma" flavor, as I've expressed elsewhere.)