jsteinhardt comments on Beyond Bayesians and Frequentists - Less Wrong

36 Post author: jsteinhardt 31 October 2012 07:03AM


Comment author: Eliezer_Yudkowsky 31 October 2012 07:57:19AM 11 points

I haven't read this in detail but one very quick comment: Cox's Theorem is a representation theorem showing that coherent belief states yield classical probabilities; it's not the same as the Dutch book theorem at all. E.g. if you want to represent probabilities using log odds, they can certainly relate to each other coherently (since they're just transforms of classical probabilities), but Cox's Theorem will give you the classical probabilities right back out again. Jaynes cites a special case of Cox in PT:TLOS which is constructive at the price of assuming probabilities are twice differentiable, and I actually tried it with log odds and got the classical probabilities right back out. I remember being pretty impressed with that, and had this enlightenment experience wherein I came to see probability theory as a kind of relational structure in uncertainty.
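The log-odds point can be checked numerically: updating in log-odds space (where Bayes' rule is just adding the log likelihood ratio) and transforming back recovers exactly the classical posterior. A minimal sketch; the function names and numbers are my own, not from the comment:

```python
import math

def bayes_update(prior, likelihood_ratio):
    """Posterior via Bayes' rule in probability space."""
    odds = prior / (1 - prior)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

def log_odds(p):
    return math.log(p / (1 - p))

def from_log_odds(l):
    return 1 / (1 + math.exp(-l))

prior, lr = 0.2, 3.0
p_direct = bayes_update(prior, lr)
# In log-odds space the update is just addition; mapping back
# gives the same classical posterior (3/7 for these numbers).
p_via_logodds = from_log_odds(log_odds(prior) + math.log(lr))
```

Any coherent reparametrization of probability behaves this way, which is the sense in which Cox's Theorem "gives you the classical probabilities right back out."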

I also quickly note that the worst-case scenario often amounts to making unfair assumptions about "randomization" wherein adversaries can always read the code of deterministic agents but non-deterministic agents have access to hidden sources of random numbers. E.g. http://lesswrong.com/lw/vq/the_weighted_majority_algorithm/
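The asymmetry described here can be made concrete with a matching-pennies-style toy (my own illustration, not code from the linked post): an adversary who can simulate a deterministic agent's code predicts its every move, while an agent with private random bits can only be guessed at chance.

```python
import random

def deterministic_agent(history):
    # A simple deterministic rule: open with 0, then flip our last move.
    return 1 - history[-1] if history else 0

def adversary_vs_deterministic(rounds=1000):
    """The adversary runs the agent's own code, so it predicts perfectly."""
    history, wins = [], 0
    for _ in range(rounds):
        move = deterministic_agent(history)
        prediction = deterministic_agent(history)  # perfect simulation
        wins += (prediction == move)
        history.append(move)
    return wins / rounds

def adversary_vs_randomized(rounds=1000):
    """The agent flips a private coin the adversary cannot see."""
    rng = random.Random(0)  # stands in for a hidden randomness source
    wins = 0
    for _ in range(rounds):
        move = rng.randrange(2)
        prediction = 0  # best the adversary can do is a fixed guess
        wins += (prediction == move)
    return wins / rounds
```

Against the deterministic rule the adversary's success rate is 1.0; against the coin-flipper it hovers around 0.5, which is the "unfair assumption" doing its work.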

Comment author: jsteinhardt 31 October 2012 08:21:55AM *  3 points

Good catch on Cox's theorem; that is now fixed. Do you know if the Dutch book argument corresponds to a named theorem?

I'm not sure exactly how your comment about deterministic vs. non-deterministic agents is meant to apply to the arguments I've advanced here (although I suppose you will clarify after you're done reading).

Separately, I disagree that the assumptions are unfair; I think of it as a particularly crisp abstraction of the actual situation you care about. As long as pseudo-random generators exist and you can hide your source of randomness, you can guarantee that no adversary can predict your random bits; if you could usefully make the same guarantee about other aspects of your actions without recourse to a PRG then I would happily incorporate that into the set of assumptions, but in practice it is easiest to just work in terms of a private source of randomness. Besides, I think that the use of this formalism has been amply validated by its intellectual fruits (see the cited network flow application as one example, or the Arora, Hazan, and Kale reference).

Comment author: Jayson_Virissimo 31 October 2012 08:34:43AM 6 points

Good catch on Cox's theorem; that is now fixed. Do you know if the Dutch book argument corresponds to a named theorem?

There is a whole class of Dutch book arguments, so I'm not sure which one you mean by the Dutch book argument.

In any case, Susan Vineberg's formulation of the Dutch Book Theorem goes like this:

Given a set of betting quotients that fails to satisfy the probability axioms, there is a set of bets with those quotients that guarantees a net loss to one side.
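A toy instance of this theorem (with made-up numbers): if an agent's betting quotients on A and not-A sum to more than 1, a bookie can sell it a bet on each event and collect a guaranteed profit.

```python
# Betting quotient q: the agent pays q * S for a bet returning S if the
# event occurs (and 0 otherwise). These quotients violate additivity.
q_A, q_not_A, stake = 0.6, 0.6, 100  # q_A + q_not_A = 1.2 > 1

cost = (q_A + q_not_A) * stake  # the agent buys a bet on A and on not-A
payout = stake                  # exactly one of the two bets pays off
net = payout - cost             # -20: a sure loss however A turns out
```

Since the payout is the same in both outcomes, one line of arithmetic covers the whole book; quotients summing to less than 1 would instead let the agent extract a sure gain from the other side.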

Comment author: jsteinhardt 31 October 2012 04:33:42PM 0 points

Yes, that is the one I had in mind. Thanks!

Comment author: Eliezer_Yudkowsky 01 November 2012 04:38:49AM 4 points

Then you might think you could have inconsistent betting prices that would harm the person you bet with, but not you, which sounds fine.

Rather: "If your betting prices don't obey the laws of probability theory, then you will either accept combinations of bets that are sure losses, or pass up combinations of bets that are sure gains."