First, don't stand up. ;)

Okay. So what I'm hoping to do in this mini-sequence is to introduce a basic argument for Bayesian Decision Theory and epistemic probabilities. I'm going to base it on Dutch book arguments and Dr. Omohundro's vulnerability-based argument, though with various details filled in, because, well... I myself had to sit and think those things through, so maybe working through them will be useful to others too. For that matter, actually writing this up will hopefully sort out my own thoughts on the subject.

Also, I want to try to generalize it a bit, removing the arguments' explicit dependency on resources. (Though I may still include resource-based arguments to illustrate some of the ideas.)

Anyways, the spirit of the idea is "don't be stupid." "Don't AUTOMATICALLY lose when there's a better alternative that doesn't risk you losing even worse."
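To make that slogan concrete, here's a minimal sketch of the classic Dutch book situation (the numbers and the little Python helper are my own illustration, not anything from the argument to come): an agent whose probabilities for an event and its complement sum to more than 1 will pay "fair" prices for a pair of bets that together lose money no matter what happens.

```python
# Illustrative sketch only: hypothetical numbers, not part of the sequence's argument.
# If an agent's probabilities for "rain" and "not rain" sum to more than 1,
# buying both tickets at prices the agent considers fair guarantees a loss.

def fair_price(probability, payout=1.0):
    # Price the agent regards as fair for a ticket paying `payout` if the event occurs.
    return probability * payout

p_rain, p_not_rain = 0.6, 0.6                   # incoherent: 0.6 + 0.6 = 1.2 > 1
total_cost = fair_price(p_rain) + fair_price(p_not_rain)

for outcome in ("rain", "not rain"):
    payout = 1.0                                # exactly one ticket pays off either way
    print(f"{outcome}: net = {payout - total_cost:+.2f}")   # -0.20 in both cases
```

The "don't automatically lose" principle just says: notice that pricing the tickets so the probabilities sum to 1 is an alternative that never does worse and avoids the guaranteed loss.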

More to the point, repeated application of that idea is going to let us build up the mathematics of decision theory. My plan right now is for each post in this sequence to be relatively short, discussing and deriving one principle (or a couple of related principles) of decision theory and Bayesian probability at a time from the above. The math should be pretty simple; the nastiest bit, I expect, will be a tiny bit of linear algebra in the form of one instance of matrix reduction down the line. Everything else ought to be rather straightforward, showing the mathematics of decision theory to be a matter of, as Mr. Smith would say, "inevitability."

Consider this whole sequence a work in progress. If anyone thinks any particular bits of it could be rewritten more clearly, please speak up! Or at least type up. (But of course, don't stand up. ;))

Cyan:

I'm really looking forward to this series.

Thanks! Hopefully it will actually be as promised and not an exercise in bad writing. (Any pain delivered by my posts should only be as intended, via things like corny titles. ;))

What do people here think of the argument from expected epistemic utility?

Updating your beliefs is not precisely reflectively consistent, only roughly so. You have to keep caring about counterfactuals, the circumstances reached by inverting any or every observation, so you can't just throw away the parts of the prior that the evidence opposed.

Sounds good. As a skeptic of Bayesian decision theory, I'll be interested to read it.