I think you are beginning to get the point. :) The key missing fact here is that the resulting math is highly constraining, to the point that if you actually follow it all the way, you will be acting in a manner isomorphic to a Bayesian utility-maximizer.
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists; their previous speaker was Eliezer Yudkowsky. An audio version and past talks are available here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks