prase comments on Is Rationality Maximization of Expected Value? - Less Wrong

Post author: AnlamK 22 September 2010 11:16PM (-23 points)




Comment author: mattnewport 23 September 2010 07:09:48AM 1 point

> Although I probably agree with your point, the chosen formulation is weird. The uncertainty is hidden in the probability; "uncertain probabilities" is sort of a pleonasm.

I did spend some time thinking about exactly what this means after writing it. It seems to me there is a meaningful sense in which probabilities can be more or less uncertain, and I haven't seen it well dealt with in discussions of probability here. If I have a coin which I have run various tests on and convinced myself is fair, then I am fairly certain the probability of it coming up heads is 0.5. I think the probability of the Republicans gaining control of Congress in November is 0.7, but I am less certain about this probability. I think this uncertainty reflects some meaningful property of my state of knowledge.

I tentatively think that this sense of 'certainty' reflects something about the level of confidence I have in the models of the world from which these probabilities derive. It possibly also reflects my sense of what fraction of all the non-negligibly relevant information that exists I have actually used to reach my estimate. Another possible interpretation of this sense of certainty is a probability estimate for how likely I am to encounter information in the future that would significantly change my current probability estimate. A probability I am certain about is one I expect to be robust to the kinds of sensory input I think I might encounter in the future.
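To make this concrete, here is a minimal sketch (the Beta-distribution model and all the numbers are illustrative assumptions, not anything from the thread): two estimates can have confident-looking point values while differing enormously in how far the same new evidence moves them.

```python
# A minimal sketch: two estimates with similar-looking point values but
# very different robustness to new evidence, modelled as Beta
# distributions over an unknown frequency. All numbers are illustrative.

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution over an unknown frequency."""
    return a / (a + b)

def update(a, b, successes, failures):
    """Conjugate Bayesian update of a Beta prior on observed outcomes."""
    return a + successes, b + failures

# Well-tested coin: P(heads) = 0.5, backed by many pseudo-observations.
coin = (500, 500)
# Election forecast: P = 0.7, backed by a thin model.
election = (7, 3)

# The same surprising evidence (10 successes, 0 failures) barely moves
# the coin estimate but moves the election estimate substantially.
print(beta_mean(*update(*coin, 10, 0)))      # ~0.505
print(beta_mean(*update(*election, 10, 0)))  # 0.85
```

Under this toy model, the concentration (a + b) plays the role of the 'certainty' of the probability: the coin's estimate is backed by so many pseudo-observations that ten surprising flips barely dent it.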

This sense of how certain or uncertain a probability is may have no place in a perfect Bayesian reasoner, but I think it is meaningful information to consider as a human making decisions under uncertainty. In the context of the original comment, low probabilities are associated with rare events, and as such are the kinds of thing we might expect to have a very incomplete model of, or a very sparse sampling of relevant data for. They are probabilities which we might expect to easily double or halve in response to a relatively small amount of new sensory data.

Perhaps it's as simple as how much you update when someone offers to make a bet with you. If you suspect your model is incomplete, or that you lack much of the relevant data, then someone offering a bet will make you suspect they know something you don't, and so you will update your estimate significantly.
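The bet heuristic can be sketched the same way, assuming (my framing, not anything stated in the thread) that a bet offer is ordinary Bayesian evidence whose strength depends on how complete you believe your model to be:

```python
# A toy version of the bet heuristic: treat "someone offers me a bet
# against my position" as evidence, with strength depending on how
# complete I think my model is. The likelihood ratios are illustrative.

def update_on_bet_offer(p, lr_wrong):
    """Odds-form Bayes update on a bet offer against my position.
    lr_wrong = P(offer | I'm wrong) / P(offer | I'm right); values
    above 1 mean the offer is evidence that I'm wrong."""
    odds = (p / (1 - p)) / lr_wrong
    return odds / (1 + odds)

# Confident model: an offer is only weak evidence of private information.
print(update_on_bet_offer(0.7, lr_wrong=1.2))  # ~0.66
# Sparse model: an offer strongly suggests they know something I don't.
print(update_on_bet_offer(0.7, lr_wrong=3.0))  # ~0.44
```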

Comment author: prase 23 September 2010 07:35:39AM 0 points

> This sense of how certain or uncertain a probability is may have no place in a perfect Bayesian reasoner, but I think it is meaningful information to consider as a human making decisions under uncertainty.

I don't think the key issue is the imperfect Bayesianism of humans. I suppose that the certainty of a probability under discussion has a lot to do with its dependence on priors: the more sensitive the probability is to changes in priors we find arbitrary, the less certain it feels. Priors themselves feel the most uncertain, while probabilities obtained from evidence-based calculations, especially quasi-frequentist probabilities such as P(heads on next flip), depend on many priors, and a change in any single prior doesn't move them far. Perfect Bayesians may not have the feeling, but they still have priors.
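A minimal sketch of this sensitivity, with the priors and data invented for illustration: the same posterior computed under several priors we might find arbitrary barely moves when evidence dominates, and swings widely when it doesn't.

```python
# Sketch of the sensitivity-to-priors idea: compute one posterior under
# several "arbitrary" priors and see how far it moves. All numbers are
# illustrative assumptions.

def posterior_mean(prior_a, prior_b, heads, tails):
    """Posterior mean for P(heads) with a Beta prior and coin-flip data."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

arbitrary_priors = [(0.5, 0.5), (1, 1), (5, 5), (10, 2)]

# Quasi-frequentist case: 1000 flips of evidence.
print([round(posterior_mean(a, b, 503, 497), 3) for a, b in arbitrary_priors])
# -> roughly [0.503, 0.503, 0.503, 0.507]: insensitive, feels "certain".

# Prior-dominated case: only 4 observations.
print([round(posterior_mean(a, b, 3, 1), 3) for a, b in arbitrary_priors])
# -> roughly [0.7, 0.667, 0.571, 0.812]: sensitive, feels "uncertain".
```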

Comment author: Will_Sawin 25 September 2010 12:14:05AM 0 points

Sensitivity to priors is the same as sensitivity to new evidence. And when we're sensitive to new evidence, our estimates are likely to change, which is another reason they're uncertain.

The reason this phenomenon occurs is that we are uncertain about some fundamental frequency (or about a model more complex than a simple frequency model), and P(heads | frequency of heads is x) = x.
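A worked toy example of this identity (the grid of frequencies and the weights are my own choices): the probability of heads is the expectation of the unknown frequency, and uncertainty about that frequency is exactly what lets new evidence move the probability.

```python
# If we're uncertain about the fundamental frequency x, then
# P(heads) = E[x], since P(heads | x) = x. The hypotheses and weights
# below are illustrative assumptions.

frequencies = [0.2, 0.5, 0.8]   # hypothesised values of x
weights = [0.25, 0.50, 0.25]    # credence in each hypothesis

# P(heads) = sum over x of P(heads | x) * P(x) = E[x].
p_heads = sum(x * w for x, w in zip(frequencies, weights))
print(p_heads)  # 0.5

# Observing one head reweights the hypotheses (Bayes' theorem), so the
# marginal probability of heads moves on new evidence.
posterior = [x * w / p_heads for x, w in zip(frequencies, weights)]
print(sum(x * w for x, w in zip(frequencies, posterior)))  # 0.59
```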

Comment author: mattnewport 23 September 2010 05:50:53PM 0 points

I think there's something to what you say, but a perfect Bayesian (or an imperfect human, for that matter) is conditional probabilities all the way down. When we talk about our priors regarding a particular question, they are really just the output of another chain of reasoning. The boundaries we draw to make discussion feasible are somewhat arbitrary (though they would probably reflect specific mathematical properties of the underlying network for a perfect Bayesian reasoner).

Comment author: prase 24 September 2010 10:43:37AM 0 points

Do you think the chain of reasoning is infinite? For actual humans there is certainly some boundary below which a prior no longer feels like the output of further computation, although such beliefs could have been influenced by earlier observations, either subconsciously, or consciously with the fact forgotten later. Especially in the former case, I think the reasoning leading to such beliefs is very likely to be flawed, so it seems fair to treat them as genuine priors, even if, strictly speaking, they were physically influenced by evidence.

A perfect Bayesian, on the other hand, should be immune to flawed reasoning, but it still has to be finite, so I suppose it must have some genuine priors which are part of its immutable hardware. I imagine it by analogy with formal systems, which have a finite set of axioms (or an infinite set defined by a finite set of conditions), a finite set of derivation rules, and a set of theorems consisting of the axioms and derived statements. For a Bayesian, the axioms are replaced by several statements with associated priors, Bayes' theorem is among the derivation rules, and instead of a set of theorems, it has a set of encountered statements with attached probabilities (see the sketch after the list below). Possible issues are:

  • If such a formal construction is possible, there should be a lot of literature about it, and I am unaware of any (though I didn't search very hard), and
  • I am not sure whether such an approach is obsolete in light of discussions about updateless decision theories and similar ideas.
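Here is a minimal sketch of the construction described above (a toy rendering under my own assumptions, not an implementation from any literature): fixed priors play the role of axioms, Bayes' theorem is the derivation rule, and derived statements carry attached probabilities.

```python
# Toy version of the "formal system" analogy: axioms become statements
# with fixed prior probabilities, Bayes' theorem is the derivation rule,
# and the "set of theorems" becomes derived statements with attached
# probabilities. Everything here is illustrative.

# Immutable "axioms": genuine priors, part of the reasoner's hardware.
priors = {"H": 0.3}  # P(hypothesis H)

# "Derivation rules": likelihoods relating evidence to hypotheses.
likelihoods = {("E", "H"): 0.9,      # P(E | H)
               ("E", "not H"): 0.2}  # P(E | not H)

def derive(hypothesis, evidence):
    """Apply Bayes' theorem once: derive P(hypothesis | evidence)."""
    p_h = priors[hypothesis]
    p_e = (likelihoods[(evidence, hypothesis)] * p_h
           + likelihoods[(evidence, "not " + hypothesis)] * (1 - p_h))
    return likelihoods[(evidence, hypothesis)] * p_h / p_e

# A derived "theorem": a new statement with an attached probability.
print(derive("H", "E"))  # P(H | E) ~ 0.66
```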
Comment author: mattnewport 24 September 2010 05:53:53PM 0 points

> Do you think the chain of reasoning is infinite?

Not infinite, but for humans all priors (or their non-strictly-Bayesian equivalents, at least) ultimately derive either from sensory input over the individual's lifetime or from millions of years of evolution baking some 'hard-coded' priors into the human brain.

When dealing with any particular question, you essentially draw a somewhat arbitrary line, lump millions of years of accumulated sensory input and evolutionary 'learning' together with a lifetime of actual learning, assign a single real number to it, and call it a 'prior'; but this is just a way of making calculation tractable.