All of ben_levinstein's Comments + Replies

Yes, although with some subtlety.

Alice is just an expert on rain, not necessarily on the quality of her own epistemic state. (An easier example: suppose your initial credence in rain is .5. Alice's is either .6 or .4. Conditional on it being .6, you become certain it rains. Conditional on it being .4, you become certain it won't rain. You'd obviously use her credences rather than your own to bet, but you also take her to be massively underconfident.)
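
A minimal numerical sketch of that example, filling in the joint distribution in one hypothetical way (the .5/.5 prior over Alice's two possible reports isn't specified above):

```python
# A minimal sketch of the rain example. Hypothetical filling-in: you treat Alice's
# two possible credences (.6 and .4) as equally likely a priori, and her report as
# a perfect signal of whether it rains.

# Worlds: (does it rain?, Alice's credence in rain), with your prior over them.
worlds = {
    (True,  0.6): 0.5,   # whenever Alice's credence is .6, it rains
    (False, 0.4): 0.5,   # whenever it's .4, it doesn't
}

def posterior_rain(alice_credence):
    """Your credence in rain after learning Alice's credence."""
    relevant = {w: p for w, p in worlds.items() if w[1] == alice_credence}
    total = sum(relevant.values())
    return sum(p for (rain, _), p in relevant.items() if rain) / total

for alice in (0.6, 0.4):
    print(f"Alice reports {alice}: your posterior in rain = {posterior_rain(alice)}")
# Alice reports 0.6: your posterior in rain = 1.0
# Alice reports 0.4: your posterior in rain = 0.0
# You'd bet according to Alice's credences rather than your prior of .5, yet you
# take her to be massively underconfident: where she says .6, you say 1.
```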

Now, the slight wrinkle here is that the language of calibration we used makes this also seem more "object...

There are six total worlds.

All we get are Alice's credences in rain (given by an inequality), so the only propositions we might learn are those corresponding to the non-trivial ≤-propositions and the non-trivial ≥-propositions about her credence. Local trust only constrains your reaction to these propositions directly, so it won't require deference on the other 58 events. (Well, 56.)
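
A rough sketch of the counting, with made-up credence values for Alice (three possible values, crossed with rain/no rain; these are placeholders, not the values from the example above):

```python
from itertools import product

RAIN = (True, False)
ALICE = (0.3, 0.5, 0.7)                 # hypothetical: three possible credences
worlds = list(product(RAIN, ALICE))     # 6 worlds: rain/no-rain x Alice's credence

n_events = 2 ** len(worlds)             # every event is a set of worlds: 64 in total

# The only propositions we might learn are of the form [Alice's credence <= t]
# or [Alice's credence >= t].
learnable = set()
for t in [i / 100 for i in range(101)]:             # sweep thresholds over [0, 1]
    learnable.add(frozenset(w for w in worlds if w[1] <= t))
    learnable.add(frozenset(w for w in worlds if w[1] >= t))

print(f"worlds: {len(worlds)}, events: {n_events}, learnable: {len(learnable)}")
print(f"events local trust leaves unconstrained: {n_events - len(learnable)}")
# With these made-up values: 6 learnable propositions (two of them the trivial
# empty and full events), leaving 58 events that local trust doesn't touch directly.
```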

I don't think there really needs to be anything metaphysically like an expert or a proposition that someone is an expert. It's really about capturing the relationship between a principal's credence and some unknown probability function. Certain possible relationships between the two are interesting, and total trust seems to carve things at a particular joint: thinking the unknown function is more accurate, being willing to outsource decision-making to it, and deferring to it in an easy-to-capture way that's weaker than reflection.

Interesting post! As a technical matter, I think the notion you want is not reflection (or endorsement) but some version of Total Trust, where (leaving off some nuance) Agent 1 totally trusts Agent 2 if $P_1(A \mid P_2(A) \geq t) \geq t$ for all $A$ and all $t$. In general, that's going to be equivalent to Alice being willing to outsource all decision-making to Bob if she's certain Bob has the same basic preferences she does. (It's also equivalent to expecting Bob to be better on all absolutely continuous strictly proper scoring rules, and a few othe...
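
A toy check of that condition, with a hypothetical four-world model in which Bob's credences come from conditioning Alice's prior on a partition only he observes (a case where total trust, and indeed reflection, holds):

```python
from itertools import chain, combinations

# Total Trust: P1( A | [P2(A) >= t] ) >= t for every event A and threshold t.
worlds = [0, 1, 2, 3]
p1 = {w: 0.25 for w in worlds}                    # Alice's prior (hypothetical)
partition = [{0, 1}, {2, 3}]                      # what Bob (but not Alice) learns

def cell(w): return next(c for c in partition if w in c)

# Bob's credence function in world w: Alice's prior conditioned on w's cell.
bob = {w: {v: (p1[v] / sum(p1[u] for u in cell(w)) if v in cell(w) else 0.0)
           for v in worlds} for w in worlds}

def prob(dist, event): return sum(dist[w] for w in event)

def events(ws):                                    # all subsets of the world set
    return chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))

def totally_trusts(p1, bob):
    for A in map(set, events(worlds)):
        for t in {prob(bob[w], A) for w in worlds}:          # thresholds that matter
            E = {w for w in worlds if prob(bob[w], A) >= t}  # the event [P2(A) >= t]
            if prob(p1, E) > 0 and prob(p1, A & E) / prob(p1, E) < t - 1e-12:
                return False
    return True

print(totally_trusts(p1, bob))   # True: Alice totally trusts the better-informed Bob
```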

I think the basic approach to commitment for the open-minded agent is right. Roughly, you don't actually get to commit your future-self to things. Instead, you just do what you (in expectation) would have committed yourself to given some reconstructed prior. 

Just as a literature pointer: If I recall correctly, Chris Meacham's approach in "Binding and Its Consequences" is ultimately to estimate your initial credence function and perform the action from the plan with the highest EU according to that function. He doesn't talk about awareness growth, but open-mindedness seems to fit nicely within his framework (or at least the framework I recall him having).
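
A minimal sketch of that recipe (the states, plans, and utilities are made-up placeholders, not Meacham's own formalism): score whole plans by expected utility under an estimate of your initial credence function, then do what the best plan tells you to do now.

```python
from itertools import product

states = ["s1", "s2"]
reconstructed_prior = {"s1": 0.5, "s2": 0.5}   # your estimate of your initial credences

observations = {"s1": "o1", "s2": "o2"}        # what you'd observe in each state
actions = ["a", "b"]
utility = {("s1", "a"): 10, ("s1", "b"): 0,
           ("s2", "a"): 0,  ("s2", "b"): 4}

# A plan maps each possible observation to an action.
plans = [dict(zip(["o1", "o2"], choice)) for choice in product(actions, repeat=2)]

def eu(plan):
    """Expected utility of following `plan`, computed from the reconstructed prior."""
    return sum(reconstructed_prior[s] * utility[(s, plan[observations[s]])] for s in states)

best_plan = max(plans, key=eu)
current_observation = "o1"                     # what you have actually seen
print(best_plan, "->", best_plan[current_observation])
```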

Daniel Kokotajlo
This whole reconstructed-prior business seems fishy to me. Let's presuppose, as people seem to be doing, that there is a clean distinction between empirical evidence and 'logical' or 'a priori' evidence, such that we can scrub away our empirical evidence and reconstruct a prior, i.e. construct a probability distribution that we would have had if we somehow had zero empirical evidence but all the logical evidence we currently have. Doesn't the problem just recur? Literally this is what I was thinking when I wrote the original commitment races problem post; I was thinking that just 'going updateless', in the sense of acting according to the commitments that make sense from a reconstructed prior, didn't solve the whole problem, just the empirical-evidence flavor of the problem. Maybe that's still progress, of course... And then also there is the question of whether these two kinds of evidence really are that distinct anyway.
SMK
Thanks. Agreed. Yes, that's a great paper! (I think we might have had a footnote on cohesive decision theory in a draft of this post.) Specifically, I think the third version of cohesive decision theory which Meacham formulates (in footnote 34), and variants thereof, are especially relevant to dynamic choice with changing awareness. The general idea (as I see it) would be that you optimize relative to your ur-priors, and we may understand the ur-prior function as the prior you would or should have had if you had been more aware. So when you experience awareness growth, the ur-priors change (and thus the evaluation of a given plan will often change as well). (Meacham actually applies the ur-prior concept and ur-prior conditionalization to awareness growth in this paper.)
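
A minimal sketch of how that could look, with made-up states and a made-up re-specification of the ur-prior after awareness growth:

```python
def choose(ur_prior, utility):
    """Pick the action with highest expected utility under the given ur-prior."""
    actions = {a for (_, a) in utility}
    return max(actions, key=lambda a: sum(ur_prior[s] * utility[(s, a)] for s in ur_prior))

# Before awareness growth: only two states are conceived of.
ur_prior_old = {"s1": 0.5, "s2": 0.5}
utility_old = {("s1", "a"): 10, ("s1", "b"): 0, ("s2", "a"): 0, ("s2", "b"): 4}
print(choose(ur_prior_old, utility_old))       # "a"

# After awareness growth: a third state becomes conceivable, so the ur-prior (the
# prior you *should* have had, had you been aware of s3 all along) is re-specified,
# and the evaluation of a given plan can change with it.
ur_prior_new = {"s1": 0.2, "s2": 0.2, "s3": 0.6}
utility_new = {**utility_old, ("s3", "a"): -20, ("s3", "b"): 5}
print(choose(ur_prior_new, utility_new))       # "b"
```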