Kenny2

Professor of Logic and Philosophy of Science, UC Irvine

Kenny2

The example is really helpful for giving me a concrete understanding of what it looks like to satisfy Trust without Reflection, and why that goes along with deferring to someone else for decisions. But I don't see what this example of Alice has to do with locality: the only relevant propositions seem to be whether it rains tomorrow and what Alice's credences are, and there don't seem to be any propositions on which we don't defer to her.
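
(For concreteness, here is a minimal sketch of the contrast I have in mind, using the standard formulations from the deference literature; the paper's own definitions may differ, and C_Alice is just my notation for Alice's credence function.)

```latex
% Reflection: conditional on Alice's credence in A being exactly x,
% adopt x as your own credence in A.
P\bigl(A \mid C_{\mathrm{Alice}}(A) = x\bigr) = x

% Trust (weaker): conditional on Alice's credence in A being at
% least t, your credence in A is at least t.
P\bigl(A \mid C_{\mathrm{Alice}}(A) \ge t\bigr) \ge t
```

If I understand the setup, it's the second, weaker principle that corresponds to preferring to act on Alice's credences in decisions, and it can hold even when the first fails.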

Kenny2

Nice explanation of the paper!

I really like the trust principle in the paper, about what we can say about the relationship between two credence functions when one person would prefer to use the other person's credences rather than their own. But I'm skeptical about the concept that initially seems to motivate it, namely the idea that some people might actually be experts. Does any of this depend on the language containing a proposition like "so-and-so is an expert", or can we do it all in a language without such propositions?

Kenny2

I think this shows clearly that dynamics don't always lead to the same outcomes as equilibrium rationality concepts. If someone is already convinced that the dynamics matter, this leads naturally to the thought that the equilibrium concepts are missing something important. But at least some discussions of rationality (including some on this site) seem like they might be committed to some sort of "high road" idea on which it really is the equilibrium concept that is core to rationality, and the dynamics are at best a suggestive motivation. (I think I see this in some of the discussions of functional decision theory as "the decision theory that a perfectly rational agent would opt to self-program", but with the idea that you don't actually need to go through any process of self-reprogramming to get there.)
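
(To make the gap vivid with a toy case, not anything from the post itself: under replicator dynamics, rock-paper-scissors has a unique Nash equilibrium at (1/3, 1/3, 1/3), but trajectories circle it rather than converging to it. A quick sketch, with the starting point and step size as my own arbitrary choices:)

```python
import numpy as np

# Rock-paper-scissors payoffs for the row player
# (rows and columns ordered Rock, Paper, Scissors).
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def replicator_step(x, dt=0.01):
    """One Euler step of dx_i/dt = x_i * ((A x)_i - x . A x)."""
    fitness = A @ x              # payoff to each pure strategy
    avg = x @ fitness            # population-average payoff
    x = x * (1.0 + dt * (fitness - avg))
    return x / x.sum()           # guard against numerical drift

nash = np.full(3, 1.0 / 3.0)     # the unique Nash equilibrium
x = np.array([0.5, 0.3, 0.2])    # start away from it
for _ in range(100_000):
    x = replicator_step(x)

# The trajectory orbits the equilibrium instead of approaching it:
# the distance from the Nash point does not shrink toward zero.
print("final mixture:     ", np.round(x, 3))
print("distance from Nash:", round(float(np.linalg.norm(x - nash)), 3))
```

So a population shaped by these dynamics never reaches the state the equilibrium concept singles out, which is the sense in which the two can come apart.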

Is there an argument to convince those people that the dynamics really are relevant to rationality itself, and not just to predicting how certain naturalistic groups of limited agents will come to behave at their various local optima?