Could you say a bit more about why you think we should quantify the accuracy of credences with a strictly proper scoring rule, without reference to optimality proofs? I was personally confused about what principled reasons we had to think strictly proper scoring rules were the only legitimate measures of accuracy, until I read Levinstein's paper offering a pragmatic vindication for such rules.
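For concreteness, here's the kind of toy check I have in mind when I think about strict propriety (my own illustration, not anything from Levinstein's paper): with the Brier score, an agent's expected score is uniquely optimised by reporting their true credence, which is roughly what strict propriety demands.

```python
import numpy as np

# Toy check that the Brier score S(q, x) = (q - x)^2 is strictly proper:
# for a binary event with true probability p, the expected score
#   E[(q - X)^2] = (q - p)^2 + p(1 - p)
# is uniquely minimised by reporting q = p.

p = 0.7                                  # the agent's true credence
qs = np.linspace(0, 1, 101)              # candidate reported credences
expected_brier = qs**2 - 2 * p * qs + p  # E[(q - X)^2] for X ~ Bernoulli(p)

print(qs[np.argmin(expected_brier)])     # ~0.7: honest reporting is the unique optimum
```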
I enjoyed this post. I think the dialogue in particular nicely highlights how underdetermined the phrase 'becoming more Bayesian' is, and that we need more research on what optimal reasoning in more computationally realistic environments would look like.
However, I think there are other (not explicitly stated) ways in which Bayesianism is helpful for actual human reasoners. I'll list two:
This was my reconstruction of Caspar's argument, which may be wrong. But I took the argument to be that we should promote consequentialism in the world as we find it now, where Omega (fingers crossed!) isn't going to tell me claims of this sort, and people do not, in general, explicitly optimise for things we greatly disvalue. In this world, if people are more consequentialist, then there is a greater potential for positive-sum trades with other agents in the multiverse. Since agents in this world have some overlap with our values, we should encourage consequentialism: consequentialist agents we can causally interact with will get more of what they want, and so we get more of what we want.
I agree with you that choosing the appropriate set of actions is a non-trivial task, and I've said nothing here about how Kantians would choose an appropriate class of actions.
I am unclear on the point of your gang examples. You point out that the ideal maxim changes depending on features of the world. The Kantian claim, as I understand it, says that we should implement a particular decision-theoretic strategy by focusing on maxims rather than acts. This is a distinctively normative claim. The fact that, as we gain more information, the maxims might bec...
On my current understanding of this post, I think I have a criticism. But I'm not sure I've properly understood the post, so tell me if the following summary is wrong. I take the post to be saying something like this:
'Suppose, in fact, I take the action A. Instead of talking about logical counterfactuals, we should talk about policy-dependent source code. If we do this, then we can see that the initial talk of logical counterfactuals encoded an error. The error lies in not understanding the following claim: when asking what would have...
Maybe the qualitative components of Bayes' theorem are, in some sense, pretty basic. If I think about how I would teach the basic qualitative concepts encoded by Bayes' theorem (which we both agree are useful), I can't think of a better way than teaching Bayes' theorem directly. That is the sense in which I think Bayes' theorem offers a helpful precisification of these more qualitative concepts: it provides a useful pedagogical structure into which we can neatly fit such principles.
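To give a toy example of the kind of pedagogical structure I mean (my own numbers, not anything from the post): the qualitative lessons, like 'don't neglect base rates' and 'weigh evidence by how much more likely it is under one hypothesis than the other', drop straight out of the odds form of Bayes' theorem.

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
prior_h = 0.01                  # base rate of the hypothesis
p_e_given_h = 0.9               # P(evidence | hypothesis)
p_e_given_not_h = 0.1           # P(evidence | not hypothesis)

prior_odds = prior_h / (1 - prior_h)
likelihood_ratio = p_e_given_h / p_e_given_not_h
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(round(posterior, 3))      # 0.083: strong evidence, yet the low base rate still dominates
```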
You claim that the increased precision affo...