Commenting on the fly:
The authors employ the "Ought-Can" principle to defend their assumption that the space of possible worlds should be treated as finite:
Ought-Can: A norm should not demand anything of an agent that is beyond her epistemic reach.
Their argument is essentially this: we (humans) can only divide the space of logical possibilities into finitely many options, so by Ought-Can no norm should demand that we work with an infinite set of possible worlds.
This is a bit misguided. They should first ask: what is the right answer, cognitive resources be damned? E.g., what is the true probability that an apple will fall on my head tomorrow? Even if that answer is impossible to compute exactly, we need to know that it exists in principle so that we can approximate it in some way. As it stands, they have approximated the true answer, but we don't know what the true answer even looks like, so it's impossible to evaluate how close their approximation is or can be.
(This seems like the sort of mistake you make if you aren't thinking in the back of your head "how would I program an AI to use this epistemology?")
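To make the point concrete, here's a toy Python sketch (my own illustration, nothing from the paper; the logistic distribution and the cutoff event are hypothetical): posit a "true" distribution over a continuum of worlds, then watch a finite partition of worlds approximate the exact answer. The error is only measurable at all because the infinite-case answer is defined.

```python
# Toy sketch: a bounded agent approximates a "true" probability, defined
# over a continuum of worlds, by carving the space into finite partitions.
import math

def true_cdf(x):
    # Hypothetical "God's-eye" distribution over worlds: a logistic CDF.
    # The event of interest is {world parameter > 1}.
    return 1.0 / (1.0 + math.exp(-x))

TRUE_PROB = 1.0 - true_cdf(1.0)  # exact answer, available only in principle

for n_cells in [2, 8, 32, 128, 512]:
    # Partition the parameter range [-10, 10] into n_cells equal "worlds"
    # and give each cell the mass the true distribution puts on it.
    lo, hi = -10.0, 10.0
    width = (hi - lo) / n_cells
    approx = 0.0
    for i in range(n_cells):
        a, b = lo + i * width, lo + (i + 1) * width
        if (a + b) / 2.0 > 1.0:  # cell counted as satisfying the event
            approx += true_cdf(b) - true_cdf(a)
    print(f"{n_cells:4d} worlds: estimate={approx:.6f}  "
          f"error={abs(approx - TRUE_PROB):.6f}")
```

The estimates converge on the exact value as the partition refines; "how good is the finite approximation?" is a well-posed question only because the exact value exists.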
EDIT: At the end of the paper the authors admit that they do need to look into the infinite case, so the problem isn't as bad as I initially thought; this paper looks more like tackling a simple case before going after the fully general proof.
We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures.
Huh? I don't have the time to look into this, but are they saying that a quadratic inaccuracy measure is superior to an entropy-based (logarithmic) one?
Yes, basically: they're saying that, given some assumptions (reasonable, at least to them) about what an inaccuracy measure should look like, the only acceptable measures are the quadratic ones.
They make some arbitrary assumptions about how to represent the space of possible worlds and degrees of belief, and it isn't clear whether their result depends on these assumptions (they acknowledge this).
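For the curious, here's a minimal Python sketch (my gloss, not the authors' formalism) of the two measures in question: the quadratic (Brier-style) penalty (b - w)^2, where w is 1 if the proposition is true and 0 otherwise, versus the logarithmic penalty. Note that both are "proper," meaning your expected inaccuracy is minimized by reporting your actual credence, so their case for the quadratic measure has to rest on the further dilemma-avoidance conditions, not on propriety alone.

```python
# Sketch: compare a quadratic (Brier-style) inaccuracy measure with a
# logarithmic one on a single proposition, and check that each is proper.
import math

def quadratic_inaccuracy(credence, truth):
    # (b - w)^2, where w is 1 if the proposition is true, else 0.
    return (credence - truth) ** 2

def log_inaccuracy(credence, truth):
    # -log of the probability assigned to the actual truth value.
    return -math.log(credence if truth else 1.0 - credence)

def expected_inaccuracy(measure, report, true_credence):
    # Expectation from the standpoint of an agent whose credence in the
    # proposition is true_credence but who reports `report`.
    return (true_credence * measure(report, 1)
            + (1.0 - true_credence) * measure(report, 0))

true_credence = 0.7
for measure in (quadratic_inaccuracy, log_inaccuracy):
    best = min((r / 100 for r in range(1, 100)),
               key=lambda r: expected_inaccuracy(measure, r, true_credence))
    print(f"{measure.__name__}: expected inaccuracy minimized at {best:.2f}")
```

Both lines print 0.70; where the measures differ is in how harshly they punish confident error (log inaccuracy blows up as the credence assigned to the truth goes to zero, while the quadratic penalty stays bounded).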
Recently, Hans Leitgeb and Richard Pettigrew have published a novel defense of Bayesianism:
An Objective Justification of Bayesianism I: Measuring Inaccuracy
An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy
Richard Pettigrew has also written an excellent introduction to probability.