Yeah, I didn't know exactly what problem statement you were using (the most common formulation of the non-anthropic problem I know is this one), so I didn't know "9" was particularly special.
Though the point at which I think randomization becomes better than honesty depends on my P(heads) and on which choice I think is honest, so which value of the randomization-reward counts as special is fuzzy.
I guess I'm not seeing any middle ground between "be honest" and "pick randomization as an action," even for naive CDT, where "be honest" gets the problem wrong - which made me worry that somewhere out there was a method which somehow comes up with 3/4.
Somewhere in Stuart Armstrong's bestiary of non-probabilistic decision procedures you can get an effective 3/4 on the Sleeping Beauty problem, but I wouldn't worry about it - that bestiary is silly anyhow :P
A technical report of the Future of Humanity Institute (authored by me), on why anthropic probability isn't enough to reach decisions in anthropic situations. You also have to choose your decision theory, and take into account your altruism towards your copies. And these components can co-vary while leaving your ultimate decision the same - typically, EDT agents using SSA will reach the same decisions as CDT agents using SIA, and altruistic causal agents may decide the same way as selfish evidential agents.
Anthropics: why probability isn't enough
This paper argues that the current treatment of anthropic and self-locating problems over-emphasises the importance of anthropic probabilities, and ignores other relevant and important factors, such as whether the various copies of the agents in question consider that they are acting in a linked fashion, and whether they are mutually altruistic towards each other. These issues, generally irrelevant for non-anthropic problems, come to the forefront in anthropic situations and are at least as important as the anthropic probabilities: indeed they can erase the difference between different theories of anthropic probability, or increase their divergence. These considerations support reinterpreting decisions, rather than probabilities, as the fundamental objects of interest in anthropic problems.
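The claimed equivalence between EDT agents using SSA and CDT agents using SIA can be sketched with a standard Sleeping Beauty bet (my own illustrative setup, not taken from the paper): heads yields one awakening, tails yields two, and at each awakening the agent may pay 1 to receive G if the coin landed tails.

```python
# Hypothetical Sleeping Beauty bet: heads -> 1 awakening, tails -> 2.
# At each awakening: pay 1, receive G if tails. Should the agent accept?

def cdt_sia_value(G):
    # SIA: P(tails | awake) = 2/3. CDT evaluates only this awakening's bet.
    return (2/3) * (G - 1) + (1/3) * (-1)

def edt_ssa_value(G):
    # SSA: P(tails) = 1/2. EDT treats the two tails-awakenings as linked,
    # so accepting now means accepting at both awakenings if tails.
    return (1/2) * 2 * (G - 1) + (1/2) * (-1)

# Despite assigning different probabilities, both combinations accept
# the bet under exactly the same condition: G > 1.5.
for G in (1.4, 1.5, 1.6):
    assert (cdt_sia_value(G) > 0) == (edt_ssa_value(G) > 0)
```

The probabilities differ (2/3 versus 1/2), but the linkage between copies compensates exactly, so the decision threshold is identical - one concrete instance of the paper's point that anthropic probabilities alone don't fix the decision.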