Series: How to Purchase AI Risk Reduction

I recently explained that one major project undergoing cost-benefit analysis at the Singularity Institute is that of a scholarly AI risk wiki. The proposal is exciting to many, but as Kaj Sotala points out:

    Seems quite reasonable. But I don't have a clear picture of your general strategy. Do you have a path (read: a likely conjunction of paths) to getting a world-class mathematician to take an interest in forming a new decision theory? Talking about the details of CEV seems premature to me if we don't know that certain kinds of extrapolation are theoretically possible.
Indeed. So here is another thing that donations to SI could purchase: good research papers by skilled academics.
Our recent grant of $20,000 to Rachael Briggs (for an introductory paper on TDT) provides an example of how this works:
For example, SI could award grants for the following papers:
(These are only examples. I don't necessarily think these particular papers would be good investments.)