Unfortunately, yes.
If someone uses different rules than you to decide what to believe, then things that you can prove using your rules won't necessarily be provable using their rules.
Yes, but the idea is that a proof within one axiomatic system does not constitute a proof within another.
Not particularly, no. In fact, there probably is no such method - either the parties must agree to disagree (which they could honestly do if they're not all Bayesians), or they must persuade each other using rhetoric as opposed to honest, rational inquiry. I find this unfortunate.
Fixed the formatting.
Regarding instrumental rationality: I've been wondering for a while now whether "world domination" (or "world optimization", as HJPEV prefers) is feasible. I haven't entirely figured out my values yet, but whatever they turn out to be, WD/WO sure would be handy for achieving them. But even if WD/WO is a ridiculously far-fetched dream, it would still be a very good idea to know one's approximate chances of success with the various possible paths to achieving one's values. I have therefore come up with the "feasibility problem." Basically, a solution to the problem consists of an estimate of how much one can actually hope to influence the world, and of the extent to which one can actually fulfill one's values. I think it would be very wise to solve the feasibility problem before attempting to take over the world, or become the President, or lead a social revolution, or improve the rationality of the general populace, etc.
Solving the FP would seem to require a deep understanding of how the world operates (anthropologically speaking, if you get my drift; I'm talking about the human world, not physics and chemistry).
I've even constructed a GPOATCBUBAAAA (general plan of action that can be used by any and all agents): first, define your utility function and learn how the world works (easier said than done). Once you've done that, apply your knowledge to solve the FP, construct a plan to fulfill your utility function, and then put that plan into action.
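Purely as an illustrative sketch (the original plan doesn't specify any code), the GPOATCBUBAAAA can be caricatured as a simple agent loop in Python; every function name below is a hypothetical placeholder standing in for an enormous amount of real work:

```python
# Hypothetical sketch of the "general plan of action" as an agent loop.
# Each step is a toy stub; in reality every one of them is the hard part.

def define_utility_function():
    """Step 1a: figure out what you actually value."""
    return lambda world_state: world_state.get("values_fulfilled", 0.0)

def learn_world_model():
    """Step 1b: learn how the (human) world works. Easier said than done."""
    return {"influence_available": 0.01}  # toy stand-in for a world model

def solve_feasibility_problem(world_model):
    """Step 2: estimate how much one can actually hope to influence the world."""
    return world_model["influence_available"]

def construct_plan(feasibility):
    """Step 3: plan only for goals the feasibility estimate supports."""
    if feasibility < 0.5:
        return ["pursue modest, high-leverage goals"]
    return ["attempt world optimization"]

def act(plan):
    """Step 4: put the plan into action."""
    for step in plan:
        print("Doing:", step)

utility = define_utility_function()
world_model = learn_world_model()
feasibility = solve_feasibility_problem(world_model)
act(construct_plan(feasibility))
print("Utility so far:", utility(world_model))
```

The point of the ordering is just that the feasibility estimate sits between learning how the world works and committing to any particular plan.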
This is probably a bit longer than 100 words, but I'm posting it here and not in the open thread because I have no idea if it's of any value whatsoever.
What if the disagreeing parties have radical epistemological differences? Double crux seems like a good strategy for resolving disagreements between parties that have an epistemological system in common (and access to the same relevant data), because getting to the core of the matter should expose that one or both of them is making a mistake. However, between two or more parties that use entirely different epistemological systems - e.g. rationalism and empiricism, or skepticism and "faith" - double crux should, if used correctly, eventually lead all disagreements back to epistemology, at which point... what, exactly? Use double crux again? What if the parties don't have a meta-epistemological system in common, or indeed, any nth-order epistemological system in common? Double crux sounds really useful, and this is a great post, but a system for resolving epistemological disputes would be extremely helpful as well (especially for those of us who regularly converse with "faith"-ists about philosophy).
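To make the regress concrete, here is a toy Python sketch, entirely my own construction and not part of the actual double crux technique: the parties keep escalating to higher-order epistemology until they find a shared system, and if there is no shared nth-order system at any level, the process never grounds out:

```python
# Toy illustration of the regress described above. The data structure and
# function names are hypothetical, not taken from any double crux write-up.

def find_shared_system(party_a, party_b, level):
    """Return a shared epistemological system at the given order, if any."""
    shared = party_a.get(level, set()) & party_b.get(level, set())
    return next(iter(shared), None)

def double_crux(party_a, party_b, level=0, max_level=3):
    """Recurse upward until the parties share some nth-order system."""
    shared = find_shared_system(party_a, party_b, level)
    if shared is not None:
        return f"resolvable at level {level} using {shared}"
    if level == max_level:
        return "stuck: no shared system at any order we checked"
    # The crux turns out to be the parties' epistemologies themselves,
    # so apply the same move one level up.
    return double_crux(party_a, party_b, level + 1, max_level)

# A rationalist and a "faith"-ist with no overlap at any order:
rationalist = {0: {"empiricism", "rationalism"}, 1: {"Bayesianism"}}
faithist = {0: {"faith"}, 1: {"revelation"}}
print(double_crux(rationalist, faithist))  # -> stuck: no shared system ...
```

In this toy picture, "agreeing to disagree" is just what you are left with when the recursion bottoms out without ever finding common ground.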
This is an interesting idea, although I'm not sure what you mean by
"It can work without people understanding why it works."
Shouldn't the people learning it understand it? It doesn't really seem much like learning otherwise.
Moved it to the top.
Can anybody point me to some specific examples of this type of evolution? I'm a complete layman when it comes to biology, and this fascinates me. I'm having a bit of a hard time imagining such a process, though.