The immediate effects seem pretty darn beneficial and hard to beat. Ending the First World War with a Central Powers victory fundamentally changes the balance of power: Britain becomes a second-rate force decades earlier, the Soviet rise to power is prevented, and America remains much more isolationist... I have a very hard time seeing anything like a part two to that struggle.
A Stalin-like figure might decide to try to invade Central Europe, but this seems unlikely. An American-Japanese war is still possible, but it seems unlikely to involve a European theatre of its own.
Did I mention we avoid the fucking Holocaust and keep large chunks of Eastern Europe safe from the Bolsheviks, at least during their most damaging and bloodthirsty years? It just seems so overwhelmingly likely that this is a better world that I'm mystified why anyone would think it very likely to be worse.
Reducing the relevance of a global Communist-vs.-Capitalist struggle narrative also seems to greatly reduce the possibility of global annihilation. Maybe nukes get used in one or two ill-thought-out wars, but then again nukes were used in an ill-thought-out war in our own timeline and we didn't turn out so bad.
Ah! You meant a German victory! (or, at least, whatever the German masses would accept as victory or an honorable draw). I hadn't even thought about that; my first reaction was "So the French and the British bleed Germany dry on their own and impose an even harsher treaty", and I just went on from there. I'll have to consider the above in detail, thanks a lot. This indeed sounds agreeable.
I'm skeptical about trying to build FAI, but not about trying to influence the Singularity in a positive direction. Some people may be skeptical even of the latter because they don't think an intelligence explosion is very likely. I suggest that even if an intelligence explosion turns out to be impossible, we can still reach a positive Singularity by building what I'll call "modest superintelligences": superintelligent entities capable of taking over the universe and preventing existential risks and Malthusian outcomes, whose construction does not require fast recursive self-improvement or other questionable assumptions about the nature of intelligence. This helps to establish a lower bound on the benefits of an organization that aims to strategically influence the outcome of the Singularity.
(To recall what the actual von Neumann, whom we might call MSI-0, accomplished, open his Wikipedia page and scroll through the "known for" sidebar.)
Building an MSI-1 seems to require a total cost on the order of $100 billion (assuming $10 million per clone), which is comparable to the Apollo project and about 0.25% of the annual Gross World Product. (For further comparison, note that Apple has a market capitalization of $561 billion and an annual profit of $25 billion.) In exchange for that cost, any nation that undertakes the project has a reasonable chance of obtaining an insurmountable lead in whatever technologies end up driving the Singularity, and with that a large measure of control over its outcome. If no better strategic options come along, lobbying a government to build an MSI-1 and/or influencing its design and aims seems to be the least that a Singularitarian organization could do.
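For concreteness, here is a quick back-of-envelope check of those figures, sketched in Python. The clone count is not stated above; it is simply what the quoted total implies at the quoted per-clone cost, and the Apple comparisons reuse the numbers quoted in the previous paragraph.

```python
# Back-of-envelope arithmetic on the figures quoted above (my own check,
# not part of the original estimate).
cost_per_clone = 10_000_000          # $10 million per clone (as stated)
total_cost = 100_000_000_000         # $100 billion total (as stated)

# The total and per-clone figures together imply the project size.
implied_clones = total_cost // cost_per_clone
print(f"Implied number of clones: {implied_clones:,}")        # 10,000

# Scale against the Apple comparison made above.
apple_market_cap = 561_000_000_000   # $561 billion (as stated)
apple_annual_profit = 25_000_000_000 # $25 billion (as stated)
print(f"Total cost as a share of Apple's market cap: {total_cost / apple_market_cap:.0%}")   # ~18%
print(f"Total cost in years of Apple's profit: {total_cost / apple_annual_profit:.1f}")      # ~4.0
```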