The upload thread discusses the difficulties of making an upload of a single specific adult human, one that exactly reproduces the memories and skills acquired by the biological original. (Admittedly, "an upload of John von Neumann", taken literally, is exactly this.) A neuromorphic AI that sidesteps the problem of engineering a general intelligence by copying the general structure of the human brain and running it in emulation doesn't need to be based on any specific person, though, just on a very good general understanding of the human brain. And it only needs to be built to the level of a baby capable of learning in place, rather than somehow having memories transferred into it from a biological human. Since the biggest showstopper for practical brain preservation seems to be preserving, retrieving and interpreting stored memories, this approach looks considerably more viable. You could still have your von Neumann army; you'd just have to raise the first one yourself and then start making copies of him.
Nick Szabo on acting on extremely long odds with claimed high payoffs:
Beware of what I call Pascal's scams: movements or belief systems that ask you to hope for or worry about very improbable outcomes that could have very large positive or negative consequences. (The name comes, of course, from the infinite-reward Wager proposed by Pascal; these days the large-but-finite versions are far more pernicious.) Naive expected value reasoning implies that they are worth the effort: if the odds are 1 in 1,000 that I could win $1 billion, and I am risk and time neutral, then I should expend up to nearly $1 million worth of effort to gain this boon. The problems with these beliefs tend to be at least threefold, all stemming from the general uncertainty, i.e. the poor information or lack of information, from which we abstracted the low probability estimate in the first place: in the messy real world, the low probability estimate is almost always due to low or poor evidence rather than being a lottery with well-defined odds.
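The naive expected-value arithmetic Szabo criticizes can be made explicit in a few lines. This is only a sketch of the reasoning pattern he names, using the figures from the quote (1-in-1,000 odds, $1 billion payoff); the function name is mine, not his:

```python
def naive_expected_value(probability, payoff):
    """Expected value for a risk- and time-neutral agent: p * payoff.

    This is the step Szabo flags as suspect: it treats a shaky,
    evidence-poor probability estimate as if it were a well-defined
    lottery with known odds.
    """
    return probability * payoff

# With 1-in-1,000 odds of winning $1 billion, naive EV reasoning says
# up to nearly $1 million of effort is "worth it".
max_worthwhile_effort = naive_expected_value(1 / 1000, 1_000_000_000)
print(max_worthwhile_effort)
```

The point of the quote is that the inputs, not the multiplication, are the problem: when the probability figure is abstracted from poor or absent evidence, the computed expected value inherits that unreliability.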
Nick clarifies in the comments that he is indeed talking about singularitarians, including his GMU colleague Robin Hanson. This post appears to revisit a comment on an earlier post:
In other words, just because one comes up with quasi-plausible catastrophic scenarios does not put the burden of proof on the skeptics to debunk them or else cough up substantial funds to supposedly combat these alleged threats.