This is a terrible misrepresentation. SI does not argue for donations on these grounds; Eliezer and other SI staff have explicitly rejected such Pascalian reasoning and have instead argued that the risks they wish to avert are quite probable.
could win $1 billion, and I am risk and time neutral
Who has constant marginal utility of money up to $1,000,000,000?
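The objection can be made concrete with a toy calculation (my own illustration; the log-utility assumption and the wealth figure are hypothetical, not anything from the thread): under diminishing marginal utility, a 1-in-1,000 shot at $1 billion is worth far less than its risk-neutral expected value of $1 million.

```python
import math

def certainty_equivalent_log(p, prize, wealth):
    """Certainty equivalent of a p-chance at `prize` for an agent with
    log utility and current wealth `wealth` (illustrative assumption)."""
    expected_utility = p * math.log(wealth + prize) + (1 - p) * math.log(wealth)
    return math.exp(expected_utility) - wealth

# A risk- and time-neutral agent values a 1-in-1,000 shot at $1B at $1M.
risk_neutral_value = 1e-3 * 1e9

# With log utility and (hypothetically) $100k of wealth, the same gamble
# is worth only on the order of $1,000.
ce = certainty_equivalent_log(1e-3, 1e9, 1e5)
```

The point is not that log utility is the right model, only that "risk and time neutral" is doing a lot of work in the quoted argument.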
The biggest problem with these schemes is that the closer one gets to infinitesimal probability, and thus usually to infinitesimal quality or quantity of evidence, the closer to infinity grows the number of extreme-consequence schemes one can dream up.
Consequences can't be inflated to make up for arbitrarily low probabilities. Consequences are connected: if averting human extinction by proliferation of morally valueless machinery is super valuable because of future generations, then the gains of averting human extinction by asteroids, or engineered diseases, will be on the same scale.
It cost roughly $100 million to launch a big search for asteroids that has now located 90%+ of large (dinosaur-killer size) asteroids, and such big impacts happen every hundred million years or so, accompanied by mass extinctions, particularly of large animals. If working on AI, before AI is clearly near and better understood, had a lower probability of averting x-risk per unit cost than asteroid defense, or than adding to the multibillion-dollar annual anti-nuclear-proliferation or biosecurity budgets, or some other intervention, then it would lose.
"Some nonzero chance" isn't enough, it has to be a "chance per cost better than the alternatives."
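The "chance per cost" criterion amounts to ranking interventions by expected risk reduction per dollar. A minimal sketch (all numbers below are hypothetical placeholders for illustration, not real estimates from the thread):

```python
# Hypothetical figures only: cost in dollars, risk_reduction as the
# probability of extinction averted by the intervention.
interventions = {
    "asteroid search": {"cost": 1e8, "risk_reduction": 1e-8},
    "biosecurity":     {"cost": 1e9, "risk_reduction": 5e-8},
    "early AI safety": {"cost": 1e8, "risk_reduction": 2e-9},
}

def reduction_per_dollar(entry):
    """Expected extinction risk averted per dollar spent."""
    return entry["risk_reduction"] / entry["cost"]

# The argument's criterion: fund whichever intervention buys the most
# risk reduction per unit cost, not merely any with nonzero chance.
best = max(interventions, key=lambda k: reduction_per_dollar(interventions[k]))
```

Under these made-up numbers the asteroid search wins; the substantive dispute is entirely about what the real inputs are.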
Seems like there is more going on than just "Do transhumanists endorse Pascalian bargains?" Because of confidence levels inside and outside an argument, the fact that someone (e.g. SI) makes an argument that a particular risk has a non-negligible probability does not mean that someone examining this claim should assign a non-negligible probability. It's very possible for someone thinking about (e.g.) AI risk to assign low probabilities and thus find themselves in a Pascalian situation even if SI argues that the probability of AI risk is high.
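The inside-versus-outside distinction can be sketched numerically (my framing and my made-up numbers, not SI's or the commenter's): even if an argument assigns a high probability, an examiner who gives the argument itself only modest credence ends up with a much lower all-things-considered probability.

```python
# Hypothetical numbers for illustration only.
p_inside = 0.5           # probability the argument assigns, taken at face value
p_argument_sound = 0.02  # examiner's credence that the argument is sound
p_baseline = 1e-6        # examiner's probability absent the argument

# All-things-considered probability: mix the inside view with the
# baseline, weighted by credence in the argument itself.
p_outside = p_argument_sound * p_inside + (1 - p_argument_sound) * p_baseline
```

Here the examiner lands near 1%, a Pascalian regime, even though the argument claimed 50%.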
It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones.
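A standard textbook instance of the specialized-versus-general point (my example, not the commenter's): counting sort exploits the assumption that keys are small integers and runs in O(n + k), beating the O(n log n) lower bound that binds any general comparison sort.

```python
def counting_sort(xs, max_key):
    """Specialized sort: assumes every element is an int in [0, max_key]."""
    counts = [0] * (max_key + 1)
    for x in xs:
        counts[x] += 1
    out = []
    for key, count in enumerate(counts):
        out.extend([key] * count)  # emit each key as many times as it appeared
    return out

data = [3, 1, 4, 1, 5, 9, 2, 6]
result = counting_sort(data, 9)  # agrees with the general-purpose sorted()
```

Whether this economic/algorithmic pattern carries over to general intelligence is, of course, exactly what the rest of the thread disputes.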
It would be better to present, as your main reason, "the kinds of general algorithms that humans are likely to develop and implement, even absent impediments caused by AI-existential risk activism, will almost certainly be far inferior to specialized ones". That there exist general-purpose algorithms which subsume the competitive abilities of all existing human-engineered special-purpose algorithms, given sufficient advantages of scale in number of problem domains, is trivial by the existence proof constituted by the human economy.
Put another way: There is some currently-unsubstitutable aspect of the economy which is contained strictly within human cognition and communication. Consider the case where the intellectual difficulties involved in understanding the essence of this unsubstitutable function were overcome, and it were implemented in silico, with an initial level of self-engineering insight already equal to that which was used to create it, and with starting capital and education sufficient to overcome transient ...
I don't think Nick made a good case as to why these movements / belief systems deserve to be called "scams", or, more importantly, why they deserve to be ignored (in favor of "spending more time learning about what has actually happened in the real world"). The fact that certain hopes and threats share certain properties (which Nick numbered 1-3 in his post) is unfortunate, but I didn't find any convincing arguments in his post showing why these hopes and threats should therefore be ignored.
(My overall position, which I'll repeat in case anyone is c...
Nick Szabo on acting on extremely long odds with claimed high payoffs:
Beware of what I call Pascal's scams: movements or belief systems that ask you to hope for or worry about very improbable outcomes that could have very large positive or negative consequences. (The name comes of course from the infinite-reward Wager proposed by Pascal; these days the large-but-finite versions are far more pernicious.) Naive expected value reasoning implies that they are worth the effort: if the odds are 1 in 1,000 that I could win $1 billion, and I am risk and time neutral, then I should expend up to nearly $1 million worth of effort to gain this boon. The problems with these beliefs tend to be at least threefold, all stemming from the general uncertainty, i.e. the poor information or lack of information, from which we abstracted the low probability estimate in the first place: in the messy real world, the low probability estimate is almost always due to low or poor evidence rather than to a lottery with well-defined odds.
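The naive expected-value arithmetic in the quote checks out in a few lines (illustrative only; the quote's own point is that the probability input is the untrustworthy part):

```python
# Naive expected-value calculation from the quoted example.
p_win = 1 / 1000          # claimed odds of winning
prize = 1_000_000_000     # $1B payoff
expected_value = p_win * prize
# A risk- and time-neutral agent would expend up to ~$1,000,000 of effort.
```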
Nick clarifies in the comments that he is indeed talking about singularitarians, including his GMU colleague Robin Hanson. This post appears to revisit a comment on an earlier post:
In other words, coming up with quasi-plausible catastrophic scenarios does not put the burden of proof on the skeptics to debunk them or else cough up substantial funds to supposedly combat these alleged threats.