Nick Szabo on acting on extremely long odds with claimed high payoffs:
Beware of what I call Pascal's scams: movements or belief systems that ask you to hope for or worry about very improbable outcomes that could have very large positive or negative consequences. (The name comes, of course, from the infinite-reward Wager proposed by Pascal; these days the large-but-finite versions are far more pernicious.) Naive expected value reasoning implies that they are worth the effort: if the odds are 1 in 1,000 that I could win $1 billion, and I am risk and time neutral, then I should expend up to nearly $1 million worth of effort to gain this boon. The problems with these beliefs tend to be at least threefold, all stemming from the general uncertainty, i.e. the poor information or lack of information, from which we abstracted the low probability estimate in the first place: in the messy real world, a low probability estimate is almost always due to scant or poor evidence rather than to a lottery with well-defined odds.
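To make the quoted arithmetic explicit, here is a minimal sketch of the naive expected-value computation; the probability and payoff are the quote's hypotheticals, not estimates of anything:

    # Naive expected-value reasoning from the quoted example: a 1-in-1,000
    # chance at $1 billion "justifies" up to ~$1 million of effort, but only
    # if the odds are as well-defined as a lottery's, which is the point at issue.
    p_win = 1 / 1000           # claimed probability of the payoff
    payoff = 1_000_000_000     # claimed payoff, in dollars
    break_even_effort = p_win * payoff
    print(break_even_effort)   # 1000000.0, i.e. up to ~$1M of effort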
Nick clarifies in the comments that he is indeed talking about singularitarians, including his GMU colleague Robin Hanson. This post appears to revisit a comment on an earlier post:
The biggest problem with these schemes is that the closer one gets to infinitesimal probability, and thus usually to infinitesimal quality or quantity of evidence, the closer to infinity grows the number of possible extreme-consequence schemes one can dream up. In other words, just because one comes up with quasi-plausible catastrophic scenarios does not put the burden of proof on the skeptics to debunk them or else cough up substantial funds to supposedly combat these alleged threats.

One immediate objection to the setup: who has constant marginal utility of money up to $1,000,000,000? Someone who's already got many billions? (But then again, for such a person a 1/1,000 chance of winning one more billion wouldn't even be worth the time spent participating in such a lottery, I suppose.)

A deeper point is that consequences can't be inflated to make up for arbitrarily low probabilities, because consequences are connected: if averting human extinction by proliferation of morally valueless machinery is super valuable because of future generations, then the gains from averting human extinction by asteroids, or by engineered diseases, are on the same scale. It cost roughly $100 million to launch a big search for asteroids that has now located 90%+ of large (dinosaur-killer-size) ones, and such big impacts happen every hundred million years or so, accompanied by mass extinctions, particularly of large animals. If working on AI, before AI is clearly near and better understood, averted less existential risk per unit cost than asteroid defense, or than adding to the multibillion-dollar annual nuclear-nonproliferation or biosecurity budgets, or than some other intervention, then it would lose. "Some nonzero chance" isn't enough; it has to be a chance per cost better than the alternatives.
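To make 'chance per cost better than the alternatives' concrete, here is a minimal sketch of the comparison; the asteroid cost figure comes from the paragraph above, but the risk-reduction numbers and the AI row are illustrative placeholders, not estimates:

    # Rank interventions by expected extinction-risk reduction per dollar.
    # Probabilities below are placeholders for illustration only.
    interventions = {
        # name: (cost in dollars, absolute reduction in extinction probability)
        "asteroid survey": (100e6, 1e-4),          # ~$100M program; 1e-4 is assumed
        "hypothetical AI program": (100e6, 1e-5),  # both numbers assumed
    }

    def risk_reduction_per_dollar(cost, prob_averted):
        # Expected extinction-risk reduction bought per dollar spent.
        return prob_averted / cost

    best = max(interventions, key=lambda k: risk_reduction_per_dollar(*interventions[k]))
    print(best)  # "asteroid survey" under these made-up numbers

On this framing, a nonzero probability matters only insofar as its ratio of risk reduction to cost beats the best alternative's.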