Nick Szabo on acting on extremely long odds with claimed high payoffs:
Beware of what I call Pascal's scams: movements or belief systems that ask you to hope for or worry about very improbable outcomes that could have very large positive or negative consequences. (The name comes, of course, from the infinite-reward Wager proposed by Pascal; these days the large-but-finite versions are far more pernicious.) Naive expected value reasoning implies that they are worth the effort: if the odds are 1 in 1,000 that I could win $1 billion, and I am risk and time neutral, then I should expend up to nearly $1 million worth of effort to gain this boon. The problems with these beliefs tend to be at least threefold, all stemming from the general uncertainty, i.e. the poor information or lack of information, from which we abstracted the low probability estimate in the first place: because in the messy real world the low probability estimate is almost always due to low or poor evidence rather than being a lottery with well-defined odds.
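The arithmetic Szabo is criticizing can be made explicit. A minimal sketch (the function name and numbers are just the quoted example, not anything from Szabo's post):

```python
def naive_expected_value(probability: float, payoff: float) -> float:
    """Expected value under risk- and time-neutral assumptions."""
    return probability * payoff

# 1-in-1,000 odds of winning $1 billion:
ev = naive_expected_value(1 / 1000, 1_000_000_000)
print(ev)  # 1000000.0 -- "worth" up to nearly $1 million of effort
```

Szabo's point is that the input `probability` here is rarely a well-defined lottery odds; it is usually abstracted from poor or absent evidence, so the computed expected value inherits that unreliability.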
Nick clarifies in the comments that he is indeed talking about singularitarians, including his GMU colleague Robin Hanson. This post appears to revisit a comment on an earlier post:
In other words, just because one comes up with quasi-plausible catastrophic scenarios does not put the burden of proof on the skeptics to debunk them or else cough up substantial funds to supposedly combat these alleged threats.
It would be better to present, as your main reason, "the kinds of general algorithms that humans are likely to develop and implement, even absent impediments caused by AI-existential risk activism, will almost certainly be far inferior to specialized ones". That general-purpose algorithms exist which, given sufficient advantages of scale across problem domains, subsume the competitive abilities of all existing human-engineered special-purpose algorithms is trivial: the human economy, directed by general-purpose human cognition, constitutes the existence proof.
Put another way: There is some currently-unsubstitutable aspect of the economy which is contained strictly within human cognition and communication. Consider the case where the intellectual difficulties involved in understanding the essence of this unsubstitutable function were overcome, and it were implemented in silico, with an initial level of self-engineering insight already equal to that which was used to create it, and with starting capital and education sufficient to overcome transient learning-curve effects on its initial success. There would then be some fraction of the economy directed by the newly engineered process. Would this fraction of the economy inevitably be at a net competitive advantage, or disadvantage, relative to the fraction of the economy which was directed by humans?
If that fraction of the economy would have an advantage, then this would be an example of a general algorithm ultimately superior to all contemporarily-available specialized algorithms. In that case, what you claim to be the core of your argument would be defeated; the strength of your argument would instead have to come from the reasons why it is improbable that anyone has a relevant chance of ever achieving this kind of software substitute for human strategy and insight (that is, before everyone else is adequately prepared for it to prevent catastrophe), even to the point that supposing otherwise deserves to be tarred with the label "scam". And if the software-directed economy would have a disadvantage even at steady state, then this would be a peculiar fact about software and computing machinery relative to neural states and brains, and it could not be assumed without argument. Digital software and computing machinery both have properties that have made them, in most respects, far more amenable than neural states and brains to large returns to scale from purposeful re-engineering for higher performance, and this is likely to continue to be true into the future.
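The force of the "net competitive advantage" question comes from compounding: even a modest, sustained growth-rate edge for one fraction of the economy drives its share toward dominance. A hedged toy model (all numbers hypothetical; nothing here is from the original argument, which makes no quantitative claims):

```python
def share_after(initial_share: float, adv_rate: float,
                base_rate: float, years: int) -> float:
    """Share of total output held by the advantaged fraction after compounding.

    Assumes each fraction simply reinvests at its own constant growth rate,
    with no interaction between the two fractions -- a deliberate simplification.
    """
    advantaged = initial_share * (1 + adv_rate) ** years
    rest = (1 - initial_share) * (1 + base_rate) ** years
    return advantaged / (advantaged + rest)

# A software-directed fraction starting at 1% of the economy, growing 10%/yr
# against 5%/yr for the rest, passes 50% of output within a century:
print(share_after(0.01, 0.10, 0.05, 100))
```

The point of the sketch is only that "advantage or disadvantage" is not a marginal question: whichever sign the per-period edge has, compounding amplifies it, which is why the steady-state comparison between the two fractions carries the weight of the argument.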