The Dunning-Kruger effect is likely the product of some general deficiency in the meta-reasoning faculty, leading both to failure of the reasoning itself and failure of the evaluation of that reasoning;
That seems unlikely. Leading both?
extremely relevant to people who proclaim themselves to be more rational, more moral, and so on than anyone else, but who do not seem to achieve above-mediocre performance at fairly trivial yet quantifiable tasks.
Mediocrity is sufficient to push them entirely out of the DK gap; your thinking that DK applies is just another example of what I mean about these being fragile, easily over-interpreted results.
(Besides blatant misapplication, please keep in mind that even if DK had been verified by meta-analysis of dozens of laboratory studies, which it has not, that still only gives a roughly 75% chance that the effect applies outside the lab.)
The first people to explain the universe (and collect some contributions for doing so) produced something of negative value; nearly all medicine until the last couple hundred years was not only ineffective but outright harmful; and so on.
Without specifics, one cannot argue against that.
If you look at very narrow definitions, of course, the first to tackle the creation of the nuclear bomb did succeed - but the first to tackle the general problem of weapons of mass destruction were various shamans casting curses.
So you're just engaged in reference class tennis. ('No, you're wrong because the right reference class is magicians!')
What is the reasonable probability you think I should assign to the proposition, advanced by some bunch of guys (with at most some accomplishments in the highly non-gradable field of philosophy) led by a person with no formal education, no prior job experience, and no quantifiable accomplishments, that they should be given money to hire more people to develop their ideas on how to save the world from a danger they are most adept at seeing? The prior here is so laughably low that you could hardly find a study so flawed it wouldn't be a vastly better explanation for the SI behavior...
Nick Szabo on acting on extremely long odds with claimed high payoffs:
Beware of what I call Pascal's scams: movements or belief systems that ask you to hope for or worry about very improbable outcomes that could have very large positive or negative consequences. (The name comes of course from the infinite-reward Wager proposed by Pascal: these days the large-but-finite versions are far more pernicious). Naive expected value reasoning implies that they are worth the effort: if the odds are 1 in 1,000 that I could win $1 billion, and I am risk and time neutral, then I should expend up to nearly $1 million dollars worth of effort to gain this boon. The problems with these beliefs tend to be at least threefold, all stemming from the general uncertainty, i.e. the poor information or lack of information, from which we abstracted the low probability estimate in the first place: because in the messy real world the low probability estimate is almost always due to low or poor evidence rather than being a lottery with well-defined odds.
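Szabo's naive expected-value arithmetic can be checked with a quick sketch; the probability and payoff below are just the hypothetical numbers from his example, not figures from any real gamble:

```python
# Naive risk- and time-neutral expected value of a long-odds gamble,
# using the hypothetical numbers from Szabo's example.
p_win = 1 / 1_000          # claimed odds: 1 in 1,000
payoff = 1_000_000_000     # claimed payoff: $1 billion

expected_value = p_win * payoff
print(expected_value)      # 1000000.0 -> "worth" up to ~$1 million of effort
```

The point of the passage is that this calculation is only as good as the probability fed into it: when the 1-in-1,000 figure is abstracted from poor or absent evidence rather than well-defined lottery odds, the tidy $1 million answer is spurious precision.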
Nick clarifies in the comments that he is indeed talking about singularitarians, including his GMU colleague Robin Hanson. This post appears to revisit a comment on an earlier post:
In other words, just because one comes up with quasi-plausible catastrophic scenarios does not put the burden of proof on the skeptics to debunk them or else cough up substantial funds to supposedly combat these alleged threats.