Artificial Intelligence dates back to 1960. Fifty years later it has failed so humiliatingly that it was not enough merely to move the goal posts; the old, heavy wooden goal posts have been burned and replaced with lightweight, portable aluminium goal posts, suitable for celebrating such achievements as occur from time to time.
Mainstream researchers have taken this history on board and now sit at their keyboards hand-crafting individual, focused solutions to each sub-challenge. Driving a car uses drive-a-car vision. Picking a nut and bolt from a component bin uses nut-and-bolt vision. There is no generic see-vision. This kind of work cannot go FOOM, for deep structural reasons. All the scary AI knowledge, the kind the pioneers of the 1960s dreamed of, stays in the brains of the human researchers. The humans write the code. Though they use meta-programming, it is always "well-founded" in the sense that level n writes level n-1, all the way down to level 0. There is no level-n code rewriting level n. That is why it cannot go FOOM.
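A minimal sketch of what "well-founded" meta-programming looks like in practice (all names here are invented for illustration): a level-1 generator emits level-0 code, but nothing in the tower ever takes the generator's own source as input, so nothing can rewrite its own level.

```python
def make_adder_source(n):
    """Level 1: emit the source text of a level-0 function.
    The generator's own source is never an input to this process,
    so the tower is well-founded: 1 writes 0, and nothing writes 1."""
    return f"def add_{n}(x):\n    return x + {n}\n"

# Level 0: the generated code runs, but holds no handle on level 1.
namespace = {}
exec(make_adder_source(3), namespace)
print(namespace["add_3"](10))  # 13
```

The absent, scary case would be a function that reads and rewrites `make_adder_source` itself; that is the level-n-rewrites-level-n step that current practice never takes.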
Importantly, this restraint is enforced by a different kind of self-interest than avoiding existential risk. The researchers have no idea how to write code in which level n rewrites level n. Well, maybe they have the old ideas that never came close to working, but they know that if they venture into that toxic quagmire they will have nothing to show before their grant runs out, funders will think they wasted the money on quixotic work, and their careers will be over.
Obviously past failure can lead to future success. Even a hundred and fifty years of failure can be trumped by eventual success. (Think of steam-car work, which finally succeeded with the Stanley Steamer, only to be elbowed aside by internal combustion.) So it is fair enough for the SI to say that past failure does not in itself rule out an AI-FOOM. But you cannot just ditch the history as though it never happened. We have learned a lot, most of it about how badly humans suck at programming computers. Current ideas of AI-risk are too thin to be taken seriously because there is no engagement with that history: researchers are working within a constraining paradigm because the history has dumped them in it, but the SI isn't worrying about how secure those constraints are; it is oblivious to them.
I was wondering: what fraction of people here agree with Holden's advice regarding donations, and with his arguments? What fraction assumes there is a good chance he is essentially correct? What fraction finds it necessary to determine whether Holden is essentially correct in his assessment before working on counter-argumentation, acknowledging that such an investigation could result in the dissolution or suspension of SI?
It would seem to me, from the response, that the chosen course of action is to try to improve the presentation of the argument, rather than to try to verify the truth of its assertions (with the non-negligible likelihood that some of them would be found false instead). This strikes me as a very odd stance.
Ultimately: why does SI seem certain that it has badly presented some valid reasoning, rather than that it has presented some invalid reasoning?
edit: I am interested in knowing why people agree or disagree with Holden, and what likelihood they give to his being essentially correct, rather than in a bare number or ratio (which would be subject to selection bias).