My experience with applied machine learning is strictly undergraduate-level, modulo a little tinkering and a little industry experience, so these impressions might be quite unlike those of an actual specialist. But my sense is that while the field comes up with a lot of interesting stuff that might potentially be useful in building a hypothetical AGI, it ultimately isn't that interested in generalizing beyond domain-specific approaches, and that limits its reach considerably.
Machine-learning algorithms are treated as -- not exactly a black box, but pretty well distinguished from the task-level inputs and outputs. For example, you might have a pretty clever neural-network variation that no one's ever used before, but most of the actual work in the associated project is probably going to go into highly specialized preprocessing to render down inputs into an easily digestible form. And that's going to do you exactly no good at all if you want to use the same technique on a different class of inputs.
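A toy sketch of the pattern described above: the learner itself is a generic black box, while the clever, project-specific work lives in preprocessing that won't transfer to another input domain. All names here are hypothetical illustrations, not from any real project.

```python
def train_linear_model(features, labels, lr=0.1, epochs=100):
    """Generic learner: a perceptron-style update rule.
    It neither knows nor cares where the feature vectors came from."""
    n = len(features[0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in zip(features, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            for i in range(n):
                w[i] += lr * (y - pred) * x[i]
    return w

def preprocess_email(text):
    """Domain-specific: a crude bag-of-keywords for spam filtering.
    Exactly no good at all for, say, audio or images."""
    keywords = ["free", "winner", "meeting", "report"]
    words = text.lower().split()
    return [float(words.count(k)) for k in keywords]

# In a real project, most of the effort goes into preprocess_*,
# not into train_linear_model.
emails = ["free winner free prize", "quarterly report meeting notes"]
labels = [1, 0]  # 1 = spam
X = [preprocess_email(e) for e in emails]
w = train_linear_model(X, labels)
```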
(This can be a little irritating for non-AI people too, by the way. An old coworker of mine has a lengthy rant about how all the dominant algorithms for a particular application permute the inputs in all kinds of fantastically clever ways but then end with "and then we feed it into a neural network".)
I would agree that the specific applications machine learning generally pursues are useless for general AI, but the general theory it develops along the way (probabilistic networks, support vector machines, various clustering techniques, and so on) seems like something an AGI would eventually be built on. Of course, the narrow applications get more funding than the general theory, but that's how it always is. My knowledge/experience of ML is probably even less than yours, though.
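To illustrate what "general theory" means here, a minimal k-means clustering sketch in pure Python: unlike the preprocessing pipelines discussed above, it assumes nothing about its inputs beyond them being numeric vectors, so the same code applies to any domain. The example data is made up.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: alternate between assigning points to their
    nearest center and recomputing each center as its cluster's mean."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Recompute each non-empty cluster's center.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters

# The same call works whether these vectors came from images, text, or audio.
pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centers, clusters = kmeans(pts, k=2)
```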
I have this (OpenCog-influenced) mental image of a superintelli...
I know people have talked about this in the past, but now seems like an important time for some practical brainstorming here. Hypothetical: the recent $15mm Series A funding of Vicarious by Good Ventures and Founders Fund sets off a wave of $450mm in funded AGI projects of approximately the same scope over the next ten years. Let's estimate that a third of that goes to paying for man-years of actual, low-level, basic AGI capabilities research; at, say, $100k per man-year, that's about 1500 man-years. Anything that can show something resembling progress can easily secure another few hundred man-years to keep going.
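A back-of-envelope check of the numbers above. The $100k cost per funded man-year is my own assumption; the scenario only gives the totals.

```python
total_funding = 450_000_000          # hypothetical funding wave, USD
research_share = total_funding // 3  # third going to basic capabilities work
cost_per_man_year = 100_000          # assumed fully loaded cost per man-year, USD

man_years = research_share // cost_per_man_year
print(man_years)  # 1500
```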
Now, if this scenario comes to pass, it seems like one of the worst-case outcomes -- if AGI is possible today, that's a lot of highly incentivized, well-funded research pushing to make it happen, without strong safety incentives. It seems to depend on VCs realizing the high potential impact of an AGI project, and on the companies having access to good researchers.
The Hacker News thread suggests that some people (VCs included) probably already realize the high potential impact, without much consideration for safety:
Is there any way to reverse this trend in public perception? Is there any way to reduce the number of capable researchers? Are there any other angles of attack for this problem?
I'll admit to being very scared.