XiXiDu comments on Reframing the Problem of AI Progress - Less Wrong
I haven't heard of any justification for why it might only take "nine people and a brain in a box in a basement". I think some people are too convinced of the AIXI approximation route and therefore believe that it is just a math problem that only takes some thinking and one or two deep insights.
Every success in AI so far has relied on a huge team, whether IBM Watson, Siri, Big Dog, or the various self-driving cars.
It takes a company like IBM to design such a narrow AI: Watson alone combines more than 100 algorithms. Could it have been done without a lot of computational and intellectual resources?
The basement approach seems ridiculous given the above.
IBM Watson started in a rather small team (2-3 people); IBM started dumping resources on them once they saw serious potential.
I didn't mean to endorse that. What I was thinking when I wrote "hire the most promising AI researchers to do research in secret" was that if there are any extremely promising AI researchers who are convinced by the argument but don't want to give up their life's work, we could hire them to continue in secret just to keep the results away from the public domain. And also to activate suitable contingency plans as needed.
My thoughts on what the main effort should be are still described in Some Thoughts on Singularity Strategies.
Inductive inference is "just a math problem". That's the part that models the world - which is what our brain spends most of its time doing. However, it's probably not "one or two deep insights". Inductive inference systems seem to be complex and challenging to build.
Everything is a math problem. But that doesn't mean you can build a brain by sitting in your basement and literally thinking it up.
A well-specified math problem, then, in contrast with fusion or space travel.
How is intelligence well specified compared to space travel? For space travel, we know the physics well enough, and we know we want to get from point A to point B. With intelligence, we don't even quite know what exactly we want from it. We know of some ridiculously slow method with towers-of-exponents runtime, and that means precisely nothing.
The claim was: inductive inference is just a math problem. If we knew how to build a good-quality, general-purpose stream compressor, the problem would be solved.
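The compressor-as-inductor idea can be sketched in a few lines. This is a toy illustration only, using zlib as a crude stand-in for an ideal general-purpose compressor; the helper names and candidate continuations are made up for the example. The principle: the continuation that compresses best together with the observed stream is the most plausible prediction.

```python
import random
import zlib

def compressed_len(data: bytes) -> int:
    # Length after zlib compression: a rough proxy for the length of the
    # shortest description of the data (an ideal compressor would approach it).
    return len(zlib.compress(data, 9))

def predict_continuation(history: str, candidates: list[str]) -> str:
    # Pick the candidate whose concatenation with the history compresses
    # best. Under the compression-as-induction view, that is the most
    # plausible continuation of the stream.
    return min(candidates, key=lambda c: compressed_len((history + c).encode()))

history = "01" * 200                                      # strongly patterned stream
random.seed(0)
noise = "".join(random.choice("01") for _ in range(50))  # unpatterned bits
guess = predict_continuation(history, ["01" * 25, noise])
```

Here `guess` comes out as the pattern-continuing candidate, since extending the period-2 pattern adds almost nothing to the compressed length, while the random bits do. Of course, zlib only captures shallow regularities; the whole difficulty of inductive inference is doing this for arbitrary structure.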