Simulation_Brain comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM


Comment author: JoshuaZ 15 August 2010 10:31:10PM *  4 points [-]

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

One obvious piece of evidence is that many forms of narrow learning are provably limited in what they can express. There are, for example, a whole host of theorems about what different classes of neural networks can actually recognize, and the results aren't very impressive. Similarly, support vector machines have a lot of trouble learning anything that isn't a very simple statistical model, and even then humans need to decide which statistics are relevant. Other linear classifiers run into similar problems.
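To make the limitation concrete (my own illustration, not from the thread): the classic example is XOR, which is not linearly separable, so no single linear threshold unit can classify all four input points correctly no matter how its weights are chosen. A brute-force scan over a grid of weights shows the best such a unit can do is 3 out of 4:

```python
# Brute-force check that no single linear threshold unit (a perceptron
# with no hidden layer) classifies all four XOR points correctly.
import itertools

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]  # XOR labels

def accuracy(w1, w2, b):
    # Threshold unit: output 1 iff w1*x1 + w2*x2 + b > 0.
    preds = [1 if w1 * x1 + w2 * x2 + b > 0 else 0 for x1, x2 in X]
    return sum(p == t for p, t in zip(preds, y)) / 4

# Scan a dense grid of weights and biases; the best any linear unit
# can do on XOR is 3 of the 4 points (0.75 accuracy).
grid = [i / 10 for i in range(-20, 21)]
best = max(accuracy(w1, w2, b)
           for w1, w2, b in itertools.product(grid, repeat=3))
print(best)  # 0.75
```

Adding a hidden layer (a nonlinear feature) fixes this particular case, which is exactly why the interesting theorems concern what fixed architectures can and cannot represent.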

Comment author: Simulation_Brain 18 August 2010 06:20:49AM 3 points [-]

I work in this field, and was under approximately the opposite impression; that voice and visual recognition are rapidly approaching human levels. If I'm wrong and there are sharp limits, I'd like to know. Thanks!

Comment author: timtyler 18 August 2010 06:31:35AM *  2 points [-]

Machine intelligence has surpassed "human level" in a number of narrow domains. Already, humans can't manipulate enough data to do anything remotely like what a search engine or a stockbot does.

The claim seems to be that in narrow domains there are often domain-specific "tricks" - that wind up not having much to do with general intelligence - e.g. see chess and go. This seems true - but narrow projects often broaden out. Search engines and stockbots really need to read and understand the web. The pressure to develop general intelligence in those domains seems pretty strong.

Those who make a big deal about the distinction between their projects and "mere" expert systems are probably mostly trying to market their projects before they are really experts at anything.

One of my videos discusses the issue of whether the path to superintelligent machines will be "broad" or "narrow":

http://alife.co.uk/essays/on_general_machine_intelligence_strategies/

Comment author: JoshuaZ 18 August 2010 03:28:59PM 0 points [-]

Thanks, it always is good to actually have input from people who work in a given field. So please correct me if I'm wrong but I'm under the impression that

1) neural networks cannot in general detect connected components unless the network has some form of recurrence. 2) No one knows how to make a recurrent neural network learn in any effective, marginally predictable fashion.

This is the sort of thing I was thinking of. Am I wrong about 1 or 2?

Comment author: Simulation_Brain 20 August 2010 08:58:47PM 1 point [-]

Not sure what you mean by 1), but certainly, recurrent neural nets are more powerful. 2) is no longer true; see for example the GeneRec algorithm. It does something much like backpropagation, but since no derivatives are explicitly calculated, recurrent loops pose no problem.
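The core idea can be sketched in a few lines (my own toy illustration; the variable names and the one-weight "network" are invented for clarity, and this omits GeneRec's symmetric/midpoint variants): the weight change is driven by the difference between a target-clamped "plus phase" activation and the network's free-running "minus phase" activation, rather than by an explicit derivative.

```python
# Minimal sketch of a GeneRec-style update (after O'Reilly, 1996):
# dw = lr * sender_minus * (receiver_plus - receiver_minus).
# No derivative of the activation function is ever computed.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w_hid_out = 0.1   # single hidden-to-output weight (toy example)
lr = 0.5          # learning rate

x = 1.0                                 # input activation
h_minus = sigmoid(0.3 * x)              # hidden unit, free ("minus") phase
o_minus = sigmoid(w_hid_out * h_minus)  # network's own output, minus phase
o_plus = 1.0                            # output clamped to target ("plus" phase)

# GeneRec rule: pre-synaptic minus-phase activity times the
# plus-minus difference at the post-synaptic unit.
dw = lr * h_minus * (o_plus - o_minus)
w_hid_out += dw
print(dw > 0)  # True: output was below target, so the weight increases
```

Because the update only compares the two settling phases, the same rule applies whether or not the network contains recurrent loops, which is the point being made above.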

On the whole, neural net research has slowed dramatically because of the common view you've expressed, but progress continues apace: neural nets are not far behind cutting-edge vision and speech processing algorithms, while working much more like the brain does.

Comment author: JoshuaZ 21 August 2010 02:47:12PM 0 points [-]

Thanks. GeneRec sounds very interesting. Will take a look. Regarding 1, I was thinking of something like the theorems in chapter 9 of Minsky and Papert's Perceptrons, which show that there are strong limits on which topological features of an input a non-recurrent neural net can recognize.