
Comment author: Houshalter 27 June 2015 08:00:00AM *  1 point [-]

The outside view is not very good for predicting technology. Every technology spends an eternity not existing, until one day it suddenly does.

Now, no one is saying that deep learning is going to be AGI in 10 years. In fact, deep learning experts have been extremely skeptical of AGI in all its forms, and are certainly not promoting that view. But I think it's a very reasonable opinion that it will lead to AGI within the next few decades, and I believe sooner rather than later.

The reasons that 'this time it is different':

  • NNs are extraordinarily general. I don't think you can say this about other AI approaches. Search and planning algorithms are pretty general too, but they fall back on needing heuristics to shrink the search space. And how do you learn heuristics? That goes back to being a machine learning problem, and neural nets are starting to solve it. E.g. a deep neural net predicted Go moves made by experts 54% of the time (see the toy sketch at the end of this comment).

  • The progress you see is in large part due to advances in computing power. Early AI researchers were working with barely any computing power, and a lot of their work reflects that. That's not to say we have AGI and are just waiting for computers to get fast enough. But computing power lets researchers experiment and actually do research.

  • Empirically, neural nets have made significant progress on a number of different AI domains, e.g. vision, speech recognition, natural language processing, and Go. A lot of previous AI approaches might have sounded cool in theory, or worked on a single domain, but they could never point to actual success on loads of different AI problems.

  • It's more brain-like. I know someone will say that NNs really aren't anything like the brain, and that's true, but at a high level the principles are very similar: learning networks of features and their connections, as opposed to symbolic approaches.

And if you look at the models that are inspired by the brain, like HTM, they are sort of converging on similar algorithms. E.g. they say the important property of the cortex is that it's very sparse and has lateral inhibition, and you see leading researchers propose very similar ideas (a toy sketch of this follows below).
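
To make the sparsity point concrete, here is a toy sketch (my own illustration with made-up names, not HTM's actual algorithm) of lateral inhibition as k-winners-take-all: the units compete, only the k strongest stay active, and the result is a sparse code.

    import numpy as np

    def k_winners_take_all(activations, k):
        # Crude stand-in for lateral inhibition: keep the k strongest
        # units and silence the rest, yielding a sparse code.
        out = np.zeros_like(activations)
        top_k = np.argsort(activations)[-k:]   # indices of the k largest
        out[top_k] = activations[top_k]
        return out

    rng = np.random.default_rng(0)
    x = rng.normal(size=8)                       # dense input activations
    print(x.round(2))
    print(k_winners_take_all(x, k=2).round(2))   # only 2 nonzero entries survive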

Whereas the stuff they do differently is mostly because they want to follow biological constraints, like only local interactions, little memory, and only single bits of information at a time. These aren't restrictions that real computers share, so we don't necessarily need to copy biology in those respects and can do things differently, and even better.
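
And here is the sketch promised in the first bullet: a toy illustration (random weights standing in for a trained network; all names are mine) of using a learned move-scoring policy as the search heuristic, so the search only expands a handful of promising moves instead of all of them.

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(size=16)    # toy linear "policy"; a trained deep net
                               # would play this role in practice

    def policy_score(move_features):
        # Score one candidate move (higher = more promising).
        return float(W @ move_features)

    def prune_moves(candidate_moves, top_n):
        # The learned policy as heuristic: keep only the top_n moves,
        # shrinking the branching factor of the game-tree search.
        ranked = sorted(candidate_moves, key=policy_score, reverse=True)
        return ranked[:top_n]

    moves = [rng.normal(size=16) for _ in range(50)]   # 50 legal moves
    shortlist = prune_moves(moves, top_n=5)            # search just these 5
    print(len(shortlist))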

Comment author: jsteinhardt 27 June 2015 04:34:09PM *  1 point [-]

Several of the above claims don't seem that true to me.

  • Statistical methods are also very general. And neural nets definitely need heuristics (LSTMs are basically a really good heuristic for getting NNs to train well; see the sketch after this list).

  • I'm not aware of great success in Go? 54% accuracy is very hard to interpret in a vacuum in terms of how impressed to be.

  • When statistical methods displaced logical methods, it was because they led to lots of progress across lots of domains. In fact, the delta from logical to statistical was probably much larger than the delta from classical statistical learning to neural nets.
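
To unpack the LSTM remark, here is a minimal numpy sketch of one LSTM step (my simplification: random weights, untrained, biases omitted). The gates make the cell-state update additive, which is the "heuristic" that lets gradients survive over many time steps.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h, c, Wf, Wi, Wo, Wg):
        # One LSTM step. The forget/input/output gates (f, i, o) make
        # the cell state c update additively, so gradients can flow
        # across many time steps instead of vanishing as in a plain
        # recurrent net; that is the trainability trick.
        z = np.concatenate([x, h])
        f = sigmoid(Wf @ z)        # forget gate
        i = sigmoid(Wi @ z)        # input gate
        o = sigmoid(Wo @ z)        # output gate
        g = np.tanh(Wg @ z)        # candidate cell update
        c = f * c + i * g          # additive update of the cell state
        h = o * np.tanh(c)
        return h, c

    rng = np.random.default_rng(2)
    n_in, n_hid = 4, 8
    Wf, Wi, Wo, Wg = [rng.normal(scale=0.1, size=(n_hid, n_in + n_hid))
                      for _ in range(4)]
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    for _ in range(10):            # feed a random input sequence
        h, c = lstm_step(rng.normal(size=n_in), h, c, Wf, Wi, Wo, Wg)
    print(h.round(3))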

Comment author: Brendon_Wong 26 June 2015 12:38:19AM *  0 points [-]

There is definitely no prominent implementation of this concept and its related variations. Many nonprofits offer job training and give people computer and internet access, but starting what is essentially a virtual employment company to help people is not something I have heard of before, hence this program. It is possible that this idea was not implemented before in a charitable way because people start virtual employment companies for profit, and those companies are very successful. As for connecting the impoverished with virtual employment services, it is possible that many people are simply not aware such services exist and thus have not implemented the idea.

Comment author: jsteinhardt 26 June 2015 08:38:58AM 2 points [-]

One important question is whether there used to be implementations of this concept, but for some reason they failed to gain traction. In the world where there is some unexpected pitfall to this plan, you would expect not to see any prominent implementations, but you might be able to find out what the pitfall is if you dig enough, and hopefully circumvent it.

Intuitively I would be quite surprised if no one has tried anything along these lines before, so understanding previous attempts and how they relate to yours seems like it would be quite valuable.

[link] Essay on AI Safety

10 jsteinhardt 26 June 2015 07:42AM

I recently wrote an essay about AI risk, targeted at other academics:

Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems

I think it might be interesting to some of you, so I am sharing it here. I would appreciate any feedback any of you have, especially from others who do AI / machine learning research.

Comment author: Douglas_Knight 26 June 2015 06:40:36AM 0 points [-]

In what sense is minimax frequentist?

Comment author: jsteinhardt 26 June 2015 07:38:12AM *  0 points [-]

From Wikipedia:

Consider the problem of estimating a deterministic (not Bayesian) parameter...

ETA: While that page talks about estimating parameters, most of the math holds for more general actions as well.

Comment author: Douglas_Knight 25 June 2015 10:22:56PM 0 points [-]

When it comes to taking an action, you must structure your knowledge in Bayesian terms in order to compute an expected utility. It is only when discussing detached knowledge that other options become available.

Comment author: jsteinhardt 26 June 2015 04:42:24AM 0 points [-]

??? This isn't true unless I misunderstood you. There are frequentist decision rules as well as Bayesian ones (minimax is one common such rule, though there are others as well).
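
To spell out the contrast, here is the standard decision-theoretic formulation (textbook definitions, nothing specific to that Wikipedia page): writing R(\theta, \delta) for the risk of decision rule \delta when the true parameter is \theta, the two kinds of rules are

    \delta_{\text{minimax}} = \arg\min_{\delta} \max_{\theta} R(\theta, \delta)
    \qquad
    \delta_{\text{Bayes}} = \arg\min_{\delta} \int R(\theta, \delta) \, \pi(\theta) \, d\theta

The minimax rule guards against the worst-case \theta and never mentions a prior, which is why it counts as frequentist; the Bayes rule averages the risk under a prior \pi.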

Comment author: hydkyll 23 June 2015 09:11:16PM 2 points [-]

I want to do a PhD in Artificial General Intelligence in Europe (not machine learning or neuroscience or anything with neural nets). Anyone know a place where I could do that? (Just thought I'd ask...)

Comment author: jsteinhardt 25 June 2015 05:41:29AM 2 points [-]

Just wondering why you don't want to do machine learning? Many ML labs have at least some people who care about AI, and you'll get to learn a lot of useful technical material.

Comment author: Baughn 24 June 2015 05:06:01PM *  2 points [-]

So, some Inside View reasons to think this time might be different:

  • The results look better, and in particular, some of Google's projects are reproducing high-level quirks of the human visual cortex.

  • The methods can absorb far larger amounts of computing power. Previous approaches could not, which makes sense as we didn't have the computing power for them to absorb at the time, but the human brain does appear to be almost absurdly computation-heavy. Moore's Law is producing a difference in kind.

That said, I (and most AI researchers, I believe) would agree that deep recurrent networks are only part of the puzzle. The neat thing is, they do appear to be part of the puzzle, which is more than you could say about e.g. symbolic logic; human minds don't run on logic at all. We're making progress, and I wouldn't be surprised if deep learning is part of the first AGI.

Comment author: jsteinhardt 25 June 2015 05:38:15AM 0 points [-]

which is more than you could say about e.g. symbolic logic; human minds don't run on logic at all

This seems an odd thing to say. I would say that representation learning (the thing that neural nets do) and compositionality (the thing that symbolic logic does) are likely both part of the puzzle?

Comment author: jsteinhardt 23 June 2015 11:12:13PM *  7 points [-]

Comment author: Lumifer 18 June 2015 03:30:20AM 1 point [-]

He is also gay, which kills his chances for the presidency even more effectively than being an atheist. Not to mention that he is not a native-born American citizen, which disqualifies him right off the bat.

Comment author: jsteinhardt 18 June 2015 05:35:27PM 0 points [-]

My response was directed more at the "most sane people" part.

Comment author: skeptical_lurker 17 June 2015 06:53:29PM *  2 points [-]

Elon Musk & Sergey Brin are atheists. Most (but not all) sane people are [EDIT: I probably should have said: most highly rational and highly effective people are atheist/agnostic]. 50% of the US will not vote for an atheist. A large proportion of the population think that hedge fund managers are evil, either because (a) they get paid too much or (b) they think markets work by forcing people to buy/sell things.

I don't mean to just shoot down your idea, but... I just don't think it's going to happen in a democratic system without raising the sanity waterline a lot.

Comment author: jsteinhardt 18 June 2015 02:57:43AM 1 point [-]

Peter Thiel is Christian, I believe.
