Open Thread, Jun. 22 - Jun. 28, 2015 - Less Wrong

6 Post author: Gondolinian 22 June 2015 12:01AM




Comment author: cgag 23 June 2015 05:29:02AM 1 point [-]

I've mostly been here for the sequences and the interesting rationality discussion. I know very little about AI outside of the general problem of FAI, so apologies if this question is extremely broad.

I stumbled upon this Facebook group (Model-Free Methods) https://www.facebook.com/groups/model.free.methods.for.agi/416111845251471/?notif_t=group_comment_reply discussing a recent LW post, and they seem to cast LW's "reductionist AI" approach in a negative light compared to their "neural network paradigm".

These people seem confident that deep learning and neural networks are superior to some unspecified LW approach. Can anyone give a high-level overview of what the LW approach to AI is, possibly contrasted with theirs?

Comment author: Manfred 23 June 2015 05:20:27PM *  3 points [-]

There isn't really a "LW approach to AI," but there are some factors at work here. If there's one universal LW buzzword, it's "Bayesian methods," though that's not an AI design; one might call it a conceptual stance. There's also LW's focus on decision theory, which, while still not an AI design, is usually expressed as short, "model-dependent" algorithms. It would also be nice for a self-improving AI to have a human-understandable method of value learning, which diverts focus away from black-box methods.

As to whether there's some tribal conflict to be worried about here, nah, probably not.

Comment author: hydkyll 23 June 2015 09:08:47PM 0 points [-]

I think this sums up the problem. If you want to build a safe AI, you can't use neural nets, because you have no clue what the system is actually doing.

Comment author: Kaj_Sotala 24 June 2015 03:54:05AM 2 points [-]

If we genuinely had no idea what neural nets were doing, NN research wouldn't be getting anywhere. But that's obviously not the case.

More to the point, there's promising-looking work on getting a better understanding of what various NNs actually represent. Deep learning networks might actually have relatively human-comprehensible features on some of their levels (see e.g. the first link).
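To make the "inspecting what a network represents" idea concrete, here is a minimal sketch of the kind of first-layer inspection that interpretability work does. Everything here is illustrative: random weights stand in for a trained network's first layer (no trained model is available in this thread), and the shapes and the crude "concentration" measure are my own assumptions, not anyone's published method.

```python
import numpy as np

# Illustrative only: random weights stand in for a trained network's
# first layer. In a real analysis W1 would come from a trained model.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 64))  # 16 hidden units, each reading an 8x8 input

# Each row of W1 is (up to scaling) the input pattern that maximally
# excites one unit; reshaping it back to the input geometry lets a
# human look at it as an 8x8 "feature detector" image.
filters = W1.reshape(16, 8, 8)

# A crude comprehensibility proxy: how concentrated is each filter's
# weight energy? Sparse, localized filters tend to be easier to read.
energy = filters ** 2
concentration = energy.max(axis=(1, 2)) / energy.sum(axis=(1, 2))

print(filters.shape)        # (16, 8, 8)
print(concentration.shape)  # (16,)
```

With a trained network, one would plot each 8x8 filter as an image and often see recognizable edge or blob detectors in the first layer, which is the sense in which some levels of a deep net can be human-comprehensible.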

Furthermore, it's not clear that any other human-level machine learning model would be any more comprehensible. Worst case, we have something like a billion variables in a million dimensions: good luck trying to understand how that works, regardless of whether it's a neural network or not.