
hydkyll comments on Open Thread, Jun. 22 - Jun. 28, 2015 - Less Wrong Discussion

Post author: Gondolinian, 22 June 2015 12:01AM (6 points)




Comment author: cgag 23 June 2015 05:29:02AM 1 point

I've mostly been here for the Sequences and the interesting rationality discussion; I know very little about AI outside of the general problem of FAI, so apologies if this question is extremely broad.

I stumbled upon this Facebook group (Model-Free Methods) https://www.facebook.com/groups/model.free.methods.for.agi/416111845251471/?notif_t=group_comment_reply discussing a recent LW post, and they seem to cast LW's "reductionist AI" approach to AI in a negative light compared to their "neural network paradigm".

These people seem confident that deep learning and neural networks are superior to some unspecified LW approach. Can anyone give a high-level overview of what the LW approach to AI is, possibly contrasted with theirs?

Comment author: hydkyll 23 June 2015 09:08:47PM 0 points

I think this sums up the problem: if you want to build a safe AI, you can't use neural nets, because you have no clue what the system is actually doing.

Comment author: Kaj_Sotala 24 June 2015 03:54:05AM 2 points

If we genuinely had no idea what neural nets were doing, NN research wouldn't be getting anywhere. But that's obviously not the case.

More to the point, there's promising-looking work going on toward a better understanding of what various NNs actually represent. Deep learning networks might actually have relatively human-comprehensible features at some of their layers (see e.g. the first link).

Furthermore, it's not clear that any other human-level machine learning model would be any more comprehensible. In the worst case, we have something like a billion variables in a million dimensions: good luck trying to understand how that works, regardless of whether it's a neural network or not.