Manfred comments on Open Thread, Jun. 22 - Jun. 28, 2015 - Less Wrong

6 Post author: Gondolinian 22 June 2015 12:01AM


Comments (203)


Comment author: cgag 23 June 2015 05:29:02AM 1 point

I've mostly been here for the sequences and interesting rationality discussion. I know very little about AI outside of the general problem of FAI, so apologies if this question is extremely broad.

I stumbled upon this Facebook group (Model-Free Methods) https://www.facebook.com/groups/model.free.methods.for.agi/416111845251471/?notif_t=group_comment_reply discussing a recent LW post, and they seem to cast LW's "reductionist AI" approach to AI in a negative light compared to their "neural network paradigm".

These people seem confident that deep learning and neural networks are superior to some unspecified LW approach. Can anyone give a high-level overview of what the LW approach to AI is, possibly contrasted with theirs?

Comment author: Manfred 23 June 2015 05:20:27PM *  3 points

There isn't really an "LW approach to AI," but there are some factors at work here. If there's one universal LW buzzword, it's "Bayesian methods," though that's not an AI design; one might call it a conceptual stance. There's also LW's focus on decision theory, which, while still not an AI design, is usually expressed as short, "model-dependent" algorithms. It would also be nice for a self-improving AI to have a human-understandable method of value learning, which shifts focus away from black-box methods.
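To make the "conceptual stance" concrete: here is a minimal sketch (my illustration, not anything from the comment) of the kind of explicit, inspectable probabilistic update the Bayesian stance favors, in contrast to the opaque learned weights of a neural network. All numbers are hypothetical.

```python
# A Bayesian update keeps beliefs as explicit probabilities updated by
# Bayes' rule, so every step of the reasoning can be inspected --
# unlike the weights inside a black-box neural network.

def bayes_update(prior: float, likelihood: float, marginal: float) -> float:
    """Return the posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# Hypothetical numbers: prior P(H) = 0.3, P(E|H) = 0.8, P(E|~H) = 0.2
prior = 0.3
p_e_given_h = 0.8
p_e_given_not_h = 0.2

# Law of total probability: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
marginal = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # 0.38

posterior = bayes_update(prior, p_e_given_h, marginal)
print(round(posterior, 4))  # 0.6316
```

Observing evidence that is four times likelier under H than under ~H raises the belief in H from 0.3 to about 0.63, and each intermediate quantity is a named, checkable number.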

As to whether there's some tribal conflict to be worried about here, nah, probably not.