If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
I've mostly been here for the sequences and interesting rationality discussion, and I know very little about AI outside of the general problem of FAI, so apologies if this question is extremely broad.
I stumbled upon this Facebook group (Model-Free Methods) https://www.facebook.com/groups/model.free.methods.for.agi/416111845251471/?notif_t=group_comment_reply discussing a recent LW post, and they seem to cast LW's "reductionist AI" approach to AI in a negative light compared to their "neural network paradigm".
These people seem confident that deep learning and neural networks are superior to some unspecified LW approach. Can anyone give a high-level overview of what the LW approach to AI is, possibly contrasted with theirs?
I think this sums up the problem: if you want to build a safe AI, you can't use neural nets, because you have no clue what the system is actually doing.