eli_sennesh comments on The Brain as a Universal Learning Machine - LessWrong

Post author: jacob_cannell 24 June 2015 09:45PM

Comment author: [deleted] 27 June 2015 12:25:26AM 1 point

Yes, but expecting any reasoner to develop well-grounded abstract concepts without any grounding in features, and then to care about them, is... well, it's not actually complete bullshit, but expecting it to actually happen relies on solving some problems I haven't seen solved.

You could, hypothetically, just program your AI to infer "goodness" as a causal-role concept from the vast sums of data it gains about the real world and our human opinions of it, and then "maximize goodness", formulated as another causal role. But this requires sophisticated machinery for dealing with causal-role concepts, which I haven't seen developed to that extent in any literature yet.
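To make that hypothetical concrete, here's a minimal toy sketch (all data and names are hypothetical, not from any existing system) of the easy, feature-level version of this: fit a "goodness" score to human ratings of observed situations, then pick whichever candidate outcome maximizes the learned score. The point stands that this grounds "goodness" directly in features; the causal-role machinery being asked for would have to go well beyond this kind of direct regression.

```python
# Toy sketch (hypothetical data and names): infer a "goodness" score
# from human ratings of situations, then choose the action whose
# predicted outcome maximizes that score. This is the feature-level
# shortcut, not actual causal-role concept machinery.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: feature vectors describing situations, paired
# with scalar human approval ratings of each situation.
situations = rng.normal(size=(200, 5))          # 200 situations, 5 features
true_weights = np.array([1.0, -0.5, 0.0, 2.0, 0.3])
ratings = situations @ true_weights + rng.normal(scale=0.1, size=200)

# "Infer goodness": least-squares fit of ratings onto raw features.
learned_weights, *_ = np.linalg.lstsq(situations, ratings, rcond=None)

def predicted_goodness(state: np.ndarray) -> float:
    """Score a situation with the learned, purely feature-level proxy."""
    return float(state @ learned_weights)

# "Maximize goodness": pick the candidate outcome that scores highest
# under the learned proxy.
candidate_outcomes = rng.normal(size=(10, 5))
best = max(candidate_outcomes, key=predicted_goodness)
print("chosen outcome:", best, "score:", predicted_goodness(best))
```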

Usually, reasoners develop causal-role concepts in order to explain what their feature-level concepts are doing; thus, causal-role concepts abstracted over concepts that don't eventually root themselves in features are usually dismissed as useless metaphysical speculation, or at least as abstract wankery one doesn't care about.

Comment author: Houshalter 27 June 2015 08:29:39AM 0 points

I don't think you are responding to the correct comment. Or at least I have no idea what you are talking about.