gaffa comments on What Curiosity Looks Like - Less Wrong

31 Post author: lukeprog 06 January 2012 09:28PM


Comments (283)


Comment author: Vaniver 07 January 2012 04:38:46AM 8 points

They would study artificial intelligence to learn the algorithms, the math, the laws of how an ideal agent would acquire true beliefs.

Really? The others make sense, but it's not clear this will be useful to a human trying to learn things themselves. If I want to notice patterns, "plug all of your information into a matrix and perform eigenvector decompositions" is probably not going to get me very far.
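(For concreteness: the "plug your information into a matrix and perform eigenvector decompositions" recipe Vaniver alludes to is essentially principal component analysis. A minimal sketch, assuming NumPy and made-up correlated data, of what that mechanical pattern-finding looks like:)

```python
import numpy as np

# Hypothetical data: 100 observations of 3 measurements that all track
# one shared latent factor, plus a little independent noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
data = np.hstack([latent + 0.1 * rng.normal(size=(100, 1)) for _ in range(3)])

# "Plug the information into a matrix": form the covariance matrix,
# then eigendecompose it to extract the dominant patterns (PCA).
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# The eigenvector with the largest eigenvalue is the strongest shared
# pattern; its eigenvalue's share of the total is the variance explained.
dominant = eigvecs[:, np.argmax(eigvals)]
explained = eigvals.max() / eigvals.sum()
print(f"fraction of variance in the top component: {explained:.2f}")
```

Which, of course, rather supports the point: the algorithm is mechanical and matrix-shaped, not something a human introspectively runs.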

Comment author: gaffa 07 January 2012 01:47:03PM 1 point

At least for me, studying some machine learning has broadened my perspective on rationality in general. Even if we humans don't apply the algorithms found in machine learning textbooks ourselves, I still find it illuminating to study how we try to make machines perform rational inference. The field also concerns itself with more general, if you will philosophical, questions relating to e.g. how to properly evaluate the performance of predictive agents, the trade-off between model complexity and generality, and the issue of overfitting. These kinds of questions are very general in nature and should be of interest to students of any kind of learning agent, be they human or machine.