latanius comments on What Curiosity Looks Like - Less Wrong

Post author: lukeprog 06 January 2012 09:28PM


Comment author: Vaniver 07 January 2012 04:38:46AM 8 points

They would study artificial intelligence to learn the algorithms, the math, the laws of how an ideal agent would acquire true beliefs.

Really? The others make sense, but it's not clear this one will be useful to a human trying to learn things on their own. If I want to notice patterns, "plug all of your information into a matrix and perform eigenvector decompositions" is probably not going to get me very far.
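(For the curious: the "matrix plus eigenvector decomposition" recipe being waved at here is essentially principal component analysis. A minimal sketch, using NumPy and hypothetical toy data, of what an ideal agent's version of "noticing a pattern" might look like:)

```python
import numpy as np

# Toy data (hypothetical): 100 samples of 3 features, where the first two
# features are strongly correlated and the third is independent noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
data = np.hstack([
    base,
    2 * base + rng.normal(scale=0.1, size=(100, 1)),
    rng.normal(size=(100, 1)),
])

# "Plug all of your information into a matrix and perform eigenvector
# decompositions": eigendecompose the covariance matrix (this is PCA).
cov = np.cov(data, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)  # ascending eigenvalues

# One eigenvalue dominates, because the first two features move together:
# a single direction explains most of the variance -- the "pattern".
dominant_fraction = eigenvalues[-1] / eigenvalues.sum()
print(dominant_fraction)
```

The point of the joke stands: this is how an idealized pattern-finder might operate, not something a human can run introspectively.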

Comment author: latanius 07 January 2012 12:31:48PM 0 points

True in a way: for example, emulating a planning algorithm in your mind is a terribly inefficient way of making decisions. However, in order to understand the concept of "how an algorithm feels from the inside", you need to think of yourself as an algorithm too, which is (I guess) very hard if you have no idea at all how agents like you might work.

So, as I see it, AI gives you a better grasp of "map vs. territory". Instead of "the map is the equations, the territory is what I see", you get "my mind is also a map, so where I see a pattern, maybe there is none". (See confirmation bias.)