
Gunnar_Zarncke comments on Zooming your mind in and out - Less Wrong Discussion

8 points · Post author: John_Maxwell_IV · 06 July 2015 12:30PM




Comment author: Gunnar_Zarncke 06 July 2015 08:18:03PM 1 point

This reminds me of a discussion comparing learning styles to search algorithms that I once took part in on c2.

Quote:

Me: I don't use depth-first learning but rather A* learning, meaning that I keep a learning goal in mind at all times (for as long as I can remember) and try to learn everything that contributes to this goal (that minimizes the distance to it). The goal is motivated by curiosity about how things work or could be made to work (at an abstract scale, including social problems), and the distance to the goal is measured by how useful a piece of knowledge is for achieving it. Interestingly, I have found that when I learn this way, all the pieces of information quickly form a coherent picture and fit together. Though I have to admit that this might be my subjective impression, and I hope this beautiful picture is not an artifact of my mind. As for the personal usefulness of this approach, I think it gives me a clear profile as well as in-depth expertise in my field.
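The A* analogy can be made concrete. A minimal sketch of A* search itself, expanding whichever frontier node has the lowest cost-so-far plus heuristic estimate; the `neighbors` and `heuristic` functions in the usage line are illustrative stand-ins, not anything from the original discussion:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* search: always expand the frontier node with the lowest
    f = (cost so far) + (heuristic estimate of remaining distance)."""
    # Each heap entry: (f, cost_so_far, node, path_to_node)
    frontier = [(heuristic(start), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, step in neighbors(node):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(
                    frontier,
                    (new_cost + heuristic(nxt), new_cost, nxt, path + [nxt]),
                )
    return None  # goal unreachable

# Toy example: integers as nodes, moves of +-1 at cost 1,
# heuristic = straight-line distance to the goal.
path = a_star(0, 3, lambda n: [(n - 1, 1), (n + 1, 1)], lambda n: abs(3 - n))
```

The point of the analogy: unlike depth-first search, A* never commits to a branch for its own sake; every expansion is ranked by estimated distance to the goal.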

Matthew: Do you not worry that you will find a local minimum and mistake it for the global one? (cf. simulated annealing) It sounds like you have ("this beautiful picture"). [I think I meant "have worried", not "have mistaken"!] In my case, I suspect I chase whatever goal appears necessary or relevant at the time.
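For reference, the escape mechanism in simulated annealing that Matthew alludes to is the acceptance rule: a worse neighbor is sometimes accepted, with probability exp(-delta/temperature), so the search can climb out of a local minimum; the temperature decays so it eventually settles. A minimal sketch (parameter values and the example landscape are illustrative):

```python
import math
import random

def anneal(x0, cost, neighbor, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Simulated annealing: always accept an improving neighbor; accept a
    worse one with probability exp(-delta / temperature). The temperature
    is multiplied by `cooling` each step, so late moves are nearly greedy."""
    rng = random.Random(seed)
    x, c, t = x0, cost(x0), t0
    best, best_c = x, c
    for _ in range(steps):
        y = neighbor(x, rng)
        d = cost(y) - c
        if d <= 0 or rng.random() < math.exp(-d / t):
            x, c = y, c + d
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c

# Toy example: minimize (x - 3)^2 over the integers, moving +-1 at a time.
best, best_c = anneal(0, lambda x: (x - 3) ** 2,
                      lambda x, rng: x + rng.choice((-1, 1)))
```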

Me: No. My impression is that the universal knowledge space is rather flat. What I do worry about is whether I will ever get near my goal, and whether this goal is really worth it. Lately I have discovered that the space around my optimum seems to be really flat, meaning that I now have the problem that determining the direction of further research is getting difficult. On the other hand, this might mean that my personal world model (locally centered on my learning goal) is now rather consistent. I might try a random walk to break out of this - possibly local - maximum.
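A random walk used this way can be sketched as walk-then-climb: take a few steps that deliberately ignore the cost function, then resume greedy search from wherever you land, keeping the best point seen. A toy illustration (the function names and the cyclic landscape in the usage lines are hypothetical, chosen only to show a local optimum being escaped):

```python
import random

def hill_climb(x, cost, neighbor, rng, samples=20):
    """Greedy local search: move to an improving sampled neighbor
    until no improvement is found."""
    improved = True
    while improved:
        improved = False
        for _ in range(samples):
            cand = neighbor(x, rng)
            if cost(cand) < cost(x):
                x, improved = cand, True
                break
    return x

def walk_and_climb(x, cost, neighbor, walk_len=5, tries=10, seed=0):
    """Escape a (possibly local) optimum: random-walk away from the
    current best point, hill-climb from wherever the walk lands, and
    keep the best result across several tries."""
    rng = random.Random(seed)
    best = hill_climb(x, cost, neighbor, rng)
    for _ in range(tries):
        probe = best
        for _ in range(walk_len):
            probe = neighbor(probe, rng)  # cost is deliberately ignored here
        cand = hill_climb(probe, cost, neighbor, rng)
        if cost(cand) < cost(best):
            best = cand
    return best

# Toy landscape: a cycle of 8 points with a local minimum at index 0
# (cost 1) and the global minimum at index 3 (cost 0).
vals = [1, 4, 2, 0, 2, 4, 6, 5]
best = walk_and_climb(0, lambda x: vals[x % 8],
                      lambda x, rng: (x + rng.choice((-1, 1))) % 8)
```

The design trade-off versus annealing: annealing interleaves exploration into every step, while the random-walk restart keeps plain hill-climbing and bolts the exploration on afterwards; which works better depends on how flat the region around the current optimum really is.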