A couple of years ago, when I delved into the details of HTM, a machine-learning algorithm and neuroscience model, I had an idea about the nature of thought, specifically problem-solving thought. In the lingo of this site, it's a theory about the internal workings of the Babble generator.
HTM theory, which now might go by the name "Thousand Brains Theory", posits that the mammalian cortex learns a model of the world in the form of temporal sequences of sparse distributed representations. This world model provides you with predictions of future events. You can use it to create "rollouts" and sample the future as predicted by your past experience.
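To make the rollout idea concrete, here is a toy sketch in Python. It is not Hawkins' actual HTM algorithm; the states, the transition table, and the `rollout` function are all illustrative assumptions, standing in for a learned sequence memory over sparse representations.

```python
# Toy sketch of "rollouts" over sparse representations -- NOT the actual
# HTM algorithm, just an illustration of the idea. A state is a sparse
# set of active bits; a hypothetical learned transition table maps each
# state to the states that followed it in past experience.
import random

transitions = {
    frozenset({1, 4, 7}): [frozenset({2, 4, 9}), frozenset({3, 7, 8})],
    frozenset({2, 4, 9}): [frozenset({5, 6, 7})],
    frozenset({3, 7, 8}): [frozenset({1, 4, 7})],
}

def rollout(state, steps):
    """Sample one possible future by repeatedly picking a remembered successor."""
    trajectory = [state]
    for _ in range(steps):
        successors = transitions.get(state)
        if not successors:  # no past experience to predict from
            break
        state = random.choice(successors)
        trajectory.append(state)
    return trajectory

print(rollout(frozenset({1, 4, 7}), steps=3))
```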
That certainly sounds like a useful thing to have. The point where I got stuck was novel problem-solving.
Consider the case of trying to solve a mathematical problem. Sure enough, our Babble generator creates plenty of attempts and seems to keep doing so even after we have thrown in the towel. If these were random rollouts, our Eureka moments would consist of fully formed sequences popping into our heads. Instead, we experience an "insight": a compact, non-sequential whatever-it-is that allows us to unroll the correct sequence consciously.
Now imagine a squirrel encountering a car tire in the woods. There is a nut on top of the tire. Triggered by the situation "standing in front of an unknown black round thing with a nut on top", how does the Babble generator create a solution - i.e. a sequence of actions that leads to a positive outcome? The squirrel has often jumped or climbed on things to retrieve nuts, but it has never seen a tire before. Why would the Babble generator produce a jumping or climbing sequence?
What needs to happen, according to my theory, is that many aspects of the initial trigger, i.e. of the initial representation of the situation, have to be abstracted away until the remaining backbone of the situation triggers a realistic sequence with a positive outcome. For the squirrel, all or most aspects of the tire have to be abstracted away until the situation is very similar to a more common one, like standing in front of a tree or a stone with a nut on top.
The more complex the situation, and the more uncommon the aspects that have to be abstracted away, the longer it will take for the Babble generator to hit upon the right abstract trigger - one that yields a solution that is not pruned as unrealistic or useless. For a difficult mathematical problem, it might search for the correct abstraction subconsciously for quite some time.
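Here is a minimal sketch of what such a search could look like, under my assumptions: a situation is a set of features, memory maps known trigger feature-sets to action sequences, and the search abstracts away features, fewest first, until a remembered trigger matches. The feature encoding and all names are hypothetical.

```python
# Minimal sketch of "search by abstraction" (assumed mechanism, not a
# known algorithm). Drop ever more features from the situation until
# the abstracted version matches a trigger stored in memory.
from itertools import combinations

# Hypothetical memory: trigger feature-set -> known-good action sequence.
memory = {
    frozenset({"object", "climbable", "nut_on_top"}): ["climb", "grab nut"],
}

def search_by_abstraction(situation, memory):
    """Try the concrete situation first, then ever more abstract versions."""
    features = list(situation)
    # Drop 0 features, then 1, then 2, ... -- cheapest abstractions first.
    for n_dropped in range(len(features) + 1):
        for dropped in combinations(features, n_dropped):
            abstracted = frozenset(features) - set(dropped)
            if abstracted in memory:
                return abstracted, memory[abstracted]
    return None, None

situation = {"object", "black", "round", "rubbery", "unknown",
             "climbable", "nut_on_top"}
trigger, plan = search_by_abstraction(situation, memory)
print(trigger, plan)  # matches the tree/stone-like trigger after dropping 4 features
```

Note that the number of candidate abstractions grows combinatorially with the number of features that have to be dropped, which is one way to cash out why more uncommon aspects mean a longer search.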
I call this mechanism "search by abstraction".