What I previously took from Cached Thoughts was primarily the picture of us failing to examine memes before trotting them out on cue. Some snippet of information A gets lodged *from outside* as the response to some question Q. Whenever Q appears, we consult the lookup table and out pops A.

More recently I was reflecting on a pattern I had noticed in my work as a statistical programmer, and realised I was following it on a regular basis: caching action sequences that worked.

It goes as follows. My code doesn't work in situation Q, so try X. It still doesn't work, so try Y. It still doesn't work, so try Z. It works. So, when presented with failure Q, do XYZ.
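As a minimal sketch (all the names here are hypothetical), the pattern amounts to a lookup table from failure signatures to action sequences, where a successful sequence gets cached wholesale, whether or not every step in it mattered:

```python
fix_cache = {}  # failure signature Q -> action sequence that worked once

def handle_failure(q, candidate_actions, works):
    """Replay a cached fix for q if one exists; otherwise try candidate
    actions in order until works() reports success, then cache the whole
    applied prefix, whether or not every step in it mattered."""
    if q in fix_cache:                   # cache hit: do XYZ on cue
        for action in fix_cache[q]:
            action()
        return
    applied = []
    for action in candidate_actions:
        action()                         # apply the next idea
        applied.append(action)
        if works():                      # it works now...
            fix_cache[q] = applied       # ...so cache everything we did
            return
```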

Clearly, this is not proper science. However, it is fairly effective: when presented with that particular kind of failure again, XYZ will make things work. But change the problem slightly and XYZ may well fail, because we don't fully understand why it succeeded in the first instance.

Why not do proper science all the time, then? I'd love to, in theory. However, if we look at just those three available actions: possibly the order matters, in which case there are 5 other orderings of XYZ to explore. Possibly some subset suffices, and there are four subsets we didn't examine at all yet: {Y}, {Z}, {X,Z}, {Y,Z}. The length-two subsets each have two orderings, and there is also the reversed ordering YX of the pair we did try, so there are something like 12 remaining action sequences to try out.
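To sanity-check that count, here is a quick enumeration sketch, assuming a candidate experiment is any ordering of any non-empty subset of the three actions:

```python
from itertools import combinations, permutations

actions = ["X", "Y", "Z"]

# Every ordering of every non-empty subset of the actions.
all_sequences = {
    seq
    for r in range(1, len(actions) + 1)
    for subset in combinations(actions, r)
    for seq in permutations(subset)
}

# The path the debugging session actually took: X, then XY, then XYZ.
already_tried = {("X",), ("X", "Y"), ("X", "Y", "Z")}

remaining = all_sequences - already_tried
print(len(all_sequences), len(remaining))  # 15 12
```

Fifteen sequences in total for a three-action menu, twelve untried; add a fourth action and the total jumps to 64, which is the explosion in question.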

Of course, with intuition, many of these action sequences could be dismissed. But the point is that testing them is pretty time-consuming and explodes with the size of the action menu. The cost of testing is not just cognitive: suppose that testing one action sequence burns $1000 of resources, or that training a model takes 24 hours. Or suppose you just need this thing to work right now and have three ideas: do you faff about trying to understand the optimal solution, or do you just throw all three at it and fix the problem?

So, I see why my brain often wants to stick with the tried and true method. The risky element is that it then wants to confabulate some reason why XYZ in particular worked. E.g. suppose X was ineffective and only YZ mattered; I'll find I still want to tell a story about why X was important. Consciously I'm now trying to avoid this by saying something like, "I suspected XYZ would work and it did, but I'm not sure how to credit that to the components."

What would a good heuristic for finding the minimal solution look like? I'm talking about general principles that could help pare back to a minimal solution independently of the underlying subject matter. My first thought would be to try substrings of the existing solution, starting from the end (Z, then YZ, then Y).
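One possible sketch of that heuristic, assuming a `reset` that restores the original broken state and a `works_after` test that applies a candidate sequence and reports success (both hypothetical):

```python
def pare_back(solution, reset, works_after):
    """Return a shorter action sequence that still fixes the problem,
    or the original solution if nothing shorter works."""
    # Suffixes first (Z, then YZ), then the remaining single actions.
    candidates = [solution[i:] for i in range(len(solution) - 1, 0, -1)]
    candidates += [[a] for a in solution[:-1]]
    for seq in sorted(candidates, key=len):   # shortest candidates first
        reset()                               # restore the broken state
        if works_after(seq):                  # apply seq and test
            return seq
    return solution
```

This only checks a handful of the possible subsequences, so it can miss the true minimum; the trade-off is that the number of tests stays linear in the length of the cached solution rather than exploding.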

The other issue is making the action set legible. It's easy to make changes in the world without clearly recording what those changes are or bucketing them into discrete containers. We may have an idea for an action W and implement it, but only afterwards realise it could be decomposed into Y and Z. If we don't reflect on that decomposition, in future problem Q may well result in a cache hit on W. I'd certainly be interested in patterns people have for forcing these decompositions more regularly.
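One hypothetical pattern, sketched below rather than claimed to work in practice: register every change as a named atomic step, and store a composite action like W as an explicit list of atoms, so the decomposition is written down at implementation time rather than reconstructed later.

```python
atoms = {}  # registry: name -> atomic action (a zero-argument callable)

def atom(fn):
    """Register a function as a named atomic action."""
    atoms[fn.__name__] = fn
    return fn

@atom
def clear_cache():      # hypothetical atom, playing the role of Y
    print("cache cleared")

@atom
def rebuild_index():    # hypothetical atom, playing the role of Z
    print("index rebuilt")

# The composite W is stored as its decomposition, not as an opaque
# blob of changes.
composites = {"reset_storage": ["clear_cache", "rebuild_index"]}

def run(name):
    """Run an action by name, expanding composites into their atoms."""
    for step in composites.get(name, [name]):
        atoms[step]()
```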

In summary: caching may save a lot more effort than I had previously given it credit for, while also muddying my thinking much more often than I had thought was the case, even in highly goal-directed behaviour.

1 comment:

I really like combinatorial explosions as the reason for superstition/cargo culting as a concept handle. Thanks.