TL;DR: meditation is a process for altering the harmonics of brainwaves to produce good brain states.

Pulling together a few threads:

I think we can infer a lot from these observations, but I'll leave those inferences for another post.


I talked with Mike Johnson a bunch about this at a recent SSC meetup, and think that connectome-specific harmonic waves (CSHW) are a cool way to look at brain activity, but that associating them directly with the valence of experience (the simple claim "harmonic CSHW ≡ good") has a bunch of empirical consequences that seem probably false to me. (This is a good thing, in many respects, because it points at a series of experiments that might convince one of us!)

One observation: I think this is a 'high-level' or 'medium-level' description of what's going on in the brain, in a way that makes it difficult to buy as a target. If I think about meditation as something like having one thing on the stack, or as examining your code in order to refactor it, or as directing your attention at itself, then I can see what's going on in a somewhat clear way. And it's easy to see how having one thing on the stack might increase the harmony (defined as a statistical property of a distribution of energies in the CSHW), but the idea that the goal was to increase the harmony, and having one thing on the stack just happens to do so, seems unsupported.
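
To make "a statistical property of a distribution of energies" concrete, here's a toy sketch. The negative-entropy score and the numbers are my own illustrative stand-ins, not the actual CSHW harmony metric:

```python
import numpy as np

def harmony(energies: np.ndarray) -> float:
    """Toy 'harmony' score: negative Shannon entropy of the normalized
    energy distribution over harmonics. Higher = energy concentrated in
    fewer modes. An illustrative stand-in, not the actual CSHW metric."""
    p = energies / energies.sum()
    p = p[p > 0]                          # drop empty modes so log is defined
    return float(np.sum(p * np.log(p)))   # = -entropy; maximum is 0.0

diffuse = np.ones(16)                     # energy spread evenly over 16 modes
focused = np.array([15.0] + [1.0] * 15)  # mostly in one mode
print(harmony(diffuse))   # ~ -2.77 (-log 16): low 'harmony'
print(harmony(focused))   # ~ -1.37: higher, 'one thing on the stack'
```

On this toy picture, "one thing on the stack" concentrates energy and raises the score as a side effect, which is compatible with the score being a correlate rather than the target.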

I do like that this has an answer for 'dark room' objections (if the brain just minimizes prediction error, why not sit in a dark room where everything is maximally predictable?) that seems superior to the normal 'priors' story for Friston-style approaches, in that you're trying to maximize a property (though you still smuggle in the goals through the arrangement of the connectome, but that's fine, because they had to come from somewhere).

> Meditation, and anything that sets up harmonic neuronal oscillation, makes brain activity more symmetric, hence better or good.

I think this leap is bigger than it might seem, because it's not clear that you have control loops on the statistical properties of your brain as a whole. It reads like a type error, equivocating between individual loops and the statistical properties of the population of loops.
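
A minimal sketch of the type distinction (all names hypothetical): an individual loop controls a signal it perceives, while the population-level statistic lives at a different type that no individual loop perceives.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class ControlLoop:
    """One hypothetical loop: perceives one signal, compares it to one reference."""
    reference: float
    perceive: Callable[[], float]

    def error(self) -> float:
        return self.reference - self.perceive()

def error_dispersion(loops: Sequence[ControlLoop]) -> float:
    """A statistic *about* the population -- note the type:
    Sequence[ControlLoop], not ControlLoop. Nothing in the ControlLoop
    interface perceives this quantity, so by default no loop controls it;
    treating it as some loop's reference signal is the equivocation."""
    errs = [loop.error() for loop in loops]
    mean = sum(errs) / len(errs)
    return sum((e - mean) ** 2 for e in errs) / len(errs)
```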

Now, it may turn out that 'simplicity' is the right story here, where harmony / error-minimization / etc. are just very simple things to build and so basically every level of the brain operates on that sort of principle. In a draft of the previous paragraph I had a line that said "well, but it's not obvious that there's a control loop operating on the control loops that has this sort of harmony as an observation" but then I thought "well, you could imagine this is basically what consciousness / the attentional system is doing, or that this is true for boring physical reasons where the loops are all swimming in the same soup and prefer synchronization."
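
The 'same soup' possibility has a standard toy model: Kuramoto-style coupled oscillators, where synchronization falls out of weak mean-field coupling once it crosses a threshold, with no controller observing the population at all. A minimal sketch (parameters made up):

```python
import numpy as np

def order_parameter(theta):
    """|mean of e^{i*theta}|: ~0 = incoherent, 1 = fully synchronized."""
    return float(np.abs(np.exp(1j * theta).mean()))

def simulate(K, n=100, dt=0.01, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)          # each oscillator's own frequency
    theta = rng.uniform(0.0, 2 * np.pi, n)   # random starting phases
    for _ in range(steps):
        # mean-field Kuramoto coupling: each phase is nudged toward the rest
        pull = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
        theta += dt * (omega + K * pull)
    return order_parameter(theta)

print(simulate(K=0.0))  # uncoupled: stays incoherent (~0.1)
print(simulate(K=2.0))  # coupled 'soup': synchronizes (~0.9)
```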

But this is where we need to flesh out some implementation details and see if it makes the right sorts of predictions. In particular, I think a 'multiple drives' model makes sense, and lines up easily with the hierarchical control story, but I don't see a simple way that it also lines up with the harmony story. (In particular, I think lots of internal conflicts make sense as two drives fighting over the same steering wheel, whereas a 'maximize harmony' story needs really strong boundary conditions to create the same sorts of conflicts. Now, positing really strong boundary conditions is pretty sensible, but it still makes for a weird theory of long-term arrangement, because you should expect the influence of the boundary conditions to be something the long-term arrangement can adjust.)
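
For contrast, here's the two-drives conflict in its most stripped-down form (all numbers made up): two proportional controllers sharing one state settle at a compromise neither wants, with both error signals permanently nonzero, and no special boundary conditions needed to produce the conflict.

```python
# Two drives, one steering wheel: each hypothetical loop wants the
# shared state x at a different set-point, and their corrections add.
x, gain = 0.5, 0.1
ref_a, ref_b = 1.0, -1.0           # conflicting references (made up)
for _ in range(200):
    x += gain * (ref_a - x) + gain * (ref_b - x)
print(x)                           # -> ~0.0, the average of the two refs
print(ref_a - x, ref_b - x)        # both errors stay nonzero forever: a
                                   # persistent tug-of-war that falls out
                                   # of the two-drives picture for free
```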