Mike Johnson


I really enjoyed this piece and think it’s an important topic.

How the brain implements priors, and how they can become maladaptively 'trapped', remains an open question. I suggested last year that we could combine the "hemo-neural hypothesis" — that bloodflow regulates the dynamic range of nearby neurons/nerves — with the "latch-bridge mechanism," whereby smooth muscle (including vascular muscle) can lock itself in a contracted position. I.e. vascular tension is a prediction (Bayesian prior) about the world, and such patterns of microtension can be stored in a very durable form ("smooth muscle latches") that can persist for days, weeks, months, years, even decades.

This frames psychological release — the release of a trapped prior — and vasomuscular release as one and the same process.

https://opentheory.net/2023/07/principles-of-vasocomputation-a-unification-of-buddhist-phenomenology-active-inference-and-physical-reflex-part-i/
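The "trapped prior" idea above can be made concrete with a toy model. The sketch below is purely illustrative and not from the linked piece: a scalar belief is revised by prediction errors, but a "latch" (a loose stand-in for the latch-bridge mechanism) freezes the belief once a sufficiently large surprise arrives, after which no further evidence updates it. All names and parameters are my own assumptions.

```python
def update_belief(belief, observation, learning_rate):
    """One step of belief revision toward an observation."""
    prediction_error = observation - belief
    return belief + learning_rate * prediction_error

def run(observations, latch=False, learning_rate=0.5, threshold=2.0):
    """Track a belief over a stream of observations.

    If `latch` is True, a single large prediction error permanently
    freezes the belief -- a toy analogue of a prior 'trapped' by a
    stored pattern of tension.
    """
    belief = 0.0
    latched = False
    for obs in observations:
        if latched:
            continue  # latched: the prior no longer revises
        if latch and abs(obs - belief) > threshold:
            latched = True  # large surprise triggers the latch
            continue
        belief = update_belief(belief, obs, learning_rate)
    return belief

obs = [3.0] * 20          # the world has shifted to 3.0
flexible = run(obs, latch=False)  # converges near 3.0
trapped = run(obs, latch=True)    # stays at 0.0: the latch fired first
```

The point of the toy is just the asymmetry: the flexible belief tracks the world, while the latched one persists indefinitely regardless of evidence — and "release" would correspond to clearing the latch, not to supplying more data.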

I do think that lower-frequency harmonics will be both better defined and more useful for hanging functional or computational stories on. (Agreed that low-harmonics-as-operators-on-Bayesian-priors could be a very generative frame.) I'm a little skeptical of the current stories being told about functional localization: some of the localization could indeed be spatial, but some could be temporal (information tacitly encoded into harmonics). I think the proof is in the pudding in terms of what each hypothesis lets us do. Probably no one-size-fits-all.

Thanks for the thoughtful comment. I would generally endorse the claims you make, but I'd push back on your analogy that psychologists don't need to know about advances in neuroscience for the same reason programmers don't need to know about transistors, and on the conclusion you draw from it.

First, I'd stand behind the theme that:

The problem facing neuroscience in 2018 is that we have a lot of experimental knowledge about how neurons work — and we have a lot of observational knowledge about how people behave — but we have few elegant compressions for how to connect the two. CSHW promises to do just that: to be a bridge from bottom-up neural dynamics — things we can measure — to high-level psychological/phenomenological/psychiatric phenomena — things we care about. And a bottom-up bridge like this should also allow continuous improvement as our understanding of the fundamentals improves, as well as significant unification across disciplines: instead of psychology, psychiatry, philosophy, and so on each having their own (slightly incompatible) ontologies, a true bottom-up approach can unify these different ways of knowing and serve as a common platform, a lingua franca for high-level brain dynamics.

In short, brain-stuff isn't neatly modularized like computer-stuff, and so advances in the lower levels of the stack can have big impacts on how things are (or should be) done higher up in the stack. If CSHW does turn out to be generative in the ways I list, I think it'll have a direct impact on psychology and psychiatry; they couldn't help but change. In particular, a theory which might allow unification across the different psychological sciences is a big deal.

Re: your conclusion, I think it's easy to underestimate how risk-averse academia is, and the degree to which academic politics plays a role in which ideas gain traction and which don't. The idea that psychologists and psychiatrists are currently working from good models, and that if a better model comes around, science will straightforwardly prove the new model is better and the community will naturally and quickly adopt it — I think it would be great if all of these things were true, but I have little confidence any of them are.

Instead, I think there are huge structural problems which allow considerable arbitrage if you have a better-than-average model of what's going on. Granted, the bit about how "all neuroscientists, all philosophers, all psychologists, and all psychiatrists" should drop what they're doing and learn CSHW is hyperbole. But I think it's the correct direction to push.

I don't think folk psychology does a good job at ontology when it comes to speaking about subjects like depression or willpower.

I'd agree with that.

How does what you propose there differ from General Semantics?

I don't know enough about General Semantics to offer much here, but from a quick reading of Wikipedia it feels like GS is aimed at a slightly different goal, and relies on a much different algorithmic stack, than a CSHW-inspired theory of language and meaning. Would be glad to hear your thoughts.

Hi shminux,

You're welcome to follow the academic literature trail I link to. CSHW is a new paradigm, so it would definitely benefit from a close critical review, if you're able to provide that. (If you'd rather just critique something as pattern-matching to "crackpot red flags" and "pretty pictures" you can do that too, but I find this a content-free strategy for avoiding my object-level and methodological claims, and I think it needlessly lowers the level of discussion.)

I mention my personal intuitions about "limitations and potential failures" near the end of my piece. My expectation is that CSHW, along with the predictive coding framework, is the most plausible route for neuroscience to develop knowledge in the five spheres I identified. ("Most plausible" does not mean "sure thing," of course.) The hard work still needs to be done. If you know of more plausible ways to unify neuroscience, I'd be happy to read about them.