I’ve been trying to delve deeper into predictive processing theories of the brain, and I keep coming across Karl Friston’s work on “free energy”.

At first I felt bad for not understanding this. Then I realized I wasn’t alone. There’s an entire not-understanding-Karl-Friston internet fandom, complete with its own parody Twitter account and Markov blanket memes.

From the journal Neuropsychoanalysis (which based on its name I predict is a center of expertise in not understanding things):

At Columbia’s psychiatry department, I recently led a journal club for 15 PET and fMRI researchers, PhDs and MDs all, with well over $10 million in NIH grants between us, and we tried to understand Friston’s 2010 Nature Reviews Neuroscience paper – for an hour and a half. There was a lot of mathematical knowledge in the room: three statisticians, two physicists, a physical chemist, a nuclear physicist, and a large group of neuroimagers – but apparently we didn’t have what it took. I met with a Princeton physicist, a Stanford neurophysiologist, and a Cold Spring Harbor neurobiologist to discuss the paper. Again blanks, one and all.

Normally this is the point at which I give up and say “screw it”. But almost all the most interesting neuroscience of the past decade involves this guy in one way or another. He’s the most-cited living neuroscientist, invented large parts of modern brain imaging, and is a recipient of the prestigious Golden Brain Award for excellence in neuroscience, which is somehow a real thing. His Am I Autistic – An Intellectual Autobiography short essay, written in a weirdly lucid style and describing hijinks like deriving the Schrodinger equation for fun in school, is as consistent with genius as anything I’ve ever read.

As for free energy, it’s been dubbed “a unified brain theory” (Friston 2010), a key through which “nearly every aspect of [brain] anatomy and physiology starts to make sense” (Friston 2009), “[the source of] the ability of biological systems to resist a natural tendency to disorder” (Friston 2012), an explanation of how life “inevitably and emergently” arose from the primordial soup (Friston 2013), and “a real life version of Isaac Asimov’s psychohistory” (description here of Allen 2018).

I continue to hope some science journalist takes up the mantle of explaining this comprehensively. Until that happens, I’ve been working to gather as many perspectives as I can, to talk to the few neuroscientists who claim to even partially understand what’s going on, and to piece together a partial understanding. I am not at all the right person to do this, and this is not an attempt to get a gears-level understanding – just the kind of pop-science-journalism understanding that gives us a slight summary-level idea of what’s going on. My ulterior motive is to get to the point where I can understand Friston’s recent explanation of depression, relevant to my interests as a psychiatrist.

Sources include Dr. Alianna Maren’s How To Read Karl Friston (In The Original Greek), Wilson and Golonka’s Free Energy: How the F*ck Does That Work, Ecologically?, Alius Magazine’s interview with Friston, Observing Ideas, and the ominously named Wo’s Weblog.

From these I get the impression that part of the problem is that “free energy” is a complicated concept being used in a lot of different ways.

First, free energy is a specific mathematical term in certain Bayesian equations.

I’m getting this from here, which goes into much more detail about the math than I can manage. What I’ve managed to extract: Bayes’ theorem, as always, is the mathematical rule for determining how much to weigh evidence. The brain is sometimes called a Bayesian machine, because it has to create a coherent picture of the world by weighing all the different data it gets – everything from millions of photoreceptors’ worth of vision, to millions of cochlear receptors’ worth of hearing, to all the other senses, to logical reasoning, to past experience, and so on. But actually using Bayes on all this data quickly gets computationally intractable.

Free energy is a quantity used in “variational Bayesian methods”, a specific computationally tractable way of approximating Bayes’ Theorem. Under this interpretation, Friston is claiming that the brain uses this Bayes-approximation algorithm. Minimizing the free energy quantity in this algorithm is equivalent-ish to trying to minimize prediction error, trying to minimize the amount you’re surprised by the world around you, and trying to maximize accuracy of mental models. This sounds in line with standard predictive processing theories. Under this interpretation, the brain implements predictive processing through free energy minimization.
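
To make this slightly less abstract, here is a toy version of the quantity itself, which I’m told is just the standard variational-inference bookkeeping and nothing specific to Friston (the model and the numbers below are entirely made up): free energy is always at least as big as surprise, and the approximate posterior that minimizes it is the exact Bayesian one.

```python
import numpy as np

# Toy generative model: one hidden cause z in {0, 1}, one observation x in {0, 1}.
# All the numbers here are invented for illustration.
prior = np.array([0.7, 0.3])           # p(z)
likelihood = np.array([[0.9, 0.1],     # p(x | z = 0)
                       [0.2, 0.8]])    # p(x | z = 1)
x = 1                                  # the observation we actually received

def free_energy(q):
    """Variational free energy F = E_q[log q(z) - log p(z, x)]."""
    joint = prior * likelihood[:, x]   # p(z, x) for each value of z
    return np.sum(q * (np.log(q) - np.log(joint)))

# Exact Bayesian posterior, for comparison.
posterior = prior * likelihood[:, x]
posterior /= posterior.sum()

# Brute-force search over candidate approximate posteriors q(z) = [q0, 1 - q0].
q0_grid = np.linspace(0.01, 0.99, 99)
F_values = [free_energy(np.array([q0, 1 - q0])) for q0 in q0_grid]
best_q0 = q0_grid[int(np.argmin(F_values))]

surprise = -np.log(prior @ likelihood[:, x])   # -log p(x)
print("exact posterior p(z=0 | x):", round(posterior[0], 3))
print("free-energy-minimizing q(z=0):", round(best_q0, 3))
print("surprise:", round(surprise, 3), "<= minimum free energy:", round(min(F_values), 3))
```

The gap between the two numbers on the last line is exactly the approximation error (a KL divergence), which is why squeezing free energy down amounts to doing approximate Bayes.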

Second, free energy minimization is an algorithm-agnostic way of saying you’re trying to approximate Bayes as accurately as possible.

This comes from the same source as above. It also ends up equivalent-ish to all those other things like trying to be correct in your understanding of the world, and to standard predictive processing.

Third, free energy minimization is a claim that the fundamental psychological drive is the reduction of uncertainty.

I get this claim from the Alius interview, where Friston says:

If you subscribe to the premise that creatures like you and me act to minimize their expected free energy, then we act to reduce expected surprise or, more simply, resolve uncertainty. So what’s the first thing that we would do on entering a dark room — we would turn on the lights. Why? Because this action has epistemic affordance; in other words, it resolves uncertainty (expected free energy). This simple argument generalizes to our inferences about (hidden or latent) states of the world — and the contingencies that underwrite those states of affairs.

The discovery that the only human motive is uncertainty-reduction might come as a surprise to humans who feel motivated by things like money, power, sex, friendship, or altruism. But the neuroscientist I talked to about this says I am not misinterpreting the interview. The claim really is that uncertainty-reduction is the only game in town.

In a sense, it must be true that there is only one human motivation. After all, if you’re Paris of Troy, getting offered the choice between power, fame, and sex – then some mental module must convert these to a common currency so it can decide which is most attractive. If that currency is, I dunno, dopamine in the striatum, then in some reductive sense, the only human motivation is increasing striatal dopamine (don’t philosophize at me, I know this is a stupid way of framing things, but you know what I mean). Then the only weird thing about the free energy formulation is identifying the common currency with uncertainty-minimization, which is some specific thing that already has another meaning.

I think the claim (briefly mentioned eg here) is that your brain hacks eg the hunger drive by “predicting” that your mouth is full of delicious food. Then, when your mouth is not full of delicious food, it’s a “prediction error”, it sets off all sorts of alarm bells, and your brain’s predictive machinery is confused and uncertain. The only way to “resolve” this “uncertainty” is to bring reality into line with the prediction and actually fill your mouth with delicious food. On the one hand, there is a lot of basic neuroscience research that suggests something like this is going on. On the other, the author of Wo’s Weblog writes about this further:

The basic idea seems to go roughly as follows. Suppose my internal probability function Q assigns high probability to states in which I’m having a slice of pizza, while my sensory input suggests that I’m currently not having a slice of pizza. There are two ways of bringing Q in alignment with my sensory input: (a) I could change Q so that it no longer assigns high probability to pizza states, (b) I could grab a piece of pizza, thereby changing my sensory input so that it conforms to the pizza predictions of Q. Both (a) and (b) would lead to a state in which my (new) probability function Q’ assigns high probability to my (new) sensory input d’. Compared to the present state, the sensory input will then have lower surprise. So any transition to these states can be seen as a reduction of free energy, in the unambitious sense of the term.
Action is thus explained as an attempt to bring one’s sensory input in alignment with one’s representation of the world.
This is clearly nuts. When I decide to reach out for the pizza, I don’t assign high probability to states in which I’m already eating the slice. It is precisely my knowledge that I’m not eating the slice, together with my desire to eat the slice, that explains my reaching out.
There are at least two fundamental problems with the simple picture just outlined. One is that it makes little sense without postulating an independent source of goals or desires. Suppose it’s true that I reach out for the pizza because I hallucinate (as it were) that that’s what I’m doing, and I try to turn this hallucination into reality. Where does the hallucination come from? Surely it’s not just a technical glitch in my perceptual system. Otherwise it would be a miraculous coincidence that I mostly hallucinate pleasant and fitness-increasing states. Some further part of my cognitive architecture must trigger the hallucinations that cause me to act. (If there’s no such source, the much discussed “dark room problem” arises: why don’t we efficiently minimize sensory surprise (and thereby free energy) by sitting still in a dark room until we die?)
The second problem is that efficient action requires keeping track of both the actual state and the goal state. If I want to reach out for the pizza, I’d better know where my arms are, where the pizza is, what’s in between the two, and so on. If my internal representation of the world falsely says that the pizza is already in my mouth, it’s hard to explain how I manage to grab it from the plate.
A closer look at Friston’s papers suggests that the above rough proposal isn’t quite what he has in mind. Recall that minimizing free energy can be seen as an approximate method for bringing one probability function Q close to another function P. If we think of Q as representing the system’s beliefs about the present state, and P as a representation of its goals, then we have the required two components for explaining action. What’s unusual is only that the goals are represented by a probability function, rather than (say) a utility function. How would that work?
Here’s an idea. Given the present probability function Q, we can map any goal state A to the target function Q^A, which is Q conditionalized on A — or perhaps on certain sensory states that would go along with A. For example, if I successfully reach out for the pizza, my belief function Q will change to a function Q^A that assigns high probability to my arm being outstretched, to seeing and feeling the pizza in my fingers, etc. Choosing an act that minimizes the difference between my belief function and Q^A is then tantamount to choosing an act that realizes my goal.
This might lead to an interesting empirical model of how actions are generated. Of course we’d need to know more about how the target function Q^A is determined. I said it comes about by (approximately?) conditionalizing Q on the goal state A, but how do we identify the relevant A? Why do I want to reach out for the pizza? Arguably the explanation is that reaching out is likely (according to Q) to lead to a more distal state in which I eat the pizza, which I desire. So to compute the proximal target probability Q^A we presumably need to encode the system’s more distal goals and then use techniques from (stochastic) control theory, perhaps, to derive more immediate goals.
That version of the story looks much more plausible, and much less revolutionary, than the story outlined above. In the present version, perception and action are not two means to the same end — minimizing free energy. The free energy that’s minimized in perception is a completely different quantity than the free energy that’s minimized in action. What’s true is that both tasks involve mathematically similar optimization problems. But that isn’t too surprising given the well-known mathematical and computational parallels between conditionalizing and maximizing expected utility.
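
If I’m reading the Q^A idea right, it can be cashed out in a toy example like this (the numbers and state names are mine, invented purely for illustration): condition your current beliefs on the goal, then pick whichever action you predict will leave your beliefs closest to that conditioned target.

```python
import numpy as np

def kl(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Hypothetical world states: (holding pizza?, arm position), flattened.
states = ["no_down", "no_out", "yes_out", "yes_mouth"]

# Current belief Q: almost certainly not holding pizza, arm down.
Q = np.array([0.90, 0.08, 0.01, 0.01])

# Goal A = "holding pizza". Q^A is Q conditionalized on A.
holding = np.array([False, False, True, True])
Q_A = np.where(holding, Q, 0.0)
Q_A /= Q_A.sum()

# Predicted belief after each candidate action (made-up numbers).
predicted = {
    "do_nothing": np.array([0.89, 0.09, 0.01, 0.01]),
    "reach_out":  np.array([0.05, 0.15, 0.70, 0.10]),
    "bite_air":   np.array([0.80, 0.10, 0.05, 0.05]),
}

# Choose the action whose predicted belief best matches the target Q^A.
best = min(predicted, key=lambda a: kl(Q_A, predicted[a]))
print(best)   # reach_out
```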

It’s tempting to throw this out entirely. But part of me does feel like there’s a weird connection between curiosity and every other drive. For example, sex seems like it should be pretty basic and curiosity-resistant. But how often do people say that they’re attracted to someone “because he’s mysterious”? And what about the Coolidge Effect (known in the polyamory community as “new relationship energy”)? After a while with the same partner, sex and romance lose their magic – only to reappear if the animal/person hooks up with a new partner. Doesn’t this point to some kind of connection between sexuality and curiosity?

What about the typical complaint of porn addicts – that they start off watching softcore porn, find after a while that it’s no longer titillating, move on to harder porn, and eventually have to get into really perverted stuff just to feel anything at all? Is this a sort of uncertainty reduction?

The only problem is that this is a really specific kind of uncertainty reduction. Why should “uncertainty about what it would be like to be in a relationship with that particular attractive person” be so much more compelling than “uncertainty about what the middle letter of the Bible is”, a question which almost no one feels the slightest inclination to resolve? The interviewers ask Friston something sort of similar, referring to some experiments where people are happiest not when given easy things with no uncertainty, nor confusing things with unresolvable uncertainty, but puzzles – things that seem confusing at first, but actually have a lot of hidden order within them. They ask Friston whether he might want to switch teams to support a U-shaped theory where people like being in the middle between too little and too much uncertainty. Friston…does not want to switch teams.

I do not think that “different laws may apply at different levels”. I see a singular and simple explanation for all the apparent dialectics above: they are all explained by minimization of expected free energy, expected surprise or uncertainty. I feel slightly puritanical when deflating some of the (magical) thinking about inverted U curves and “sweet spots”. However, things are just simpler than that: there is only one sweet spot; namely, the free energy minimum at the bottom of a U-shaped free energy function […]
This means that any opportunity to resolve uncertainty itself now becomes attractive (literally, in the mathematical sense of a random dynamical attractor) (Friston, 2013). In short, as nicely articulated by (Schmidhuber, 2010), the opportunity to answer “what would happen if I did that” is one of the most important resolvers of uncertainty. Formally, the resolution of uncertainty (aka intrinsic motivation, intrinsic value, epistemic value, the value of information, Bayesian surprise, etc. (Friston et al., 2017)) corresponds to salience. Note that in active inference, salience becomes an attribute of an action or policy in relation to the lived world. The mathematical homologue for contingencies (technically, the parameters of a generative model) corresponds to novelty. In other words, if there is an action that can reduce uncertainty about the consequences of a particular behavior, it is more likely to be expressed.
Given these imperatives, then the two ends of the inverted U become two extrema on different dimensions. In a world full of novelty and opportunity, we know immediately there is an opportunity to resolve reducible uncertainty and will immediately embark on joyful exploration — joyful because it reduces uncertainty or expected free energy (Joffily & Coricelli, 2013). Conversely, in a completely unpredictable world (i.e., a world with no precise sensory evidence, such as a dark room) there is no opportunity and all uncertainty is irreducible — a joyless world. Boredom is simply the product of explorative behavior; emptying a world of its epistemic value — a barren world in which all epistemic affordance has been exhausted through information seeking, free energy minimizing action.
Note that I slipped in the word “joyful” above. This brings something interesting to the table; namely, the affective valence of shifts in uncertainty — and how they are evaluated by our brains.

The only thing at all I am able to gather from this paragraph – besides the fact that apparently Karl Friston cites himself in conversation – is the Schmidhuber reference, which is actually really helpful. Schmidhuber is the guy behind eg the Formal Theory Of Fun & Creativity Explains Science, Art, Music, Humor, in which all of these are some form of taking a seemingly complex domain (in the mathematical sense of complexity) and reducing it to something simple (discovering a hidden order that makes it more compressible). I think Friston might be trying to hint that free energy minimization works in a Schmidhuberian sense where it applies to learning things that suddenly make large parts of our experience more comprehensible at once, rather than just “Here are some numbers: 1, 5, 7, 21 – now you have less uncertainty over what numbers I was about to tell you, isn’t that great?”

I agree this is one of life’s great joys, though maybe me and Karl Friston are not a 100% typical subset of humanity here. Also, I have trouble figuring out how to conceptualize other human drives like sex as this same kind of complexity-reduction joy.

One more concern here – a lot of the things I read about this equivocate between “model accuracy maximization” and “surprise minimization”. These end up meaning really different things. Model accuracy maximization sounds like curiosity – you go out and explore as much of the world as possible to get a model that precisely matches reality. Surprise minimization sounds like locking yourself in a dark room with no stimuli, then predicting that you will be in a dark room with no stimuli, and never being surprised when your prediction turns out to be right. I understand Friston has written about the so-called “dark room problem”, but I haven’t had a chance to look into it as much as I should, and I can’t find anything that takes one or the other horn of the equivocation and says “definitely this one”.

Fourth, okay, all of this is pretty neat, but how does it explain all biological systems? How does it explain abiogenesis? And when do we get to the real-world version of psychohistory? In his Alius interview, Friston writes:

I first came up with a prototypical free energy principle when I was eight years old, in what I have previously called a “Gerald Durrell” moment (Friston, 2012). I was in the garden, during a gloriously hot 1960s British summer, preoccupied with the antics of some woodlice who were frantically scurrying around trying to find some shade. After half an hour of observation and innocent (childlike) contemplation, I realized their “scurrying” had no purpose or intent: they were simply moving faster in the sun — and slower in the shade. The simplicity of this explanation — for what one could artfully call biotic self-organization — appealed to me then and appeals to me now. It is exactly the same principle that underwrites the ensemble density dynamics of the free energy principle — and all its corollaries.

What do the woodlice have to do with any of the rest of this?

As best I can understand (and I’m drawing from here and here again), this is an ultimate meaning of “free energy” which is sort of like a formalization of homeostasis. It goes like this: consider a probability distribution of all the states an organism can be in. For example, your body can be at (90 degrees F, heart rate 10), (90 degrees F, heart rate 70), (98 degrees F, heart rate 10), (98 degrees F, heart rate 70), or any of a trillion other different combinations of possible parameters. But in fact, living systems successfully restrict themselves to tiny fractions of this space – if you go too far away from (98 degrees F, heart rate 70), you die. So you have two probability distributions – the maximum-entropy one where you could have any combination of heart rate and body temperature, and the one your body is aiming for with a life-compatible combination of heart rate and body temperature. Whenever you have a system trying to convert one probability distribution into another probability distribution, you can think of it as doing Bayesian work and following free energy principles. So free energy seems to be something like just a formal explanation of how certain systems display goal-directed behavior, without having to bring in an anthropomorphic or teleological concept of “goal-directedness”.
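
Here is my attempt to make that concrete with some invented numbers (this is just entropy bookkeeping, not anything from Friston’s papers): the set of states your body could in principle occupy is huge and high-entropy, the set it actually occupies is tiny, and straying far from it is astronomically surprising under the “viable” distribution.

```python
import numpy as np

# A grid of possible body states; all numbers are purely illustrative.
temps = np.linspace(80, 110, 61)      # body temperature, deg F
rates = np.linspace(10, 200, 96)      # heart rate, beats per minute
T, R = np.meshgrid(temps, rates)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# "Anything goes": the maximum-entropy distribution over the grid.
uniform = np.full(T.shape, 1.0 / T.size)

# The distribution a living body actually occupies: tightly concentrated
# around a viable set-point of roughly (98.6 F, 70 bpm).
viable = np.exp(-((T - 98.6) / 1.0) ** 2 - ((R - 70) / 10.0) ** 2)
viable /= viable.sum()

print("entropy, anything-goes states:", round(entropy(uniform), 2))
print("entropy, living-system states:", round(entropy(viable), 2))

# "Surprise" of a state under the viable distribution: being far from the
# set-point is astronomically improbable, which is to say lethal.
def surprise(temp, rate):
    j = np.argmin(np.abs(temps - temp))
    i = np.argmin(np.abs(rates - rate))
    return -np.log(viable[i, j])

print("surprise at (98.6, 70):", round(surprise(98.6, 70), 1))
print("surprise at (90, 40):  ", round(surprise(90, 40), 1))
```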

Friston mentions many times that free energy is “almost tautological”, and one of the neuroscientists I talked to who claimed to half-understand it said it should be viewed more as an elegant way of looking at things than as a scientific theory per se. From the Alius interview:

The free energy principle stands in stark distinction to things like predictive coding and the Bayesian brain hypothesis. This is because the free energy principle is what it is — a principle. Like Hamilton’s Principle of Stationary Action, it cannot be falsified. It cannot be disproven. In fact, there’s not much you can do with it, unless you ask whether measurable systems conform to the principle.

So we haven’t got a real-life version of Asimov’s psychohistory, is what you’re saying?

But also:

The Bayesian brain hypothesis is a corollary of the free energy principle and is realized through processes like predictive coding or abductive inference under prior beliefs. However, the Bayesian brain is not the free energy principle, because both the Bayesian brain hypothesis and predictive coding are incomplete theories of how we infer states of affairs.
This missing bit is the enactive compass of the free energy principle. In other words, the free energy principle is not just about making the best (Bayesian) sense of sensory impressions of what’s “out there”. It tries to understand how we sample the world and author our own sensations. Again, we come back to the woodlice and their scurrying — and an attempt to understand the imperatives behind this apparently purposeful sampling of the world. It is this enactive, embodied, extended, embedded, and encultured aspect that is lacking from the Bayesian brain and predictive coding theories; precisely because they do not consider entropy reduction […]
In short, the free energy principle fully endorses the Bayesian brain hypothesis — but that’s not the story. The only way you can change “the shape of things” — i.e., bound entropy production — is to act on the world. This is what distinguishes the free energy principle from predictive processing. In fact, we have now taken to referring to the free energy principle as “active inference”, which seems closer to the mark and slightly less pretentious for non-mathematicians.

So maybe the free energy principle is the unification of predictive coding of internal models, with the “action in the world is just another form of prediction” thesis mentioned above? I guess I thought that was part of the standard predictive coding story, but maybe I’m wrong?

Overall, the best I can do here is this: the free energy principle seems like an attempt to unify perception, cognition, homeostasis, and action.

“Free energy” is a mathematical concept that represents the failure of some things to match other things they’re supposed to be predicting.

The brain tries to minimize its free energy with respect to the world, ie minimize the difference between its models and reality. Sometimes it does that by updating its models of the world. Other times it does that by changing the world to better match its models.

Perception and cognition are both attempts to create accurate models that match the world, thus minimizing free energy.

Homeostasis and action are both attempts to make reality match mental models. Action tries to get the organism’s external state to match a mental model. Homeostasis tries to get the organism’s internal state to match a mental model. Since even bacteria are doing something homeostasis-like, all life shares the principle of being free energy minimizers.

So life isn’t doing four things – perceiving, thinking, acting, and maintaining homeostasis. It’s really just doing one thing – minimizing free energy – in four different ways – with the particular way it implements this in any given situation depending on which free energy minimization opportunities are most convenient. Or something. All of this might be a useful thing to know, or it might just be a cool philosophical way of looking at things, I’m still not sure.
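
If I had to compress that into a toy calculation (very much my own gloss, with invented numbers, not anything from a Friston paper), it might look like this: there is one mismatch quantity, and you can shrink it either by moving the model toward the world or by moving the world toward the model.

```python
import numpy as np

# Two ways to shrink the same mismatch (a stand-in for "free energy").
# All numbers are invented.
world = np.array([0.8, 0.2])     # how things actually are
model = np.array([0.3, 0.7])     # how the brain predicts them to be

def mismatch(p, q):              # KL divergence, our stand-in quantity
    return np.sum(p * np.log(p / q))

print("mismatch before:", round(float(mismatch(world, model)), 3))

# Perception / cognition: nudge the model halfway toward the world.
updated_model = model + 0.5 * (world - model)
print("after updating the model:", round(float(mismatch(world, updated_model)), 3))

# Action / homeostasis: nudge the world halfway toward the model instead.
changed_world = world + 0.5 * (model - world)
print("after acting on the world:", round(float(mismatch(changed_world, model)), 3))
```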

Or something like this? Maybe? Somebody please help?


Discussion question for those of you on the subreddit: if the free energy principle were right, would it disprove the orthogonality thesis? Might it be impossible to design a working brain with any goal besides free energy reduction? Would anything – even a paperclip maximizer – have to start by minimizing uncertainty, and then add paperclip maximization in later as a hack? Would it change anything if it did?


(Posting here rather than SSC because I wrote the whole comment in markdown before remembering that SSC doesn't support it).

We had a guest lecture from Friston last year and I cornered him afterwards to try to get some enlightenment (notes here). I also spent the next few days working through the literature, using a multi-armed bandit as a concrete problem (notes here).

Very few of the papers have concrete examples. Those that do often skip important parts of the math and use inconsistent/ambiguous notation. He doesn't seem to have released any of the code for his game-playing examples.

The various papers don't all even implement the same model - the free energy principle seems to be more a design principle than a specific model.

The Wikipedia page doesn't explain much but at least uses consistent and reasonable notation.

"Reinforcement learning or active inference?" has most of a worked model, and is the closest I've found to explaining how utility functions get encoded into meta-priors. It also contains:

When friends and colleagues first come across this conclusion, they invariably respond with; “but that means I should just close my eyes or head for a dark room and stay there”. In one sense this is absolutely right; and is a nice description of going to bed. However, this can only be sustained for a limited amount of time, because the world does not support, in the language of dynamical systems, stable fixed-point attractors. At some point you will experience surprising states (e.g., dehydration or hypoglycaemia). More formally, itinerant dynamics in the environment preclude simple solutions to avoiding surprise; the best one can do is to minimise surprise in the face of stochastic and chaotic sensory perturbations. In short, a necessary condition for an agent to exist is that it adopts a policy that minimizes surprise.

I am leaning towards 'the emperor has no clothes'. In support of this:

  • Friston doesn't explain things well, but nobody else seems to have produced an accessible worked example either, even though many people claim to understand the theory and think it's important.
  • Nobody seems to have used this to solve any novel problems, or even to solve well-understood trivial problems.
  • I can't find any good mappings/comparisons to existing models. Are there priors that cannot be represented as utility functions, or vice versa? What explore/exploit tradeoffs do free-energy models lead to, or can they encode any given tradeoff?

At this point I'm unwilling to invest any further effort into the area, but I could be re-interested if someone were to produce a python notebook or similar with a working solution for some standard problem (eg multi-armed bandit).

The various papers don't all even implement the same model - the free energy principle seems to be more a design principle than a specific model.

Bingo. Friston trained as a physicist, and he wants the free-energy principle to be more like a physical law than a computer program. You can write basically any computer program that implements or supports variational inference, throw in some action states as variational parameters, and you've "implemented" the free-energy principle _in some way_.

Overall, the Principle is more of a domain-specific language than a single unified model, more like "supervised learning" than like "this 6-layer convnet I trained for neural style transfer."

Are there priors that cannot be represented as utility functions, or vice versa?

No. They're isomorphic, via the Complete Class Theorem. Any utility/cost function that grows sub-super-exponentially (ie: for which Pascal's Mugging doesn't happen) can be expressed as a distribution, and used in the free-energy principle. You can get the intuition by thinking, "This goal specifies how often I want to see outcome X (P), versus its disjoint cousins Y and Z that I want to see such-or-so often (1-P)."

What explore/exploit tradeoffs do free-energy models lead to, or can they encode any given tradeoff?

This is actually one of the Very Good things about free-energy models: since free-energy is "Energy - Entropy", or "Exploit + Explore", cast in the same units (bits/nats from info theory), it theorizes a principled, prescriptive way to make the tradeoff, once you've specified how concentrated the probability mass is under the goals in the support set (and thus the multiplicative inverse of the exploit term's global optimum).

We ought to be able to use this to test the Principle empirically, I think.
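
To give a flavor of what I mean, here's a toy I just made up (not anything from the papers): score each action's predicted outcomes by Energy minus Entropy, where the energy term measures how badly they miss a goal distribution and the entropy term measures how spread out they are, both in nats.

```python
import numpy as np

# Outcomes: [jackpot, small_win, nothing]; the goal distribution says how often
# I'd like each to occur. All numbers are invented for illustration.
p_goal = np.array([0.6, 0.3, 0.1])

# Predicted outcome distribution q(o | action) for each candidate action.
q = {
    "safe_bet":   np.array([0.05, 0.90, 0.05]),   # predictable but modest
    "long_shot":  np.array([0.30, 0.10, 0.60]),   # risky, keeps options open
    "do_nothing": np.array([0.01, 0.04, 0.95]),   # very predictable, very poor
}

def free_energy(q_a):
    energy  = np.sum(q_a * -np.log(p_goal))   # "exploit": match the goal distribution
    entropy = -np.sum(q_a * np.log(q_a))      # "explore": stay spread out
    return energy - entropy                   # both in nats, so they trade off directly

for action, q_a in q.items():
    print(action, round(float(free_energy(q_a)), 3))

print("chosen:", min(q, key=lambda a: float(free_energy(q[a]))))
```

Note how the entropy bonus tips the choice toward the riskier option here; that's the explore term earning its keep.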

(EDIT: Dear God, why was everything bold!?)

No. They're isomorphic, via the Complete Class Theorem. Any utility/cost function that grows sub-super-exponentially (ie: for which Pascal's Mugging doesn't happen) can be expressed as a distribution, and used in the free-energy principle. You can get the intuition by thinking, "This goal specifies how often I want to see outcome X (P), versus its disjoint cousins Y and Z that I want to see such-or-so often (1-P)."

Can you please link me to more on this? I was under the impression that pascal's mugging happens for any utility function that grows at least as fast as the probabilities shrink, and the probabilities shrink exponentially for normal probability functions. (For example: In the toy model of the St. Petersburg problem, the utility function grows exactly as fast as the probability function shrinks, resulting in infinite expected utility for playing the game.)

Also: As I understand them, utility functions aren't of the form "I want to see X P often and Y 1-P often." They are more like "X has utility 200, Y has utility 150, Z has utility 24..." Maybe the form you are talking about is a special case of the form I am talking about, but I don't yet see how it could be the other way around. As I'm thinking of them, utility functions aren't about what you see at all. They are just about the world. The point is, I'm confused by your explanation & would love to read more about this.

Can you please link me to more on this? I was under the impression that pascal's mugging happens for any utility function that grows at least as fast as the probabilities shrink, and the probabilities shrink exponentially for normal probability functions. (For example: In the toy model of the St. Petersburg problem, the utility function grows exactly as fast as the probability function shrinks, resulting in infinite expected utility for playing the game.)

The Complete Class Theorem says that bounded cost/utility functions are isomorphic to posterior probabilities optimizing their expected values. In that sense, it's almost a trivial result.

In practice, this just means that we can exchange the two whenever we please: we can take a probability and get an entropy to minimize, or we can take a bounded utility/cost function and bung it through a Boltzmann Distribution.
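
Concretely, using the 200/150/24 utilities from your example above (this is just the softmax construction, with an inverse temperature I'm choosing arbitrarily):

```python
import numpy as np

def boltzmann(utilities, beta=1.0):
    """Turn a bounded utility function into a goal distribution, p ∝ exp(beta * U)."""
    u = np.asarray(utilities, dtype=float)
    p = np.exp(beta * (u - u.max()))   # subtract the max for numerical stability
    return p / p.sum()

# Utilities for outcomes X, Y, Z, taken from the comment above.
print(boltzmann([200, 150, 24], beta=0.01))   # mild preferences
print(boltzmann([200, 150, 24], beta=0.1))    # much sharper preferences
```

The inverse temperature is doing real work here: scale it up and the distribution sharpens around X, scale it down and the preferences wash out.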

Also: As I understand them, utility functions aren't of the form "I want to see X P often and Y 1-P often." They are more like "X has utility 200, Y has utility 150, Z has utility 24..." Maybe the form you are talking about is a special case of the form I am talking about, but I don't yet see how it could be the other way around. As I'm thinking of them, utility functions aren't about what you see at all. They are just about the world. The point is, I'm confused by your explanation & would love to read more about this.

I was speaking loosely, so "I want to see X" can be taken as, "I want X to happen". The details remain an open research problem of how the brain (or probabilistic AI) can or should cash out, "X happens" into "here are all the things I expect to observe when X happens, and I use them to gather evidence for whether X has happened, and to control whether X happens and how often".

For a metaphor of why you'd have "probabilistic" utility functions, consider it as Bayesian uncertainty: "I have degree of belief P that X should happen, and degree of belief 1-P that something else should happen."

One of the deep philosophical differences is that both Fristonian neurosci and Tenenbaumian cocosci assume that stochasticity is "real enough for government work", and so there's no point in specifying "utility functions" over "states" of the world in which all variables are clamped to fully determined values. After all, you yourself as a physically implemented agent have to generate waste heat, so there's inevitably going to be some stochasticity (call it uncertainty that you're mathematically required to have) about whatever physical heat bath you dumped your own waste heat into.

(That was supposed to be a reference to Eliezer's writing on minds doing thermodynamic work (which free-energy minds absolutely do!), not a poop joke.)

Actually, here's a much simpler, more intuitive way to think about probabilistically specified goals.

Visualize a probability distribution as a heat map of the possibility space. Specifying a probabilistic goal then just says, "Here's where I want the heat to concentrate", and submitting it to active inference just uses the available inferential machinery to actually squeeze the heat into that exact concentration as best you can.

When our heat-map takes the form of "heat" over dynamical trajectories, possible "timelines" of something that can move, "squeezing the heat into your desired concentration" means exactly "squeezing the future towards desired regions". All you're changing is how you specify desired regions: from giving them an "absolute" value (that can actually undergo any linear transformation and be isomorphic) to giving them a purely "relative" value (relative to disjoint events in your sample space).

This is fine, because after all, it's not like you could really have an "infinite" desire for something finite-sized in the first place. If you choose to think of utilities in terms of money, the "goal probabilities" are just the relative prices you're willing to pay for a certain outcome: you start with odds, the number of apples you'll trade for an orange, and convert from odds to probabilities to get your numbers. It's just using "barter" among disjoint random events instead of "currency".

I'm confused so I'll comment a dumb question hoping my cognitive algorithms are sufficiently similar to other LW:ers, such that they'll be thinking but not writing this question.

"If I value apples at 3 units and oranges at 1 unit, I don't want at 75%/25% split. I only want apples, because they're better! (I have no diminishing returns.)"

Where does this reasoning go wrong?

>"If I value apples at 3 units and oranges at 1 unit, I don't want at 75%/25% split. I only want apples, because they're better! (I have no diminishing returns.)"

I think what I'd have to ask here is: if you only want apples, why are you spending your money on oranges? If you will not actually pay me 1 unit for an orange, why do you claim you value oranges at 1 unit?

Another construal: you value oranges at 1 orange per 1 unit because if I offer you a lottery over those and let you set the odds yourself, you will choose to set them to 50/50. You're indifferent to which one you receive, so you value them equally. We do the same trick with apples and find you value them at 3 units per 1 apple.

I now offer you a lottery between receiving 3 apples and 1 orange, and I'll let you pay 3 units to tilt the odds by one expected apple. Since the starting point was 1.5 expected apples and 0.5 expected oranges, and you insist you want only 3 expected apples and 0 expected oranges, I believe I can make you end up paying more than 3 units per apple now, despite our having established that as your "price".

The lesson is, I think, don't offer to pay finite amounts of money for outcomes you want literally zero of, as someone may in fact try to take you up on it.

The problem with the typeface on LW comments is that I, l and 1 look really damn similar. 

That was much more informative than most of the papers. Did you learn this by parsing the papers or from another better source?

Honestly, I've just had to go back and forth banging my head on Friston's free-energy papers, non-Friston free-energy papers, and the ordinary variational inference literature -- for the past two years, prior to which I spent three years banging my head on the Josh Tenenbaum-y computational cog-sci literature and got used to seeing probabilistic models of cognition.

I'm now really fucking glad to be in a PhD program where I can actually use that knowledge.

Oh, and btw, everyone at MIRI was exactly as confused as Scott is when I presented a bunch of free-energy stuff to them last March.

Sorry for the bold, sometimes our editor does weird things with copy-paste and bolds everything you pasted. Working on a fix for that, but it’s an external library and that’s always a bit harder than fixing our code.

Re: the "when friends and colleagues first come across this conclusion..." quote:

A world where everybody's true desire is to rest in bed as much as possible, but where they grudgingly take the actions needed to stay alive and maintain homeostasis, seems both very imaginable, and also very different from what we observe.

Agreed. 'Rest in bed as much as possible but grudgingly take the actions needed to stay alive' sounds a lot like depression, but there exist non-depressed people who need explaining.

I wonder if the conversion from mathematics to language is causing problems somewhere. The prose description you are working with is 'take actions that minimize prediction error' but the actual model is 'take actions that minimize a complicated construct called free energy'. Sitting in a dark room certainly works for the former but I don't know how to calculate it for the latter.

In the paper I linked, the free energy minimizing trolleycar does not sit in the valley and do nothing to minimize prediction error. It moves to keep itself on the dynamic escape trajectory that it was trained with and so predicts itself achieving. So if we understood why that happens we might unravel the confusion.

>I wonder if the conversion from mathematics to language is causing problems somewhere. The prose description you are working with is 'take actions that minimize prediction error' but the actual model is 'take actions that minimize a complicated construct called free energy'. Sitting in a dark room certainly works for the former but I don't know how to calculate it for the latter.

There's absolutely trouble here. "Minimizing surprise" always means, to Friston, minimizing sensory surprise under a generative model: −ln P(s | m). The problem is that, of course, in the course of constructing this, you had to marginalize out all the interesting variables that make up your generative model, so you're really looking at −ln ∫ P(s, ϑ | m) dϑ, with the internal states ϑ integrated out, or something similar.

Mistaking "surprise" in this context for the actual self-information of the empirical distribution of sense-data makes the whole thing fall apart.

>In the paper I linked, the free energy minimizing trolleycar does not sit in the valley and do nothing to minimize prediction error. It moves to keep itself on the dynamic escape trajectory that it was trained with and so predicts itself achieving. So if we understood why that happens we might unravel the confusion.

If you look closely, Friston's downright cheating in that paper. First he "immerses" his car in its "statistical bath" that teaches it where to go, with only perceptual inference allowed. Then he turns off perceptual updating, leaving only action as a means of resolving free-energy, and points out that thusly, the car tries to climb the mountain as active inference proceeds.

It would be interesting if anyone knows of historical examples where someone had a key insight, but nonetheless fulfilled your "emperor has no clothes" criteria.

Hi,

I now work in a lab allied to both the Friston branch of neuroscience, and the probabilistic modeling branch of computational cognitive science, so I now feel arrogant enough to comment fluently.

I’m gonna leave a bunch of comments over the day as I get the spare time to actually respond coherently to stuff.

The first thing is that we have to situate Friston’s work in its appropriate context of Marr’s Three Levels of cognitive analysis: computational (what’s the target?), algorithmic (how do we want to hit it?), and implementational (how do we make neural hardware do it?).

Friston’s work largely takes place at the algorithmic and implementational levels. He’s answering How questions, and then claiming that they answer the What questions. This is rather like unto, as often mentioned, formulating Hamiltonian Mechanics and saying, “I’ve solved physics by pointing out that you can write any physical system in terms of differential equations for its conserved quantities.” Well, now you have to actually write out a real physical system in those terms, don’t you? What you’ve invented is a rigorous language for talking about the things you aim to explain.

The free-energy principle should be thought of like the “supervised loss principle”: it just specifies what computational proxy you’re using for your real goal. It’s as rigorous as using probabilistic programming to model the mind (caveat: one of my advisers is a probabilistic programming expert).

Now, my seminar is about to start soon, so I’ll try to type up a really short step-by-step of how we get to active inference. Let’s assume the example where I want to eat my nice slice of pizza, and I’ll try to type something up about goals/motivations later on. Suffice to say, since “free-energy minimization” is like “supervised loss minimization” or “reward maximization”, it’s meaningless to say that motivation is specified in free-energy terms. Of course it can be: that’s a mathematical tautology. Any bounded utility/reward/cost function can be expressed as a probability, and therefore a free-energy — this is the Complete Class Theorem Friston always cites, and you can make it constructive using the Boltzmann Distribution (the simplest exponential family) for energy functions.

1) Firstly, free-energy is just the negative of the Evidence Lower Bound (ELBO) usually maximized in variational inference. You take a P (a model of the world whose posterior you want to approximate) and a Q (a model that approximates it), and you optimize the variational parameters (the parameters with no priors or conditional densities) of Q by maximizing the ELBO, to get a good approximation to P(H | D) (probability of hypotheses, given data). This is normal and understandable and those of us who aren’t Friston do it all the time.

2) Now you add some variables to P: the body’s proprioceptive states, its sense of where your bones are and what your muscles are doing. You add a P(D′) over proprioceptive states D′, with some conditional P(D | D′) to show how other senses depend on body position. This is already really helpful for pure prediction, because it helps you factor out random noise or physical forces acting on your body from your sensory predictions to arrive at a coherent picture of the world outside your body. You now have P(D | D′) P(D′ | H) P(H).

3) For having new variables in the posterior, P(H, D′ | D), you now need some new variables in Q. Here’s where we get the interesting insight of active inference: if the old P(H | D) was approximated as Q(H), we can now expand to Q(H, A), where A is the motor command. Instead of inferring a parameter that approximates the proprioceptive state D′, we infer a parameter A that can “compromise” with it: the actual body moves to accommodate A as much as possible, while A also adjusts itself to kinda suit what the body actually did.

Here’s the part where I’m really simplifying what stuff does, to use more of a planning as inference explanation than “pure” active inference. I could talk about “pure” active inference, but it’s too fucking complicated and badly-written to get a useful intuition. Friston’s “pure” active inference papers often give models that would have very different empirical content from each-other, but which all get optimized using variational inference, so he kinda pretends they’re all the same. Unfortunately, this is something most people in neuroscience or cognitive science do to simplify models enough to fit one experiment well, instead of having to invent a cognitive architecture that might fit all experiments badly.

4) So now, if I set a goal by clamping some variables in P (or by imposing “goal” priors on them, clamping them to within some range of values with noise), I can’t really just optimize my beliefs Q(H) to fit the new clamped model. Q is really Q(H, A), and it has to approximate P(H, D′ | D). Instead, I can only optimize the motor commands A to fit the clamped P. Actually doing so reaches a “Bayes-optimal” compromise between my current bodily state and really moving. Once Q already carries a good dynamical model (through time) of how my body and senses move (trajectories through time), changing A as a function of time lets me move as I please, even assuming my actual movements may be noisy with respect to my motor commands.

That’s really all “active inference” is: variational inference with body position as a generative parameter, and motor commands as the variational parameter approximating it. You set motor commands to get the body position you want, then body position changes noisily based on motor commands. This keeps getting done until the ELBO is maximized/free-energy minimized, and now I’m eating the pizza (as a process over time).
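
If it helps, here's a one-variable numerical cartoon of steps 1–4 above (my own toy with made-up precisions, not one of Friston's actual simulations): the belief gets pulled toward the goal prior, the motor command gets pulled toward the belief, and the body follows.

```python
import numpy as np

# One hidden variable x, "where my hand is", with a goal prior at the pizza.
goal  = 1.0      # the "clamped" prior mean: hand at pizza
sig_p = 0.5      # (made-up) spread of the goal prior
sig_o = 0.5      # (made-up) proprioceptive noise
dt, steps = 0.01, 5000

body = 0.0       # true hand position (the agent only sees it through o)
mu   = 0.0       # the agent's belief about x
a    = 0.0       # motor command (a velocity)

def F(mu, o):
    """Variational free energy under a Gaussian (Laplace-style) approximation."""
    return (o - mu) ** 2 / (2 * sig_o ** 2) + (mu - goal) ** 2 / (2 * sig_p ** 2)

for _ in range(steps):
    o = body                                     # proprioceptive observation
    dF_dmu = -(o - mu) / sig_o ** 2 + (mu - goal) / sig_p ** 2
    dF_do  =  (o - mu) / sig_o ** 2
    mu -= dt * dF_dmu        # perception: update the belief
    a  -= dt * dF_do         # action: change what I will sense (do/da = 1 here)
    body += dt * a           # the world responds to the motor command

print("body:", round(body, 2), "belief:", round(mu, 2), "free energy:", round(F(mu, body), 4))
```

Both the body and the belief end up at the goal, which is the sense in which setting a prior and letting active inference run stands in for having a desire and acting on it.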

(point 2) Why P(D|D′)P(D′|H) and not P(D|D′,H)P(D′|H)?

Ok, now a post on motivation, affect, and emotion: attempting to explain sex, money, and pizza. Then I’ll try a post on some of my own theories/ideas regarding some stuff. Together, I’m hoping these two posts address the Dark Room Problem in a sufficient way. HEY SCOTT, you’ll want to read this, because I’m going to link a paper giving a better explanation of depression than I think Friston posits.

The following ideas come from one of my advisers who studies emotion. I may bungle it, because our class on the embodied neuroscience of this stuff hasn’t gotten too far.

The core of “emotion” is really this thing we call core affect, and it’s actually the core job of any biological brain at all. This is: regulate the states of the internal organs (particularly the sympathetic and parasympathetic nervous systems) to keep the viscera functioning well and the organism “doing its job” (survival and reproduction).

What is “its job”? Well, that’s where we actually get programmed-in, innate “priors” that express goals. Her idea is, evolution endows organisms with some nice idea of what internal organ states are good, in terms of valence (goodness/badness) and arousal (preparedness for action or inaction, potentially: emphasis on the sympathetic or parasympathetic nervous system’s regulatory functions). You can think of arousal and sympathetic/parasympathetic as composing a spectrum between the counterposed poles of “fight or flight” and “rest, digest, reproduce”. Spending time in an arousal state affects your internal physiology, so it then affects valence. We now get one of the really useful, interesting empirical predictions to fall right out: young and healthy people like spending time in high-arousal states, while older or less healthy people prefer low-arousal states. That is, even provided you’re in a pleasurable state, young people will prefer more active pleasures (sports, video gaming, sex) while old people will prefer passive pleasures (sitting on the porch with a drink yelling at children). Since this is all physiology, basically everything impacts it: what you eat, how you socialize, how often you mate.

The brain is thus a specialized organ with a specific job: to proactively, predictively regulate those internal states (allostasis), because reactively regulating them (homeostasis) doesn’t work as well. Note that the brain now has its own metabolic demands and arousal/relaxation spectrum, giving rise to bounded rationality in the brain’s Bayesian modeling and feelings like boredom or mental tiredness. The brain’s regulation of the internal organs proceeds via closed-loop predictive control, which can be made really accurate and computationally efficient. We observe anatomically that the interoceptive (internal perception) and visceromotor (exactly what it says on the tin) networks in the brain are at the “core”, seemingly at the “highest level” of the predictive model, and basically control almost everything else in the name of keeping your physiology in the states prescribed as positive by evolution as useful proxies for survival and reproduction.
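
(To illustrate why proactive beats reactive, with completely made-up numbers: a regulator that cancels the disturbance it expects to arrive does much better than one that only corrects the error it already sees.)

```python
import numpy as np

# Toy comparison of reactive (homeostasis-style) vs predictive (allostasis-style)
# regulation of one physiological variable around a set-point. Purely illustrative.
steps, dt, setpoint = 1000, 0.1, 1.0

def disturbance(t):
    return 0.5 * np.sin(0.5 * t)   # a predictable, recurring demand

def mean_error(predictive):
    x, errors = setpoint, []
    for i in range(steps):
        t = i * dt
        # Reactive control corrects only the error it currently sees;
        # predictive control also cancels the disturbance it expects next step.
        u = 2.0 * (setpoint - x) - (disturbance(t + dt) if predictive else 0.0)
        x += dt * (u + disturbance(t))
        errors.append(abs(x - setpoint))
    return round(float(np.mean(errors)), 4)

print("reactive (homeostasis):  ", mean_error(False))
print("predictive (allostasis): ", mean_error(True))
```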

Get this wrong, however, and the brain-body system can wind up in an accidental positive feedback that moves it over to a new equilibrium of consistently negative valence with either consistent high arousal (anxiety) or consistent low arousal (depression). Depression and anxiety thus result from the brain continually getting the impression that the body is in shitty, low-energy, low-activity states, and then sending internal motor commands designed to correct the problem, which actually, due to brain miscalibration, make it worse. You sleep too much, you eat too much or too little, you don’t go outside, you misattribute negative valence to your friends when it’s actually your job, etc. Things like a healthy diet, exercise, and sunlight can try to bring the body closer to genuinely optimal physiological states, which helps it yell at the brain that actually you’re healthy now and it should stop fucking shit up by misallocating physiological resources.

“Emotions” wind up being something vaguely like your “mood” (your core affect system’s assessment of your internal physiology’s valence and arousal) combined with a causal “appraisal” done by the brain using sensory data, combined with a physiological and external plan of action issued by the brain.

You’re not motivated to sit in a Dark Room because the “predictions” that your motor systems care about are internal, physiological hyperparameters which can only be revised to a very limited extent, or which can be interpreted as some form of reinforcement signalling. You go into a Dark Room and your external (exteroceptive, in neuro-speak) senses have really low surprise, but your internal senses and internal motor systems are yelling that your organs say shit’s fucked up. Since your organs say shit’s fucked up, “surprise” is now very high, and you need to go change your external sensory and motor variables to deal with that shit.

Note that you can sometimes seek out calming, boring external sensory states, because your brain has demanded a lot from your metabolism and physiology lately, so it’s “out of energy” and you need to “relax your mind”.

Pizza becomes positively valenced when you are hungry, especially if you’re low on fats and glucose. Sex becomes most salient when your parasympathetic nervous system is dominant: your body believes that it’s safe, and the resources available for action can now be devoted to reproduction over survival.

Note that the actual physiological details here could, once again, be very crude approximations of the truth or straight-up wrong, because our class just hasn’t gotten far enough to really hammer everything in.

Scott writes on tumblr:

I don’t think I even understand the most basic point about how a probability distribution equals a utility function. What’s the probability distribution equal to “maximize paperclips”? Is it “state of the world with lots of paperclips - 100%, state of the world with no paperclips, 0%”? How do you assign probability to states of the world with 5, 10, or 200 paperclips?

I know nothing about this discussion, but this one is easy:

The utility function U(w) corresponds to the distribution P(w) ∝ exp(U(w)).

(i.e. P(w) = exp(U(w))/Z, where Z is a meaningless number we choose to make the total probability add up to 1.)

Without math: every time you add one paperclip to a possible world, you make it 10% more likely. On this perspective, there is a difference between kind of wanting paperclips and really wanting paperclips--if you really want paperclips, adding one paperclip to the world makes it twice as likely. This determines how you trade off paperclips vs. other kinds of surprise.

Maximizing expected log probability under this distribution is exactly the same as maximizing the expectation of U.

You can combine the exp(U(w)) term with other facts you know about the world, by multiplying them (and then adjusting the normalization constant appropriately).

A very similar formulation is often used in inverse reinforcement learning (MaxEnt IRL).

Another part of the picture that isn't complicated is that the exact same algorithms can be used for probabilistic inference (finding good explanations for the data) and planning (finding a plan that achieves some goal). In fact this connection is useful and people in AI sometimes exploit it. It's a bit deeper than it sounds but not that deep. See planning as inference, which Eli mentions above. It seems worth understanding this simple idea before trying to understand some extremely confusing pile of ideas.
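
A minimal cartoon of that connection (numbers invented, not from any particular paper): put a prior over actions, condition on the goal having been achieved, and the posterior over actions is your plan.

```python
import numpy as np

# Planning as inference, in miniature.
actions = ["left", "right", "wait"]
p_action = np.array([1/3, 1/3, 1/3])                  # prior over actions
p_goal_given_action = np.array([0.2, 0.7, 0.05])      # P(goal achieved | action)

# Inference: P(action | goal achieved) ∝ P(goal achieved | action) * P(action)
posterior = p_goal_given_action * p_action
posterior /= posterior.sum()

for action, p in zip(actions, posterior):
    print(action, round(float(p), 3))

# The same machinery that answers "what caused this observation?" here picks
# out the action most likely to have caused the goal.
```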

Another important distinction: there are two different algorithms one might describe as "minimizing prediction error:"

I think the more natural one is algorithm A: you adjust your beliefs to minimize prediction error (after translating your preferences into "optimistic beliefs"). Then you act according to your beliefs about how you will act. This is equivalent to independently forming beliefs and then acting to get what you want, it's just an implementation detail.

There is a much more complicated family of algorithms, call them algorithm B, where you actually plan in order to change the observations you'll make in the future, with the goal of minimizing prediction error. This is the version that would cause you to e.g. go read a textbook, or lock yourself in a dark room. This version is algorithmically way more complicated to implement, even though it maybe sounds simpler. It also has all kinds of weird implications and it's not easy to see how to turn it into something that isn't obviously wrong.

Regardless of which view you prefer, it seems important to recognize the difference between the two. In particular, evidence for us using algorithm A shouldn't be interpreted as evidence that we use algorithm B.

It sounds like Friston intends algorithm B. This version is pretty different from anything that researchers in AI use, and I'm pretty skeptical (based on observations of humans and the surface implausibility of the story rather than any knowledge about the area).

Paul, this is very helpful! Finally I understand what this "active inference" stuff is about. I wonder whether there were any significant theoretical results about these methods since Rawlik et al 2012?

Oh hey, so that's the original KL control paper. Saved!

The utility function U(w) corresponds to the distribution P(w)∝exp(U(w)).

Not so fast.

Keep in mind that the utility function is defined up to an arbitrary positive affine transformation, while the softmax distribution is invariant only up to shifts: P(w) ∝ exp(βU(w)) will be a different distribution depending on the inverse temperature β (the higher, the more peaked the distribution will be on the mode), while in von Neumann–Morgenstern theory of utility, U(w) and βU(w) + c represent the same preferences for any positive β.

Maximizing expected log probability under this distribution is exactly the same as maximizing the expectation of U.

It's not exactly the same.

Let's assume that there are two possible world states: 0 and 1, and two available actions: action A puts the world in state 0 with 99% probability (P(0|A) = 0.99) while action B puts the world in state 0 with 50% probability (P(0|B) = 0.5).

Let U(0) = 1 and U(1) = 0.

Under expected utility maximization, action A is clearly optimal.

Now define P(w) ∝ exp(U(w)), so that P(0) = e/(1+e) ≈ 0.73 and P(1) = 1/(1+e) ≈ 0.27.

The expected log-probability (the negative cross-entropy) −H(P, P(⋅|A)) is about −1.25 nats, while −H(P, P(⋅|B)) is about −0.69 nats, hence action B is optimal.

You do get action A as optimal if you reverse the distributions in the negative cross-entropies (−H(P(⋅|A), P) and −H(P(⋅|B), P)), but this does not correspond to how inference is normally done.
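
For anyone who wants to check the arithmetic, here is the computation, assuming U(0) = 1 and U(1) = 0 (one choice of utilities consistent with the argument; the exact values don't matter qualitatively as long as the preference for state 0 is mild):

```python
import math

# Two world states (0, 1), utilities, and the softmax "preference" distribution.
U = [1.0, 0.0]
Z = sum(math.exp(u) for u in U)
P = [math.exp(u) / Z for u in U]              # approximately [0.73, 0.27]

Q = {"A": [0.99, 0.01], "B": [0.5, 0.5]}      # outcome distributions of the two actions

for a, q in Q.items():
    expected_utility = sum(qi * ui for qi, ui in zip(q, U))
    neg_xent_P_Q = sum(pi * math.log(qi) for pi, qi in zip(P, q))   # -H(P, Q_a)
    neg_xent_Q_P = sum(qi * math.log(pi) for qi, pi in zip(q, P))   # -H(Q_a, P)
    print(a, round(expected_utility, 2), round(neg_xent_P_Q, 2), round(neg_xent_Q_P, 2))
# A wins on expected utility and on -H(Q_a, P); B wins on -H(P, Q_a): -0.69 vs -1.25.
```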

To get behavior you need preferences + temperature; that's what I meant by saying there was a difference between wanting X a little and wanting X a lot.

I agree that the formulation I gave benefits actions that generate a lot of entropy. Really you want to consider the causal entropy of your actions; I think that means doing the computation separately for each sequence of actions. I agree that's less elegant.

if the free energy principle were right, would it disprove the orthogonality thesis?

As far as I can tell, it would not - unless you think that the determinism of physics also disproves the orthogonality thesis (because if the world is deterministic, then you can't get every possible motivation, right? Just the ones that actually happen in the world).

Free energy explains behaviours and their opposites - it explains why someone punches someone or refrains from doing it, eats sushi or hamburger or soylent for lunch or skips it entirely, does/doesn't, wants/doesn't want, stays/leaves... builds paperclips/doesn't build paperclips...

This doesn't mean that free energy is vacuous, any more than sometimes predicting sunshine and sometimes predicting snow makes weather prediction vacuous. It means that weather prediction/free energy need some other set of inputs to predict an action. In the case of weather prediction, this is things like pressure, wind speed, satellite imagery, etc... In the case of free energy, it's less clear what the other inputs are, but motivation and preferences seem perfectly valid inputs.

(for the Bayesian version of Free Energy, the evidence and the priors can serve as the - variable - inputs)

Ok, now the post where I go into my own theory on how to avoid the Dark Room Problem, even without physiological goals.

The brain isn’t just configured to learn any old predictive or causal model of the world. It has to learn the distal causes of its sensory stimuli: the ones that reliably cause the same thing, over and over again, which can be modeled in a tractable way.

If I see a sandwich (which I do right now, it’s lunchtime), one of the important causes is that photons are bouncing off the sandwich, hitting my eyes, and stimulating my retina. However, most photons don’t make me see a sandwich; they make me see other things, and trying to make a model complex enough that exact photon behavior becomes parameters instead of noise is way too complicated.

So instead, I model the cause of my seeing a sandwich as being the sandwich. I see a sandwich because there really is a sandwich.

The useful part about this is that since I’m modeling the consistent, reliable, repeatable causes, these same inferences also support and explain my active interventions. I see a sandwich because there really is a sandwich, and that explains why I can move my hands and mouth to eat the sandwich, and why when I eat the sandwich, I taste a sandwich. Photons don’t really explain any of that without recourse to the sandwich.

However, if I were to reach for the sandwich and find that my hands pass through it, I would have to expand my hypothesis space to include ghost sandwiches or living in a simulation. Some people think the brain can do this with nonparametric models: probabilistic models of infinite stuff, of which I use finite pieces to make predictions. When new data comes in that supports a more complex model, I just expand the finite piece of the infinite object that I’m actually using. The downside is that a nonparametric model will always, irreducibly, have a bit of extra uncertainty “left over” when compared to a parametric model that started from the right degree of complexity. The nonparametric model has more things to be uncertain about, so it’s always a little more uncertain.
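
For concreteness, here's a minimal sketch of one such nonparametric model, a Chinese restaurant process (just an illustration of the "finite piece of an infinite object" idea, not a model of distal causes specifically):

```python
import random

# Chinese-restaurant-process cluster assignments: the model is "infinite"
# (there is always probability proportional to alpha of opening a brand-new
# cluster), but at any point only a finite number of clusters are in use, and
# surprising new data can expand that finite piece.
def crp(n_points, alpha=1.0, seed=0):
    rng = random.Random(seed)
    counts = []                       # size of each cluster used so far
    assignments = []
    for _ in range(n_points):
        weights = counts + [alpha]    # existing clusters, plus "something new"
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(counts):
            counts.append(0)          # open a new cluster: the model grows
        counts[k] += 1
        assignments.append(k)
    return assignments, counts

print(crp(25))   # more data occasionally brings a brand-new cluster into use
```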

How can these ideas apply to the Dark Room? Well, if I go into a Dark Room, I’m actually sealing myself off from the distal causes of sensations. The walls of the room block out what’s going on outside the room, so I have no idea when, for instance, someone might knock on the door. Really knowing what’s going on requires confidence about the distal causal structure of my environment, not just confidence about the proximal structure of a small local environment. Otherwise, I could always just say, “I’m certain that photons are hitting my eyeballs in some reasonable configuration”, and I’d never need to move or do any inferences at all.

It gets worse! If my model of those distal causes is nonparametric, it always has extra leftover uncertainty. No matter how confident I am about the stuff I’ve seen, I never have complete evidence that I’ve seen everything, that there isn’t an even bigger universe out there I haven’t observed yet.

So really “minimizing prediction error” with respect to a nonparametric model of distal causes ends up requiring that I not only leave my room, but that I explore and control as much of the world as possible, at all scales which ever significantly impact my observations, without limit.

The thing you are minimizing by going outside isn't prediction error for sense data; it's a sort of expected prediction error over a spatial extent in your model. I think both of these are valid concepts to think about, so it's not like this argument shows that prediction error is "really" about building a model of the world and then ensuring that it's both correct and complete - it's an argument about what's more reasonable to model humans as doing.

Of course, once you have two possibilities, that usually means you have infinite possibilities. I see where this could lead to people generating a whole family of formalisms. But I still feel like this route leads to oversimplification.

For example, sometimes people are happy to just fool their sense-data - we take anesthetics, or look at pornography, or drink diet soda. But sometimes people aren't - the pictures-of-relationships industry is much smaller than the porn industry, people buy free-range beef, or a genuine Rembrandt.

Oh, I wasn't really trying at all to talk about what prediction-error minimization "really does" there, more to point out that it changes radically depending on your modeling assumptions.

The "distal causes" bit is also something I really want to find the time and expertise to formalize. There are studies of causal judgements grounding moral responsibility of agents and I'd really like to see if we can use the notion of distal causation to generalize from there to how people learn causal models that capture action-affordances.

My ulterior motive is to get to the point where I can understand Friston’s recent explanation of depression, relevant to my interests as a psychiatrist.

I started reading this but it's annoying to read so I'm going to stop. I don't know what's standard in the field here but there seems to be way more jargon than necessary. Also the explanation of depression does not ring true to me:

In this sense, we might conjecture that major depression occurs when the brain is certain that it will encounter an uncertain environment, i.e. the world is inherently volatile, capricious, unpredictable and uncontrollable.

Offhand, this sounds like a description of anxiety to me, not depression. I first really experienced something I'd call anxiety a little over a year ago, and the internal experience felt a lot to me like tremendous uncertainty about whether X was happening, where X felt like a life-and-death situation. Whereas, to the extent that I've experienced anything like depression, it felt more like certainty that nothing good will ever happen.

I do get the sense that this paper has something valuable to say but I don't want to put in the additional effort to figure out what that thing is at the moment. I'm also distracted by the repeated references to mean and precision being sufficient statistics, which I can't make sense of; those are sufficient statistics for, say, Gaussian distributions, but certainly not in general.

I am curious about the dark room problem. Even if we accept surprise minimization, the brain developed in an environment where a dark room was generally not an option: the sun came up, night fell, the weather changed. Aside from the external environment, there remains the problem of thirst and hunger.

I also note that we did go to a lot of trouble to build as many rooms as possible with constant light, constant temperature, and constant humidity. Locking oneself alone in a dark room and not coming out is the archetype of depression, and depressed people are less prone to optimistic biases - this seems like it would correlate to 'more successful in resolving their surprise'. Depression: the state where no stimulus is a superstimulus.

What about surprise being based on a pattern of change rather than a state? I imagine if the sun failed to rise one morning the freakout rate would approach 100%. I also note that people get into habits, and become attached to them even if they are bad. This amounts to a predictable change in stimuli we impose through action. If we peg to the expected change in stimuli, then a constant stimulus is the case where the expected change is 0.

In line with these thoughts, it initially appears to me that the dark room problem is not an actual problem; the distinction between accuracy-maximization and surprise-minimization is unclear to me.

if the free energy principle were right, would it disprove the orthogonality thesis? Might it be impossible to design a working brain with any goal besides free energy reduction? Would anything – even a paperclip maximizer – have to start by minimizing uncertainty, and then add paperclip maximization in later as a hack? Would it change anything if it did?

My own take on this is "kinda no", but mostly because I already see the orthogonality thesis as holding only for a sufficiently general intelligence. That is, there are probably lots of things that are not orthogonal to what we think of as general intelligence, and it's only after you get those basics up to a certain level that you get an intelligence general enough for orthogonality to kick in: capabilities can be orthogonal to telos above a certain generality threshold, but below that threshold you get capabilities and telos that are correlated.

For example, I suspect phenomenal consciousness to be a property that correlates capabilities and telos around the intentional relation, and free energy may well be looking at a related notion from the perspective of how telos is achieved; to a certain extent it looks to me like anything that does anything "intelligent" will follow these patterns if it can be called "general". We still have to worry about orthogonality, though, because these are very low-level kinds of telos that shape an agent's goals in correlation to its capabilities, but not so strongly as to allow steering away from dangerous territory.

In short, orthogonality is probably not true all the way down, but it doesn't matter because the orthogonality thesis is more about orthogonality after generality of intelligence rather than before.

Discussion question for machine ethics researchers – if the free energy principle were right, would it disprove the orthogonality thesis?

No, and for two reasons.

1) The free energy principle is descriptive only, as Friston says in the Alius interview. It (apparently) makes no predictions about behaviour, much less about terminal goals.

2) It applies specifically to biological organisms. Most of Scott's sources note that this behaviour arose through natural selection, to handle certain specific types of uncertainty related to staying alive. It has no bearing whatsoever on, say, alien intelligences, much less computers, which can be programmed with any mind we can design.

This assumes that the free energy principle is true & correct, which I’m not sure it is. Being unfalsifiable is a bad start, as is the fact that Karl Friston’s work is impenetrable. Most simplified explanations of the free energy principle are either equally impenetrable or seem somehow confused (this is difficult to quantify, but reading them hasn’t really given me any insight into behaviour; if this is actually revolutionary, there should be some combination of words that makes the true meaning shine through like the sun on a cloudless day). And as far as I know, nobody has used free energy or its related concepts to achieve anything remarkable. Strong evidence that this is probably pointless.

Promoted to frontpage.

Reposting my comment from SSC:

I [just now] read the 2009 letter in Cell. It was very clear that this was a proposal for a model of human perception and action that was not at all tautological. But it didn’t explain why we’d expect this model to be true… instead, it had a lot of handwaving, and for “more details,” referred me to the 2010 Nature paper. Which I then skimmed, looking for the derivation or motivation of these equations (e.g. from figure 1 in Friston 2009). Of which I found exactly nothing.
Basically, when presented with an idea, it’s often hard to tell whether it’s true in a vacuum. But it’s not so hard to evaluate why it’s true – there are so many false things that if you believe something without good reason, it’s probably false. So rather than delving into issues with the idea itself, which might lead to engaging with some very vague writing, it’s a lot easier to just note that the mathematical parts of this model are pulled directly from the posterior.

But this definitely seems like the better website to talk to Eli Sennesh on :)

>But this definitely seems like the better website to talk to Eli Sennesh on :)

Somewhat honored, though I'm not sure we've met before :-).

I'm mostly posting here by now, because I'm... somewhat disappointed with people saying things like "it's bullshit" or "the mathematical parts of this model are pulled directly from the posterior".

IMHO, there's a lot to the strictly neuroscientific, biological aspects of the free-energy theory, and it integrates well with physics (good prediction resists disorder, "Thermodynamics of Prediction") and with evolution (predictive regulation being the unique contribution of the brain).

Mathematically, well, I'm sure that a purely theoretical probabilist or analyst can pick everything up quickly.

Computationally and psychologically, it's a hot mess. It feels, to me at least, like trying to explain a desktop computer by recourse to saying, "It successively and continually attempts to satisfy its beliefs under the logical model inherent to its circuitry", that is, to compute a tree of NANDs of binary inputs. Is the explanation literally true? Yes! Why? Because it's a universal explanation of the most convenient way we know of to implement Turing-complete computation in hardware.

But bullshit? No, I don't think so.

I wind up putting Friston in the context of Tenenbaum, Goodman, Gershman, etc. Ok, it makes complete sense that the most primitive hardware-level operations of the brain may be probabilistic. We have plenty of evidence that the brain does probabilistic inference on multiple levels, including the seeming "top-down" ones like decision making and motor control. Having evolved one useful mechanism, it makes sense that evolution would just try to put more and more of them together, like Lego blocks, occasionally varying the design slightly to implement a new generative model or inference method within the basic layer or microcircuit doing the probabilistic job.

That's still not a large-scale explanation of everything. It's a language. Telling you the grammar of C or Lisp doesn't teach you the architecture of Half Life 2. Showing that it's a probability model just shows you that you can probably write it in Church or Pyro given enough hardware, and those allow all computably sampleable distributions -- an immensely broad class of models!

On the other hand, if you had previously not even known what C or Turing machines were, and were just wondering how the guns and headcrabs got on the shiny box, you've made a big advance, haven't you?

I think about predictive brain models by trying to parse them as something like probabilistic programs:

  • What predictions? That is, what original generative model P(x, z), with what observable variables x?
  • What inference methods? If variational, what sort of guide model Q(z | x)? If Monte Carlo, what proposal distribution?
  • Most importantly, which predictions are updated (via inference), and which are fulfilled (via action)?

The usual way to spot the latter in an active inference paper is to look for an equation saying something like P(u) ∝ exp(−KL[Q(o|u) ‖ P(o)]). That denotes control states being sampled from a Boltzmann distribution whose energy function is the divergence between empirical observations and actual goals.

The usual way to spot the latter in a computational cognitive science paper is just to look for an equation saying something like a ∼ P(a | o = goal), which just says that you sample actions which make your goal most likely via ordinary conditionalizing.
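
To make the first of these two templates concrete, here's a toy numerical version (invented numbers; the second template is just ordinary conditioning, as in the planning-as-inference sketch earlier):

```python
import math

# Toy version of the "active inference" template: two control states u, each
# predicting a distribution over a binary observation, plus a goal
# distribution; sample u from a Boltzmann distribution whose energy is the KL
# divergence between predicted observations and the goal.
Q_obs_given_u = {"u1": [0.9, 0.1], "u2": [0.4, 0.6]}
goal = [0.95, 0.05]                       # the observations the agent "expects" to see

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

energies = {u: kl(q, goal) for u, q in Q_obs_given_u.items()}
weights = {u: math.exp(-e) for u, e in energies.items()}
Z = sum(weights.values())
print({u: round(w / Z, 3) for u, w in weights.items()})
# u1, whose predicted observations are closest to the goal, gets most of the mass.
```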

Like I said, all this probabilistic mind stuff is a language to learn, which then lets you read lots of neuroscience and cognitive science papers more fluently. The reward is that, once you understand it, you get a nice solid intuition that, on the one hand, some papers might be mistaken, but on the other hand, with a few core ideas like hierarchical probability models and sampling actions from inferences, we've got an "assembly language" for describing a wide variety of possible cognitions.

I'm not qualified to comment on the literature in general or how research goes - if you say that treating the brain as drawing actions from a Boltzmann distribution on this weird divergence is useful, I believe you. But it seems like you can extract very specific claims from Friston 2009, like the brain having a model from perceptions to a distribution over "causes" (model parameters), and each step of learning in the brain reducing the KL divergence (specifically!) between a mutable internal generative model of "causes" and the fixed sense-inferred "causes." This is the sort of thing that I failed to find a justification for, and therefore am treating as having a tenuous relation to real brains. And I don't think this is just nitpicking, because fixed inference of causes is used to get fixed motivations that have preferences over causes.

So we could quibble over the details of Friston 2009, *buuuuut*...

I don't find it useful to take Friston at 110% of his word. I find it more useful to read him like I read all other cognitive modelers: as establishing a language and a set of techniques whose scientific rigor he demonstrates via their application to novel experiments and known data.

He's no more an absolute gold-standard than, say, Dennett, but his techniques have a certain theoretical elegance in terms of positing that the brain is built out of very few, very efficient core mechanisms, applied to abundant embodied training data, instead of very many mechanisms with relatively little training or processing power for each one.

Rather than quibble over him, I think that this morning in the shower I got what he means on a slightly deeper level, and now I seriously want to write a parody entitled, "So You Want to Write a Friston Paper".

Somewhat meta question: is it better to comment here, or on SSC? Both?

We have Karma here, and probably generally feel more comfortable making references to content in the sequences or other material on LessWrong, and probably also hard-sciences in general (though less sure about that). SSC has a larger readership and I expect Scott will be keeping more up-to-speed with the comments on SSC. So that seems to be the tradeoff for me.

Crossposting to both locations seems pretty reasonable to me, and I would definitely appreciate that, if it isn’t too inconvenient for you.

Reading this I was wondering when Scott suddenly got way more confident with math. Turns out the quotes are messed up in this version. Only the first paragraph of each long block-quote is quoted properly.

Thanks, fixed.

Maybe rather than 'free energy' there is a better term? How about "minimum energy machine" - the idea here being that what the brain does is generate a model or simulation that tries to produce an internal state that exactly cancels out the incoming stimuli. What is used for error detection is something called "edge detection", which is a way to detect both movement and amount by comparing the input against the memory or simulation and then modifying the model/simulation so that it once again matches the incoming signal.

This is interesting because the biological/neurological processing each sense needs (and vision especially) has many different delays, SO the model/simulation needs to run in the future, taking into account the various processing delays, so that in the moment the sense data arrive they are greeted with their opposite to edge-detect against.

The different senses have different worlds, with the innermost being the body/brain, which also controls the muscles. This is divided into two parts - the pre and the active - and there is a time delay in between.

When the brain is generating a good TRANSFORM that accurately predicts the incoming stimulus, there is a minimum energy point. With edge detection and ML/AI-style networks this minimum is maintained, so the transform the brain has created is a model of the outside (and of the inside body/brain senses) that is constantly seeking to minimize energy - the energy necessary to match the outside stimulus that would otherwise overwhelm the brain; think of the problem of seizures.
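
Here's a minimal sketch of that cancel-the-input loop (a toy scalar example with arbitrary numbers):

```python
# Toy predictive-cancellation loop: the internal model tries to produce a value
# that cancels the incoming stimulus; only the residual "edge" (error) is used
# to update the model. Illustration only; the numbers are made up.
stimulus = 5.0        # incoming signal the brain must match
prediction = 0.0      # internal model's current guess
learning_rate = 0.3

for step in range(10):
    error = stimulus - prediction          # the "edge": what the model failed to cancel
    prediction += learning_rate * error    # adjust the model to shrink the edge
    print(step, round(prediction, 3), round(error, 3))
# prediction converges toward the stimulus, and the residual error toward 0
```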

And one can imagine that this type of processing - from the earliest life - works long before there are eyes and ears: in the dark depths of the ocean there was still touch and temperature sensing, and the need to find and eat enough to meet the BMR required to maintain life... the internal feelings then being part of what the minimum-energy brain must also minimize by using the transform/model/simulation... hence the feeling of hunger or of being satiated.

And the above can and must be done at all levels of life, from the first beginnings of memory, which is necessary to make a model or have a history from which to make choices... and this type of communication/cooperation can be seen to arise out of things that want to flock or are drawn to each other and to their own type of life... before sex... but then this would soon invent sex, because it is an excellent way to produce the higher-level model, or species, from the individuals.


there is an interesting podcast on the evolution of cooperation.........

flocking behavior only requires a couple of simple rules (a toy sketch in code follows below):

1. Get closer to your neighbor.

2. Do not hit or harm your neighbor

(and the flock arises - now let's add the reward)

3. Watch a member or two of the flock thus formed that is NOT next to you, but maybe 7 or so away, and do what they do if they SURPRISE you by jumping outside the flock!

And from the above you now automatically get almost the protective value, or reward, of a flock.
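
A toy runnable version of these rules (a 1-D sketch with arbitrary parameters; rule 3 is only roughly approximated):

```python
import random

# 1-D "birds" that (1) drift toward their nearest neighbor and (2) back off if
# too close. Rule 3 (copying a distant flock-mate that surprises you by
# jumping) is approximated by copying the watched bird's move when it is
# unusually large. Illustration only; parameters are arbitrary.
random.seed(0)
N, STEPS = 12, 60
pos = [random.uniform(0, 10) for _ in range(N)]
last_move = [0.0] * N
watched = [(i + 7) % N for i in range(N)]        # "a member ~7 away" from you

for _ in range(STEPS):
    new_pos, new_move = [], []
    for i in range(N):
        j = min((k for k in range(N) if k != i), key=lambda k: abs(pos[k] - pos[i]))
        step = 0.1 * (pos[j] - pos[i])           # rule 1: get closer to your neighbor
        if abs(pos[j] - pos[i]) < 0.2:
            step = -step                         # rule 2: don't crowd/hit your neighbor
        if abs(last_move[watched[i]]) > 0.5:
            step = last_move[watched[i]]         # rule 3: copy a surprising jump
        new_pos.append(pos[i] + step)
        new_move.append(step)
    pos, last_move = new_pos, new_move

print([round(p, 2) for p in sorted(pos)])        # positions have clumped into tight groups
```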

Now add numbers to the flock, based on food requirements. As the group grows in size and volume there are necessary tasks that must be accomplished, and what happens next is "division of labor", because what the new superorganism needs is a way to feed and take care of the group - which now provides one's living by taking up a specialty inside the colony.

And yes - recent research shows that in all life, each individual having to grow from an egg, feeding matters: even in an ant colony, larvae and pupae are fed differently depending upon location, and what is and is NOT fed to them by the workers determines temperament/personality, even though each has the same genes.

So the idea of adapting to lower energy requirements while staying alive does explain much else in life and life's development...

...AND... evolution is no longer an uphill process, but is rather like a stream running down a littered hillside that breaks into tributaries as it descends, predictably getting better and better at getting the resources necessary to maintain the body's order against entropy...

As a joke - or only partly so - life happens where the river empties into the ocean, where all the good stuff grows, all the animals drink, and communication is best accomplished...

...that's why we get port cities!