There are some conceptual differences. In RL, you define a value function for all possible states. In active inference, you make desirable sense data a priori likely. Sensory space is not only lower-dimensional than (unobserved) state space, but you only need to define a single point in it, rather than a function on the whole space. It's often a much more natural way of defining goals and is more similar to control theory than RL. You're directly optimizing for a desired (and known) outcome rather than having to figure out what to optimize for by reinforcement. For example, if you want a robot to walk to some goal point, RL would have to make the robot walk around a bit, figure out that the goal point gives high reward, and then do it (in another rollout). In active inference (and control theory), the robot already knows where the goal point is (or rather, what the world looks like when standing at that point), and merely figures out a sequence of actions that get it there.
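The walking-robot example can be sketched in a few lines. This is my own toy illustration (not from any active-inference library): the goal is stated directly as a desired observation, and planning is just searching for actions whose predicted observations match it, with no reward signal needed to discover the goal.

```python
# Toy sketch: a point "robot" on a line. In the active-inference /
# control-theory framing, the goal is given as a desired sense datum
# (what the robot observes when standing at the goal), and the robot
# searches for actions whose predicted observations match it.

GOAL_OBS = 5.0              # desired observation: "I sense myself at x = 5"
ACTIONS = [-1.0, 0.0, 1.0]  # move left, stay, move right


def predict(x, action):
    """Assumed known forward model: predicted next position."""
    return x + action


def plan(x, horizon=8):
    """Greedy one-step lookahead: at each step, pick the action whose
    predicted observation is closest to the desired one."""
    path = []
    for _ in range(horizon):
        best = min(ACTIONS, key=lambda a: abs(predict(x, a) - GOAL_OBS))
        x = predict(x, best)
        path.append(x)
    return path


print(plan(0.0))  # → [1.0, 2.0, 3.0, 4.0, 5.0, 5.0, 5.0, 5.0]
```

An RL agent would instead have to wander until it stumbled on the reward at x = 5 and only then learn to go there; here the target is known up front and only the action sequence has to be found.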
Another difference is that active inference automatically balances exploration and exploitation, while in RL it's usually a hyperparameter. In RL, exploration tends to look like doing many random actions early on, to figure out what gives reward, and later doing actions that keep the agent in high-reward states. In control theory, exploration is more bespoke, built specifically for system identification (learning a model) or adaptive control (adjusting known parameters based on observations). In active inference, there's no aimless flailing about, but the agent can do any kind of experiment that minimizes future uncertainty: testing which beliefs and actions are likely to achieve the desired sense data. Here's a nice demo of that:
Hi,
I'm going to study at Aalto in Helsinki next semester, and I'd like to join your meetups. Are they primarily in Finnish or English?
Note: Studenterhuset is closed on Sundays, so we've moved to the nearest Baresso!
Nice, CAI is another similar approach, kind of in between the three already mentioned. I think "losing a degree of freedom" is very much a good thing, both computationally and functionally.