Rational subagents will never bid more than the reward they actually expect.

A subagent might bid higher than the reward it actually expects in order to mislead the others and come out ahead in the long run, as in the toy sketch below.
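A minimal sketch of why that can happen (the auction rules, names, and numbers here are my own illustration, not anything from the post): if winning control also buys future influence, a subagent that overbids its true expected reward eats a short-term loss but nets more once the influence compounds.

```python
# Hypothetical toy: two subagents repeatedly bid for control of the agent.
# Each round is a first-price auction: the winner pays its bid, collects the
# round's reward, and gains influence (modelled here as a reward multiplier).

def run(rounds, bids, true_reward=1.0, influence_gain=0.5):
    """bids: subagent name -> the (fixed) bid it submits every round."""
    influence = {name: 1.0 for name in bids}
    payoff = {name: 0.0 for name in bids}
    for _ in range(rounds):
        winner = max(bids, key=lambda name: bids[name])
        # The winner pays its bid but earns reward scaled by its influence.
        payoff[winner] += true_reward * influence[winner] - bids[winner]
        influence[winner] += influence_gain
    return payoff

# "honest" bids exactly its per-round expected reward; "deceptive" overbids,
# taking an immediate loss to monopolise control.
print(run(10, {"honest": 1.0, "deceptive": 1.2}))
# {'honest': 0.0, 'deceptive': 20.5} -- overbidding wins every auction and
# still ends up well ahead once the influence multiplier compounds.
```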

we can resolve conflicts between beliefs and observations either by updating our beliefs, or by taking actions which make the beliefs come true

Or by ignoring the observation: I'd argue the agent should know that not every observation is worth taking into account.
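A minimal numeric sketch of all three options (a Gaussian toy of my own, not a canonical active-inference implementation): update the belief toward the observation, act so the world matches the belief, or gate the observation out when its precision, i.e. its reliability, falls below a threshold.

```python
# Belief mu about a scalar state, observation y, precisions pi_* (inverse
# variances). The threshold and the 'can_act' flag are illustrative.

def resolve(mu, y, pi_belief, pi_obs, pi_min=0.2, can_act=False):
    error = y - mu
    if pi_obs < pi_min:
        # Option 3: the observation is too unreliable to be worth weighing
        # in; keep the current belief and move on.
        return mu, "ignored observation"
    if can_act:
        # Option 2: active inference -- act on the world to cancel the
        # discrepancy, leaving the belief untouched.
        return mu, f"acted to remove error of {error:+.2f}"
    # Option 1: perceptual inference -- precision-weighted Bayesian update
    # of the belief toward the observation.
    gain = pi_obs / (pi_belief + pi_obs)
    return mu + gain * error, f"updated belief by {gain * error:+.2f}"

print(resolve(0.0, 1.0, pi_belief=1.0, pi_obs=1.0))                # update
print(resolve(0.0, 1.0, pi_belief=1.0, pi_obs=1.0, can_act=True))  # act
print(resolve(0.0, 1.0, pi_belief=1.0, pi_obs=0.05))               # ignore
```

In active-inference terms the third branch is just precision weighting taken to its limit: an observation assigned near-zero precision contributes nothing to the update, which is the formal sense in which an agent can "ignore" it.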

I think that what active inference is missing is the ability to model strategic interactions between different goals


The strategic-interaction gap you highlight echoes David Kirsh's distinction between pragmatic and epistemic actions (I'm collaborating with him). Current scale-free frameworks capture pragmatic, world-altering moves, but overlook epistemic actions: internal simulations and information-seeking that update the agent's belief state without changing the environment. That is exactly where those goal negotiations seem to unfold.
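To make the distinction concrete, here is a toy sketch (the door scenario, reliability figure, and function names are all hypothetical): a pragmatic action would change the door's state, whereas the epistemic action below, peeking at a noisy cue, leaves the environment untouched and pays off purely in expected reduction of belief entropy.

```python
import math

def entropy(p):
    """Entropy (bits) of a Bernoulli belief p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_entropy_after_peek(p, reliability=0.9):
    """Epistemic action: peek at a noisy cue about whether a door is
    unlocked. Only the belief state changes; the door does not."""
    # Probability the cue reads 'unlocked', marginalising over the true state.
    p_cue = p * reliability + (1 - p) * (1 - reliability)
    post_if_cue = p * reliability / p_cue
    post_if_not = p * (1 - reliability) / (1 - p_cue)
    return p_cue * entropy(post_if_cue) + (1 - p_cue) * entropy(post_if_not)

p = 0.5  # maximally uncertain belief about the door
print("entropy before:", entropy(p))                              # 1.000 bit
print("expected entropy after:", expected_entropy_after_peek(p))  # ~0.469
print("expected info gain:", entropy(p) - expected_entropy_after_peek(p))
# ~0.53 bits gained with zero change to the world: the value of the
# epistemic action in this toy.
```

An agent scoring candidate actions could then weigh this information gain against the pragmatic value of world-altering moves, which is roughly where I'd expect the goal negotiations you describe to surface.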