This post is eventually about partial agency. However, it's been a somewhat tricky point for me to convey; I take the long route. Epistemic status: slightly crazy.
I've occasionally said "Everything boils down to credit assignment problems."
What I really mean is that credit assignment pops up in a wide range of scenarios, and improvements to credit assignment algorithms have broad implications. For example:
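To make the term concrete, here is a minimal toy sketch (my own illustration, not from the post) of credit assignment in its reinforcement-learning form: a single delayed reward has to be apportioned among the earlier actions that led to it, here simply by discounting.

```python
# Toy credit assignment: a trajectory earns one delayed reward, and each
# earlier action is credited with the discounted return-to-go from its step.
# The discount factor and reward sequence are made up for illustration.

gamma = 0.9                      # assumed discount factor
rewards = [0.0, 0.0, 0.0, 1.0]   # reward only arrives at the final step

returns = []
g = 0.0
for r in reversed(rewards):      # back the reward up from the end of the episode
    g = r + gamma * g
    returns.append(g)
returns.reverse()

for t, g in enumerate(returns):
    print(f"action at t={t} gets credit {g:.3f}")
```

Real credit assignment algorithms do something much more sophisticated than this time-based discounting, but the underlying question is the same: which earlier choices get how much of the blame or praise for a later outcome?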
Suppose you value some virtue V and you want to encourage people to be better at it. Suppose also you are something of a “thought leader” or “public intellectual” — you have some ability to influence the culture around you through speech or writing.
Suppose Alice Almost is much more V-virtuous than the average person — say, she’s in the top one percent of the population at the practice of V. But she’s still exhibited some clear-cut failures of V. She’s almost V-virtuous, but not quite.
How should you engage with Alice in discourse, and how should you talk about Alice, if your goal is to get people to be more V-virtuous?
Well, it depends on what your specific goal is.
Raising the Global Median
If your goal is to raise the general population’s median V-level,...
Ohhhh, crazy twist at the end when she's your opponent. Sometimes I'm like "why can't utilitarian and deontological vegans get along? They eat the same food!" and, well, (a) they usually do, but the loud online ones don't, and (b) yes, this post explains why they become opponents over the future of the subculture. And the subculture is of course very important to both of them.
It's just sort of insane how this simple model of "make the subculture more inclusive / exclusive" escalates into a conflict far worse than the in-the-subculture / not-in-the-subculture question. I guess I know this happens bu...
I have only a surface-level understanding of this topic, but active inference (one of the theories of intelligent agency) views brains (and agents more generally) as prediction-error minimizers, and views actions as a way of changing the world so as to minimize error on certain extremely strongly held predictions: predictions held so strongly that it is easier to change the world to match them than to revise them.
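To make that last idea concrete, here is a minimal sketch (my own toy illustration, not a faithful implementation of active inference): the prediction is treated as fixed, so the only way to reduce prediction error is to act on the world until the observation matches it.

```python
# Toy sketch of "action as prediction-error minimization" (my reading of the
# idea, not code from any active-inference framework). The agent holds a
# fixed, effectively unrevisable prediction about an observation; since the
# prediction cannot move, gradient descent on the error moves the world.

predicted_obs = 1.0    # strongly held prediction (e.g. "my hand is at the target")
world_state = 0.0      # what the agent actually observes right now
step_size = 0.3        # assumed action gain

def prediction_error(state, prediction):
    return 0.5 * (state - prediction) ** 2

for step in range(10):
    grad = world_state - predicted_obs   # d(error)/d(world_state)
    world_state -= step_size * grad      # act on the world, not on the prediction
    print(f"step {step}: state={world_state:.3f}, "
          f"error={prediction_error(world_state, predicted_obs):.4f}")
```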
My understanding mostly comes from a post by Scott Alexander.