Wei_Dai comments on Atheism = Untheism + Antitheism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
In the behavioral interpretation, you weight observations, or the effects of possible strategies on observations/actions, not the way the territory is. The base level is the agent and the rules of its game with the environment. Everything else describes the form of this interaction, and answers questions not about the underlying reality, but about how the agent sees it. If a distinction you are making doesn't reach the level of influencing what the agent experiences, it's absent from this semantics: no weighting, no moving parts, no distinction at all.
For a salient example: if the agent, in the same fixed internal state, is instantiated multiple times (in the same environment at the same time, at different times, or even in different environments, with different probabilities under some notion of that), all of these instances and possibilities together go under one atomic black-box symbol in the territory corresponding to that state of the agent, with no internal structure. The structure can, however, be represented in preferences over strategies or sets of strategies for the agent.
Vladimir, are you proposing this "behavioral interpretation" for an AI design, or for us too? Is this an original idea of yours? Can you provide a link to a paper describing it in more detail?
I'm generalizing/analogizing from the stuff I read on coalgebras, and in this case I'm pretty sure the idea makes sense; it's probably explored elsewhere. You can start here, or go directly to Introduction to Coalgebra: Towards Mathematics of States and Observations (PDF).
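[Editor's note: the coalgebraic idea behind the comments above can be sketched concretely. A minimal illustration in Python, not from the original discussion: states of a Moore-machine-style coalgebra (each state yields an observation and a successor state), where two internally distinct states that generate the same observation stream are behaviorally indistinguishable, mirroring the "one atomic black-box symbol" point. The machine and its state names are hypothetical.]

```python
# A coalgebra for the stream functor F(X) = O x X: each state maps to
# (observation, next_state). Behavioral equivalence (bisimilarity) here
# means producing the same observation stream, regardless of which
# internal state the system "really" occupies.

def unfold(step, state, n):
    """Collect the first n observations generated from `state`."""
    out = []
    for _ in range(n):
        obs, state = step(state)
        out.append(obs)
    return out

# Hypothetical machine: 'a' and 'c' are distinct internal states, but
# both emit 0 and move to 'b', so no observation can tell them apart.
machine = {
    'a': (0, 'b'),
    'b': (1, 'a'),
    'c': (0, 'b'),
}
step = lambda s: machine[s]

print(unfold(step, 'a', 6))  # [0, 1, 0, 1, 0, 1]
print(unfold(step, 'c', 6))  # same stream: 'a' and 'c' are bisimilar
```

Under this semantics, a distinction between 'a' and 'c' never influences what is observed, so it is "absent": the two states collapse into one point of the behavioral state space.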