What if the state of agents is a kind of "make-believe"? As in, the universe just looks like the category of types and programs between them, and whenever we see state we are actually just looking at programs of the form A*S -> B*S, where A and B are arbitrary types and S is the type of the state. This is more or less the move used to model state in functional programs via the state monad. And that is probably not a coincidence ...
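To make that concrete, here is a minimal Haskell sketch (the running-total example is my own, nothing canonical): a stateful-looking program from A to B is "really" just a pure function (A, S) -> (B, S), which curries to A -> State S B, and the state monad merely hides the threading of S.

```haskell
import Control.Monad.State

-- A stateful program from Int to Int is underneath a pure function
-- (Int, S) -> (Int, S). Here the hidden state S is a running total.
addToTotal :: Int -> State Int Int
addToTotal x = do
  total <- get           -- read the hidden state S
  let total' = total + x
  put total'             -- write the new state
  return total'          -- the visible output B

main :: IO ()
main = print (runState (addToTotal 5) 10)  -- prints (15,15)
```

From the outside the program "has state"; run it with `runState` and the state threading becomes explicit, exposing the A*S -> B*S shape again.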
Epistemic Status: Ramblings of my current thoughts on computation.
I have been wondering about the nature of computation for some time now. For instance, what do we mean when we say the brain computes? I think the traditional answers are unsatisfactory. Chief among the problems is the role of observers in computation.
When an electronic calculator performs addition, it outputs the result by controlling pixels on a screen. Photons bounce off the screen and hit our retina, and the brain performs a dizzying array of computational work in order to make sense of that retinal input. However, when discussing the computation the calculator performs, we talk about automata and positions in the Chomsky hierarchy and the like, but we do not even consider the computational work the brain must do for that output to count as the result of addition.
The issue is that while the dynamics of the calculator are well defined and independent of any observer, both the inputs and the outputs only make sense by virtue of an external entity doing some work. What kind of work is this observer doing?
Ultimately I think the observer must be doing the same kind of work the calculator is doing. Everything is (open) dynamical systems changing state and interacting with each other.
When I think of a brain computing, I imagine a dynamical system taking inputs.
What are the "states" of the computation the brain is performing, given that we think of it as a dynamical system? Who is the observer of our brain dynamics? In order to avoid a homunculus, I think the meaningful computational states must be defined internally, without reference to external systems. Consider two distinct sets of neurons firing. Are these different computational states or not? The proposed answer here is that they are different only insofar as they constrain the future evolution of the brain in different ways. If they constrain the dynamics similarly, then they are similar computational states.
There are many details here about what it means exactly to constrain future states in similar ways, but I want to keep things high level in this post.
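Still, for a flavor of what "constraining the future similarly" could mean, here is a toy Haskell sketch: lump microstates together whenever their finite-horizon futures produce identical observations. Everything here (the six-state update rule, the observable, the horizon) is an invented assumption; a real account would have to handle inputs, noise, and unbounded horizons.

```haskell
import Data.List (groupBy, sortOn)

type StateId = Int

-- A made-up closed dynamical system on six microstates.
step :: StateId -> StateId
step s = (s * s + 1) `mod` 6

-- A coarse observable on microstates.
obs :: StateId -> Bool
obs = even

-- The k-step future of a microstate: the observations it will produce.
future :: Int -> StateId -> [Bool]
future k s = map obs (take k (iterate step s))

-- Two microstates are the same *computational* state when they
-- constrain the future identically, i.e. their k-step futures coincide.
computationalStates :: Int -> [StateId] -> [[StateId]]
computationalStates k states =
  groupBy (\a b -> future k a == future k b)
          (sortOn (future k) states)

main :: IO ()
main = print (computationalStates 4 [0 .. 5])
-- prints [[1,3,5],[0,2,4]]: six microstates collapse to two computational states
```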
One issue with this framing is that we seem to have drawn an arbitrary box around the neurons in the brain, and have been considering it "the system of interest", but is there any reason we couldn't add in arbitrary sets of particles outside of the brain? What about the brain plus the calculator? Couldn't we draw the box around a single neuron?
A single neuron in the middle of the brain is also a dynamical system that takes input. Some distinct sets of inputs will lead to very similar future states, and so we can also think of abstracted computational states arising in the single neuron.
But the brain as a whole is composed of these single neurons interacting, providing inputs and outputs to each other. So what is the relationship between the computational states of the single neurons and those of the combined network?
Every dynamical system can be thought of as computing (though many might be performing trivial or uninteresting computation), and generally every computing system can be thought of as comprising many interacting dynamical systems. In turn, each of these dynamical systems can also be computing.
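One way to make "computing systems are composed of interacting dynamical systems" concrete is to model each open system as a Moore machine and note that wiring machines together yields another Moore machine. The sketch below is my own toy rendering of this idea; the two "neurons" (a leaky accumulator feeding a thresholder) are invented for illustration, and the convention that the second machine sees the first machine's updated output is just one choice among several.

```haskell
-- An open dynamical system as a Moore machine: internal state s,
-- an update driven by input a, and an output read off the state.
data Moore s a b = Moore
  { update :: s -> a -> s
  , output :: s -> b
  }

-- Series composition: the output of the first machine drives the second.
-- The composite is itself a Moore machine whose state is the pair of
-- component states -- "neurons" compose into a "network".
compose :: Moore s a b -> Moore t b c -> Moore (s, t) a c
compose m n = Moore
  { update = \(s, t) a ->
      let s' = update m s a
      in  (s', update n t (output m s'))
  , output = \(_, t) -> output n t
  }

-- Two hypothetical "neurons": a leaky accumulator and a thresholder.
accumulator :: Moore Double Double Double
accumulator = Moore (\s a -> 0.9 * s + a) id

thresholder :: Moore Bool Double Bool
thresholder = Moore (\_ a -> a > 1.0) id

network :: Moore (Double, Bool) Double Bool
network = compose accumulator thresholder

-- Drive a machine with an input stream and collect its outputs.
run :: Moore s a b -> s -> [a] -> [b]
run m s0 as = map (output m) (tail (scanl (update m) s0 as))

main :: IO ()
main = print (run network (0, False) [0.5, 0.5, 0.5, 0.5])
-- prints [False,False,True,True]: the network fires once enough input accumulates
```

The composite network has its own state space (the product of the components') and its own dynamics, so the lumping-by-futures story from earlier can be retold at either level.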
Ultimately I imagine a formal framework where every possible system of interest can be considered as a computing system and put in relation with other systems of interest. Each system has its own (open) dynamics, and the computational states and structure associated with those. These entities overlap and stand in relationships to each other. Perhaps in a framework like this, "natural" computing entities (i.e. non-trivial systems of interest) will have specific signatures. I can also imagine concepts like agents, internal models, and tool use being cashed out in this kind of framing.
Here's a rough sketch of tool use. A brain (i.e. a dynamical system) orients itself so that the photons from a star (i.e. another dynamical system) hit its retina. Though the state of the star is changing quite dramatically, the retinal states arising from those photons constrain the brain states in very similar ways (i.e. the brain cannot distinguish between different dynamical states of the star). Now the brain puts a telescope (i.e. yet another dynamical system) between itself and the star. The photons from the star hit the telescope. Distinct star states now cause distinct dynamical states in the telescope, and those distinct telescope states cause distinct retinal states, which in turn constrain brain states in distinct ways. Now the brain can see differences in the star that it couldn't before. How strange that by putting something in between us and a system, we can come to know that system better.
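A toy version of this in code, with every number invented: the retina quantizes incoming flux, so without magnification all star states collapse to a single retinal state, while interposing the telescope makes the composite map distinguish them again.

```haskell
import Data.List (nub)

-- Photon flux from three hypothetical star states (all numbers made up).
starStates :: [Int]
starStates = [1, 2, 3]

-- The retina quantizes flux coarsely: below its resolution,
-- distinct fluxes land in the same retinal state.
retina :: Int -> Int
retina flux = flux `div` 100

-- The telescope is just another dynamical system in the chain;
-- here it simply amplifies the flux before it reaches the eye.
telescope :: Int -> Int
telescope flux = flux * 100

-- How many distinct retinal (hence brain) states does each channel induce?
main :: IO ()
main = do
  print (nub (map retina starStates))               -- [0]: all states collapsed
  print (nub (map (retina . telescope) starStates)) -- [1,2,3]: states distinguished
```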