There is a fairly-obvious gap in the above story, in that it lacks any notion of energy (or entropy, temperature, etc.).
I think this is as far away from truth as it can possibly be.
Also, conservation of energy is a consequence of pretty simple and nice properties of the environment, not arbitrary. The reason it's hard to maintain in physics simulations is that accumulating errors in numerical approximations violate those properties (error accumulation is obviously not symmetric in time).
I think you are wrong in a purely practical sense. We don't care about most energy. Oceans have a lot of energy in them, but we don't care, because 99%+ of it is unavailable: it is in a high-entropy state. We care about exploiting free energy, which is present only in low-entropy, high-information states. And, as expected, we learn to notice such states very quickly because they are very cheap sources of uncertainty reduction in a world model.
I don't mean that rationalists deny thermodynamics, just that it doesn't take a sufficiently central role, in particular when reasoning about larger-scale phenomena than physics or chemistry, where it's hard to precisely quantify the energies, or especially when considering mathematical models of agency (as mentioned, rationalists usually use argmax + Bayes).
I think this is as far away from truth as it can possibly be.
This post takes a funky left turn at the end, making it a lesson that forming accurate beliefs requires observations. That's a strange conclusion because that also applies to systems where thermodynamics doesn't hold.
Also, conservation of energy is a consequence of pretty simple and nice properties of the environment, not arbitrary.
Conservation of energy doesn't just follow from time symmetry (it would be pretty nice if it did). It follows from time symmetry combined with either Lagrangian/Hamiltonian mechanics or quantum mechanics. There are several problems here:
I think you are wrong in a purely practical sense. We don't care about most energy. Oceans have a lot of energy in them, but we don't care, because 99%+ of it is unavailable: it is in a high-entropy state. We care about exploiting free energy, which is present only in low-entropy, high-information states. And, as expected, we learn to notice such states very quickly because they are very cheap sources of uncertainty reduction in a world model.
It's true that free energy is especially important, but I'm unconvinced rationalists jump as strongly onto it as you say. Free energy is pretty cheap, so between your power outlet and your snack cabinet you are pretty unconstrained by it.
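For concreteness (my gloss on the exchange above, using the standard thermodynamic quantity): at temperature $T$, the work extractable from a system is bounded by its Helmholtz free energy rather than its total internal energy,

$$F = U - TS,$$

which matches the earlier comment's point that the ocean's huge $U$ is mostly unavailable because it sits in a high-entropy state.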
I think this ties into modeling invariant abstractions of objects, and coming up with models that generalize to probable future states.
I think this is partly addressed in animals (including humans) by having a fraction of the brain devoted to predicting future sensations and forming a world model out of received sensations, but also by having an action model that attempts to influence the world and self-models its own actions and their effects. So for things like the cubes, we learn a model of their motions not just from watching video of them, but by stacking them up and knocking them over. We play and explore, and these manipulations allow us to test hypotheses.
I expect that having a portion of a model's training be interactive exploration of a simulation would help close this gap.
The thing is, your actions can lead to additional scratches to the cubes, so actions aren't causally separated from scratches. And the scratches will be visible on future states too, so if your model attempts to predict future states, it will attempt to predict the scratches.
I suspect ultimately one needs to have an explicit bias in favor of modelling large things accurately. Actions can help nail down the size comparisons, but they don't directly force you to focus on the larger things.
While I agree that physical laws like conservation of energy are extremely arbitrary from a computational standpoint, I do think that once we try to exhaust all the why-questions of why our universe has the physical laws and constants that it does, a lot of the answer is "it's arbitrary, and we just happen to live in this universe instead of a different one."
Also, about this point in particular:
It captures a lot of unimportant information, which makes the models more unwieldy. Really, information is a cost: the point of a map is not to faithfully reflect the territory, because that would make it really expensive to read the map. Rather, the point of a map is to give the simplest way of thinking about the most important features of the territory. For instance, literal maps often use flat colors (low information!) to represent different kinds of terrain (important factors!).
Yeah, this is probably one of the biggest differences between idealized notions of computation/intelligence, like AIXI (at the weak end) and the Universal Hypercomputer model from the paper of the same name (at the strong end), and real agents, and it comes down to computation costs.
For idealized agents, they can often treat their maps as equivalent to a given territory, at least with full simulation/computation, while real agents must have differences between the map and the territory they're trying to model, so the saying "the map is not the territory" is true for us.
I think a lot of this post can be boiled down to "Computationalism does not scale down well, and thus it's not generally useful to try to capture all the information that is reasonably non-independent of other information, even if it's philosophically correct to be a computationalist."
And yeah, this is extremely unsurprising: Even theoretically correct models/philosophies can often be intractable to actually implement, so you have to look for approximations or use a different theory, even if not philosophically/mathematically justified in the limit.
And yeah, trying to have a prior over all conceivable computations is ridiculously intractable, especially if we want the computational model to be very expressive/general, like the two models below (abstracts quoted), primarily because they can express almost everything in theory (ignore their physical plausibility for now, because this isn't intended to show we can actually build these):
https://arxiv.org/abs/1806.08747
This paper describes a type of infinitary computer (a hypercomputer) capable of computing truth in initial levels of the set theoretic universe, V. The proper class of such hypercomputers is called a universal hypercomputer. There are two basic variants of hypercomputer: a serial hypercomputer and a parallel hypercomputer. The set of computable functions of the two variants is identical but the parallel hypercomputer is in general faster than a serial hypercomputer (as measured by an ordinal complexity measure). Insights into set theory using information theory and a universal hypercomputer are possible, and it is argued that the Generalised Continuum Hypothesis can be regarded as an information-theoretic principle, which follows from an information minimisation principle.
This paper surveys a wide range of proposed hypermachines, examining the resources that they require and the capabilities that they possess.
https://arxiv.org/abs/math/0209332
Due to common misconceptions about the Church-Turing thesis, it has been widely assumed that the Turing machine provides an upper bound on what is computable. This is not so. The new field of hypercomputation studies models of computation that can compute more than the Turing machine and addresses their implications. In this report, I survey much of the work that has been done on hypercomputation, explaining how such non-classical models fit into the classical theory of computation and comparing their relative powers. I also examine the physical requirements for such machines to be constructible and the kinds of hypercomputation that may be possible within the universe. Finally, I show how the possibility of hypercomputation weakens the impact of Godel's Incompleteness Theorem and Chaitin's discovery of 'randomness' within arithmetic.
So yes, it is ridiculously intractable to focus on the class of all computational experiences ever, as well as their non-independent information.
So my guess is you're looking for a tractable model of the agent-like structure problem while still being very general, but willing to put restrictions on its generality.
Is that right?
So my guess is you're looking for a tractable model of the agent-like structure problem while still being very general, but willing to put restrictions on its generality.
Is that right?
I think everyone is doing that, my point is more about what the appropriate notion of approximation is. Most people think the appropriate notion of approximation is something like KL-divergence, and I've discovered that to be false and that information-based definitions of "approximation" don't work.
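For reference (a standard definition, added here for context, using log base 2), the notion of approximation being referred to is

$$D_{\mathrm{KL}}(P \,\|\, Q) = \sum_x P(x)\,\log_2\frac{P(x)}{Q(x)},$$

i.e. the expected number of extra bits incurred when the map $Q$ is used in place of the territory's distribution $P$.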
The agent-like structure problem is a question about how agents in the world are structured. I think rationalists generally have an intuition that the answer looks something like the following:
There is a fairly-obvious gap in the above story, in that it lacks any notion of energy (or entropy, temperature, etc.). I think rationalists mostly feel comfortable with that because:
I've come to think of this as "the computationalist worldview" because functional input/output relationships are the thing that is described very well with computations, whereas laws like conservation of energy are extremely arbitrary from a computationalist point of view. (This should be obvious if you've ever tried writing a simulation of physics, as naive implementations often lead to energy exploding.)
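For a concrete illustration of that energy-explosion failure mode (a minimal sketch of my own, not from the post): a frictionless harmonic oscillator stepped with naive explicit Euler gains energy on every step, while a symplectic variant keeps it bounded.

```python
# Minimal sketch (my illustration, not the post's): energy drift under a
# naive integrator. Unit mass, unit spring constant, so E = (x^2 + v^2) / 2.

def explicit_euler(x, v, dt, steps):
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x  # naive update: uses only old values
    return x, v

def symplectic_euler(x, v, dt, steps):
    for _ in range(steps):
        v -= dt * x  # update velocity first...
        x += dt * v  # ...then position with the *new* velocity
    return x, v

def energy(x, v):
    return 0.5 * (x * x + v * v)

x0, v0 = 1.0, 0.0
print(energy(x0, v0))                                     # 0.5
print(energy(*explicit_euler(x0, v0, 0.01, 100_000)))     # ~1e4: energy exploded
print(energy(*symplectic_euler(x0, v0, 0.01, 100_000)))   # ~0.5: stays bounded
```

Nothing in the naive update rule "knows about" energy; conservation only falls out if the update respects the right structure, which is the sense in which the law looks arbitrary from a purely computational standpoint.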
Radical computationalism is killed by information overload
Under the most radical forms of computationalism, the "ideal" prior is something that can range over all conceivable computations. The traditional answer to this is Solomonoff induction, but it is not computationally tractable because it has to process all observed information in every conceivable way.
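For reference (the standard form, not spelled out in the post), the Solomonoff prior weights an observation string $x$ by every program that would produce it on a universal prefix machine $U$:

$$M(x) = \sum_{p\,:\,U(p)\text{ outputs a string beginning with }x} 2^{-|p|},$$

so prediction means summing over all conceivable programs, which is exactly what makes it uncomputable in practice.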
Recently, with the success of deep learning, the bitter lesson, the Bayesian interpretations of deep double descent, and all that, I think computationalists have switched to viewing the ideal prior as something like a huge deep neural network, which learns representations of the world and functional relationships that can be used by some sort of decision-making process.
Briefly, the issue with these sorts of models is that they work by trying to capture all the information that is reasonably non-independent of other information (for instance, the information in a picture that is relevant for predicting information in future pictures). From a computationalist point of view, that may seem reasonable since this is the information that the functional relationships are about, but outside of computationalism we end up facing two problems:
To some extent, human-provided priors (e.g. labels) can reduce these problems, but that doesn't seem scalable, and really humans also sometimes struggle with these problems too. Plus, philosophically, this would kind of abandon radical computationalism.
"Energy"-orientation solves information overload
I'm not sure to what extent we merely need to focus on literal energy versus also on various metaphorical kinds of energy like "vitality", but let me set up an example of a case where we can just consider literal energy:
Suppose you have a bunch of physical cubes whose dynamics you want to model. Realistically, you just want the rigid-body dynamics of the cubes. But if your models are supposed to capture information, then they have to model all sorts of weird stuff like scratches to the cubes, complicated lighting scenarios, etc. Arguably, more of the information about (videos of) the cubes may be in these things than in the rigid-body dynamics (which can be described using only a handful of numbers).
The standard approach is to say that the rigid-body dynamics constitute a low-dimensional component that accounts for the biggest chunk of the dynamics. But anecdotally this seems very fiddly and basically self-contradictory (you're trying to simultaneously maximize and minimize information, admittedly in different parts of the model, but still). The real problem is that scratches and lighting and so on are "small" in absolute physical terms, even if they carry a lot of information. E.g. the mass displaced in a scratch is orders of magnitude smaller than the mass of a cube, and the energy in weird light phenomena is smaller than the energy of the cubes (at least if we count mass-energy).
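To put rough numbers on that scale gap (my own illustrative estimate, assuming a 5 cm plastic cube and a scratch displacing about 1 mm³ of material):

$$\frac{m_{\text{scratch}}}{m_{\text{cube}}} \approx \frac{10^{-3}\,\text{cm}^3 \times 1\,\text{g/cm}^3}{125\,\text{cm}^3 \times 1\,\text{g/cm}^3} \approx 10^{-5},$$

about five orders of magnitude, even though the scratch may carry a comparable amount of visual information.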
So probably we want a representation that maximizes the correlation with the energy of the system, at least more so than we want a representation that maximizes the mutual information with observations of the system.
... kinda
The issue is that we can't just tell a neural network to model the energy in a bunch of pictures, because it doesn't have access to the ground truth. Maybe by using the correct loss function, we could fix it, but I'm not sure about that, and at the very least it is unproven so far.
I think another possibility is that there's something fundamentally wrong with this framing:
As humans, we have a natural concept of e.g. force and energy because we can use our muscles to apply a force, and we take in energy through food. That is, our input/output channels are not simply about information, and instead they also cover energetic dynamics.
This can, technically speaking, be modelled with the computationalist approach. You can say the agent has uncertainty over the size of the effects of its actions, and as it learns to model these effect sizes, it gets information about energy. But actually formalizing this would require quite complex derivations with a recursive structure based on the value of information, so it's unclear what would happen, and the computationalist approach really isn't mathematically oriented towards making it easy.
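As a minimal sketch of the "uncertainty over the size of the effects of its actions" idea (my own toy model, assuming Gaussian beliefs and known observation noise, nothing the post commits to): the agent keeps a belief over how strongly one of its motor commands moves the world and updates it from observed displacements.

```python
# Toy model under the assumptions stated above: conjugate Gaussian update on
# the unknown "push strength" of a single action, observed with known noise.

def update_effect_size(prior_mean, prior_var, observed_effect, obs_var):
    gain = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + gain * (observed_effect - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

belief = (1.0, 4.0)  # broad prior over how far one push moves the cube
for displacement in (0.42, 0.55, 0.48):
    belief = update_effect_size(*belief, displacement, obs_var=0.05)
print(belief)  # mean near ~0.5, variance sharply reduced
```

The hard part the post points at, the recursive value-of-information structure that decides which actions are worth taking just to learn these effect sizes, is exactly what this sketch leaves out.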