The stronger version is: EUT (expected utility theory) is inadequate as a theory of agents (for the same reasons, and in the same ways) not only during an agent's "growing up" period, but all the time. I think the stronger version is the case, for several reasons, for example:
- agents are continuously exposed to novel "ontological entities" (e.g. ones they haven't yet formed evaluative stances with respect to), not just while "growing up"
- there is a (generative) logic that governs how an agent "grows up" (develops into a "proper agent"), and that same logic continues to apply throughout an agent's lifespan
I think this is a very important point; my post on value systematization is a (very early) attempt to gesture towards what an agent "growing up" might look like.
Yeah neat, I haven't yet gotten to reading it, but it's definitely on my list. It seems (and some folks have suggested to me) that it's quite related to the sort of thing I'm discussing in the value change problem too.
There are some similarities, although I'm focusing on AI values, not human values. Also, it seems like the value change stuff is thinking about humanity at the level of an overall society, whereas I'm thinking about value systematization mostly at the level of an individual AI agent. (Of course, widespread deployment of an agent could have a significant effect on its values, if it continues to be updated. But I'm mainly focusing on the internal factors.)
There is another sense in which I would not want to say that there is any particular hierarchy between natural/unnatural/rational constraints.
I think there's a lot to unpack here. I'm going to give it a preliminary go, anticipating that it's likely to be a bit all over the place. The main thread I want to pull is what it means to impose a particular hierarchy between the constraints, and then to see how this leads to many possible hierarchies, in such a way that it feels like no particular hierarchy is privileged.
From a "natural" point of view, which privileges physical time, individuation is something that must be explained - a puzzle which is at the heart of the mystery of the origin of life. From this point of view, "rationality" or "coherence" is also something that must be explained (which is what Richard Ngo is gesturing out in his comment / post).
From a "rational" point of view, we can posit abstract criteria which we want our model of agency to fulfil. For instance Logical Induction (Garrabrant et al. 2016), takes a formalisation of the following desideratum (named "Approximate Inexploitability", or "The Logical Induction Criterion"): "It should not be possible to run a Dutch book against a good reasoner in practice." (ibid, p.8, p.14), and then constructs an agent entirely within logic from this. Something like "rationality" or "coherence" is assumed (for well argued reasons), and the structure of agency is deduced from there. This kind of move is also what underpins selection theorems. In my view, individuation also needs to be explained here, but it's often simply assumed (much like it is in most of theoretical biology).
The "unnatural" point of view is much more mysterious to us. When I use the term, I want to suggest that individuation can be assumed, but physical time becomes something that must be explained. This is a puzzle which is touched on in esoteric areas of physics (e.g. "A smooth exit from eternal inflation?" - Hawking and Hertog 2018), and consciousness science (e.g. "A measure for intrinsic information" - Barbosa et al. 2020), and discussed in "religious" or "spiritual" contexts, but in reality very poorly understood. I think you gesture at a really interesting perspective on this by relating it to "thinghood" in active inference - but to me this misses a lot of what makes this truly weird - the reasons I decided to label it "unnatural" in the first place.
It's deeply confusing to me at this stage how the "unnatural" point of view relates to the "rational" one; I'd be curious to hear any thoughts on this, however speculative. I do, however, think that there is a sense in which none of the three hierarchies I'm gesturing at in this comment are "the real thing" - they feel more like prisms through which we can diffract the truth in an attempt to break it down into manageable components.
> bounded, embedded, enactive, nested.
I know about boundedness and embeddedness, and I guess nestedness is about hierarchical agents.
But what's enactive?
Roughly: it refers to/emphasizes the dynamic interaction between agent and environment, and understands behavior/cognition/agency/... as emerging through that interaction/at that interface (rather than, e.g., trying to understand them as an internal property of the agent only).
Meta: