I disagree that there is an MPS or an LPS, that preferences are consistent over time, and that actual agent actions maximize over reachable-state preferences. Even fairly simple real agents are much messier than that, and meta-agents are a snarl of immense complexity beyond that.
It may be a decent first cut at simplifying things enough to start exploring the ideas and making (initially poor) predictions about how agents might behave, but it is not suitable as a shared foundation to build everything else on.
In what way do you disagree with MPS/LPS? In that preferences can be cyclic, in that it is impossible to rank states against each other, or something else?
Completely agree that preferences are not consistent over time, but I'm not sure about the relevance of that here.
Agent actions definitely do not maximize over reachable-state preferences. My only point was that agents make some attempt to improve their state. If you disagree with that, what would be a counterexample? Totally agree with your point that it can get very messy.
You could get your framework by adapting existing frameworks to fit your meta-agent utility function. Examples:
I think in the end you would get stuck on the unsolved problem of balancing the needs of individuals and the collective.
Correct me if I'm wrong, but those are all ethical frameworks rather than meta-ethical frameworks? My post was an attempt to create a framework within which to discuss those, not an alternative to them.
Let’s start with some definitions to make sure that we are all on the same page. I have no idea what the formal definitions are in this space, but hopefully these will be enough for mutual understanding.
Agent: Anything that can (seemingly) act to affect the universe around it. Questions of determinism/free will are disregarded here; they shouldn’t matter in this context.
Meta-Agent: An agent that is self-aware. In other words: an agent that recognizes its own agent-hood.
State: A configuration of the universe at a given time.
Most Preferred State (MPS): A state over which the agent would not choose (rank higher) any other state.
Least Preferred State (LPS): A state over which the agent would choose (rank higher) every other state.
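The definitions above can be sketched formally. Here is a minimal model, assuming a finite set of states and a total preference ordering expressible as numeric ranks (the post makes neither assumption explicit, and all names and states below are illustrative):

```python
# Minimal sketch of the Agent/MPS/LPS definitions, assuming a finite
# state set and a total preference order given by numeric ranks.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    # ranking[state] = preference score; higher means more preferred.
    ranking: dict[str, int] = field(default_factory=dict)

    def mps(self) -> str:
        """Most Preferred State: no other state is ranked higher."""
        return max(self.ranking, key=self.ranking.get)

    def lps(self) -> str:
        """Least Preferred State: every other state is ranked higher."""
        return min(self.ranking, key=self.ranking.get)


alice = Agent("alice", {"war": 0, "truce": 1, "peace": 2})
print(alice.mps())  # peace
print(alice.lps())  # war
```

Under these assumptions the MPS and LPS always exist and are unique; dropping totality or finiteness (as the objection above suggests real agents do) is exactly what breaks that guarantee.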
My claims:
OK, this is great and all, but why should we care?
I’ve never seen anyone lay out this sort of framework. Someone may have done it, but if so, it certainly is not well known. Consequently, I find that when you view ethical/moral questions through this framework, a lot of those questions become either trivial to answer or meaningless. For example, let’s take a look at the definition of Moral Universalism per Wikipedia:
What is hopefully clear here is that this moral framework only makes sense if you consider a set of meta-agents with very similar state preferences. It cannot apply broadly, since the set of all meta-agents includes meta-agents whose state orderings are the exact reverse of one another (and other such incompatibilities).
Anyways, this is my small attempt at putting this out into the world and opening it up to discussion. Thanks for reading!
Regardless of whether you agree that it is more useful, everyone should be very careful to clarify which set of meta-agents they are discussing (not everyone’s assumptions will be the same).