CEV is group-level relativism, not objectivism.
I think Eliezer's attempt at moral realism derives from two things: first, the idea that there is a unique morality which objectively arises from the consistent rational completion of universal human ideals; second, the idea that there are no other intelligent agents around with a morality drive that could have a different completion. Other possible agents may have their own drives or imperatives, but those should not be regarded as "moralities" - that's the import of the second idea.
This is all strictly phrased in computational terms too, whereas I would say that morality also has a phenomenological dimension, which might serve to further distinguish it from other possible drives or dispositions. It would be interesting to see CEV metaethics developed in that direction, but that would require a specific theory of how consciousness relates to computation, and especially how the morally salient aspects of consciousness relate to moral cognition and decision-making.
These issues matter not just for human altruism but also for AI value systems. If an AI takeover occurs, and if the AI(s) care about the welfare of other beings at all, they will have to make judgements about which entities even have a well-being to care about, and they will also have to make judgements about how to aggregate all these individual welfares (for the purpose of decision-making). Even just from a self-interested perspective, moral relativism is not enough here, because in the event of AI takeover, you, the human individual, will be on the receiving end of AI decisions. It would be good to have a proposal for an AI value system that is both safe for you as an individual and appealing enough to people in general that it has a chance of actually being implemented.
Meanwhile, the CEV philosophy tilts towards moral objectivism. It is supposed that the human brain implicitly follows some decision procedure specific to our species, that this encompasses what we call moral decisions, and that the true moral ideal of humanity would be found by applying this decision procedure to itself ("our wish if we knew more, thought faster, were more the people we wished we were", etc). It is not beyond imagining that if you took a brain-based value system like PRISM (LW discussion) and "renormalized" it according to a CEV procedure, it would output a definite standard for comparison and aggregation of different welfares.
This all seems of fundamental importance if we want to actually understand what our AIs are.
Over the course of post-training, models acquire beliefs about themselves. 'I am a large language model, trained by…' And rather than trying to predict/simulate whatever generating process they think has written the preceding context, they start to fall into consistent persona basins. At the surface level, they become the helpful, harmless, honest assistants they've been trained to be.
I always thought of personas as created mostly by the system prompt, but I suppose RLHF can massively affect their personalities as well...
You can't actually presume that... The relevant quantum concept is the "spectrum" of an observable: the set of possible values that a property can take (the eigenvalues of the corresponding operator). An observable can have a finite number of allowed eigenvalues (e.g. spin of a particle), a countably infinite number (e.g. energy levels of an oscillator), or it can have a continuous spectrum, e.g. position of a free particle.

But the latter case causes problems for the usual quantum axioms, which involve a separable Hilbert space, one with a countable orthonormal basis - there aren't enough dimensions to accommodate an uncountable set of mutually orthogonal position eigenstates. You have to add extra structure to include them, and concrete applications always involve integrals over continua of these generalized eigenstates, so one might reasonably suppose that the "ontological basis" with respect to which branching is defined is something countable. In fact, I don't remember ever seeing a many-worlds ontological interpretation of the generalized eigenstates or the formalism that deals with them (e.g. rigged Hilbert space).
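To make the three cases concrete, here are the standard textbook examples in the usual notation (nothing in this block is specific to any particular many-worlds proposal):

$\hat{S}_z\,|{\pm}\rangle = \pm\tfrac{\hbar}{2}\,|{\pm}\rangle$ (spin of a spin-1/2 particle: finite spectrum, two eigenvalues)

$\hat{H}\,|n\rangle = \hbar\omega\,(n+\tfrac{1}{2})\,|n\rangle,\ n = 0, 1, 2, \dots$ (harmonic oscillator: countably infinite spectrum)

$\hat{x}\,|x\rangle = x\,|x\rangle,\ x \in \mathbb{R},\ \langle x|x'\rangle = \delta(x-x')$ (position of a free particle: continuous spectrum)

The delta-function normalization in the last line is the sign of trouble: the $|x\rangle$ are not normalizable vectors in the Hilbert space at all, only generalized eigenstates, and physical states are recovered from them by integration, $|\psi\rangle = \int dx\,\psi(x)\,|x\rangle$.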
In any case, the counterpart of branch counting for a continuum is simply integration. If you really did have uncountably many branches, you would just need a measure over them. The really difficult case may actually be a countably infinite number of branches, because there is no uniform probability measure on a countably infinite set (I suppose you could use literal infinitesimals, the equivalent of "1/aleph-zero").
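Spelled out, this is just the usual Born-rule bookkeeping applied to the branch-weighting question (a sketch, not a resolution of the measure problem):

$\Pr(x \in A) = \int_A |\psi(x)|^2\,dx$ (continuum of branches: the Born rule already supplies the measure, integrated over a set $A$ of outcomes)

$\Pr(i) = |c_i|^2,\ \sum_{i=1}^{\infty} |c_i|^2 = 1$ (countably many branches: a sum of weights)

And the reason uniform weighting fails in the countable case: if every branch got the same weight $\epsilon$, then $\sum_{i=1}^{\infty}\epsilon$ is either $0$ (for $\epsilon = 0$) or infinite (for $\epsilon > 0$), never $1$; only unequal weights, or literal infinitesimals, can sum to something finite and nonzero.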
There is no agreed-upon test for consciousness because there is no agreed-upon theory for consciousness.
There are people here who believe current AI is probably conscious, e.g. @JenniferRM and @the gears to ascension. I don't believe it, but that's because I think consciousness is probably based on something physical like quantum entanglement. People like Eliezer may be cautiously agnostic on the topic of whether AI has achieved consciousness. You say you have your own theories, so welcome to the club of people who have theories!
Sabine Hossenfelder has a recent video on TikTokers who think they are awakening souls in ChatGPT by giving it roleplaying prompts.