Sorry for the late response.
If people change their own preferences through repetition and practice, they usually already have a preference to do so. So it can be in their own best interests, relative to preferences they already have.
I could have a preference to change your preferences, and that could matter in the same way, but I don't think I should say it's in your best interests (at least not for the thought experiment in this post). It could be in my best interests, or serve whatever other goal I have (possibly an altruistic one).
In my view, identity preservation is vague and degreed, a matter of how much you inherit from your past "self", specifically how much of your memories and other dispositions.
Someone could fail to report a unique precise prior (and one that's consistent with their other beliefs and priors across contexts) for any of the following reasons, which seem worth distinguishing:
I'd be inclined to treat all three cases like imprecise probabilities, e.g. I wouldn't permanently commit to a prior I wrote down to the exclusion of all other priors over the same events/possibilities.
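To illustrate what I mean (a toy sketch with made-up numbers): rather than committing to a single prior, you can carry a set of candidate priors and only treat a decision as robustly good if it looks good under all of them.

```python
# Toy sketch of imprecise probabilities: keep a set of candidate priors for an
# event E instead of a single number, and only call a bet robustly good if it
# has positive expected value under every prior in the set.
priors = [0.3, 0.4, 0.5, 0.6, 0.7]        # hypothetical candidate values for P(E)
payoff_if_E, payoff_if_not_E = 1.0, -0.5  # hypothetical bet payoffs

expected_values = [p * payoff_if_E + (1 - p) * payoff_if_not_E for p in priors]
print(expected_values)                        # ranges from -0.05 to 0.55
print(all(ev > 0 for ev in expected_values))  # False: not robustly good across the set
```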
Harsanyi's theorem has also been generalized in various ways without the rationality axioms; see McCarthy et al. (2020), https://doi.org/10.1016/j.jmateco.2020.01.001. But it still assumes something similar to, but weaker than, the independence axiom, which in my view is hard to motivate on its own.
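For reference (my own rough summary, not taken from the linked paper): the original theorem's conclusion, given expected utility for each individual and for the social ranking plus a Pareto condition, is that social utility is an affine combination of the individual utilities,

```latex
W(x) = \sum_i w_i \, U_i(x) + c,
```

with nonnegative weights under a suitable Pareto condition; the generalizations relax the rationality axioms but, as noted, keep something independence-like.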
Why do you believe AMD and Google make better hardware than Nvidia?
If utility is bounded below, you can just shift it up to make it positive. But the geometric expected utility order is not preserved under such shifts.
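Here's a toy numerical check (my own made-up numbers) of the point that shifting utilities can flip the geometric expected utility ranking:

```python
import math

def geometric_eu(lottery):
    """Geometric expected utility: product of u**p over outcomes, i.e. exp(E[log u])."""
    return math.exp(sum(p * math.log(u) for p, u in lottery))

A = [(0.5, 1.0), (0.5, 9.0)]  # 50/50 between utilities 1 and 9
B = [(1.0, 3.5)]              # utility 3.5 for sure

print(geometric_eu(A), geometric_eu(B))  # 3.0 vs 3.5, so B ranks above A

# Shift all utilities up by 10 and the ranking flips:
shift = lambda lottery, c: [(p, u + c) for p, u in lottery]
print(geometric_eu(shift(A, 10)), geometric_eu(shift(B, 10)))  # ~14.46 vs 13.5, so A now ranks above B
```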
Violating the Continuity Axiom is bad because it allows you to be money pumped.
Violations of continuity aren't really vulnerable to proper/standard money pumps. The author calls it "arbitrarily close to pure exploitation", but that's not the same as pure exploitation. The argument is only really compelling if you already assume a weaker version of continuity in the first place, and you can just deny that.
I think transitivity (plus independence of irrelevant alternatives) and countable independence (or the countable sure-thing principle) are enough to avoid money pumps, and I expect they give a kind of expected utility maximization form (combining McCarthy et al., 2019 and Russell & Isaacs, 2021).
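For concreteness, here's a minimal sketch (toy numbers) of the standard money pump against cyclic, i.e. intransitive, preferences:

```python
# An agent with cyclic strict preferences A > B > C > A will pay a small fee
# each time it trades up to an option it strictly prefers, cycling forever.
fee = 0.01
holding, wealth = "C", 0.0
for better in ["B", "A", "C"] * 3:  # prefers B to C, A to B, then C to A, ...
    holding, wealth = better, wealth - fee
print(holding, round(wealth, 2))    # back at C, but 0.09 poorer after three loops
```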
Against the requirement of completeness (or the specific money pump argument for it by Gustafsson in your link), see Thornley here.
To be clear, countable independence implies your utilities are "bounded" in a sense, but possibly lexicographic. See Russell & Isaacs, 2021.
Even if we instead assume that by ‘unconditional’, people mean something like ‘resilient to most conditions that might come up for a pair of humans’, my impression is that this is still too rare to warrant being the main point on the love-conditionality scale that we recognize.
I wouldn't be surprised if this isn't that rare among parents toward their children. Barring their children doing horrible things (which is rare), I'd guess most parents would love their children unconditionally, or at least claim to. Most would tolerate bad things, but not horrible ones. And many would still love children who do horrible things. Partly this could be out of a sense of responsibility as a parent or attachment to the past.
I suspect such unconditional love between romantic partners and friends is rarer, though, and a concept of mid-conditional love like yours could be more useful there.
Maybe I’m out of the loop regarding the great loves going on around me, but my guess is that love is extremely rarely unconditional. Or at least if it is, then it is either very broadly applied or somewhat confused or strange: if you love me unconditionally, presumably you love everything else as well, since it is only conditions that separate me from the worms.
I would think totally unconditional love for a specific individual is allowed to be conditional on facts necessary to preserve their personal identity, which could be vague/fuzzy. If your partner asks whether you'd still love them if they were a worm, and you do love them totally unconditionally, the answer should be yes, assuming they really could be a worm, at least logically. This wouldn't require you to love all worms. But you could also deny the hypothesis if they couldn't be a worm even logically, i.e. if a worm can't inherit its identity from a human.
That being said, I'd also guess that love is very rarely totally unconditional in this way. I think very few would continue to love someone who tortures them and others they care about. I wouldn't be surprised if many people (>0.1%, maybe even >1% of people) would continue to love someone after that person turned into a worm, assuming they believed their partner's identity would be preserved.
It's conceivable that the ways characters/words are used in English and in Alienese correspond strongly enough that you could guess matching words much better than chance. But I'm not confident you'd get high accuracy.
Consider encryption. If you encrypted messages by always mapping a given character to the same character, e.g. 'd' always gets mapped to '6', then this can be broken with decent accuracy by comparing the character frequency statistics of your messages with the character frequency statistics of English text.
If you instead mapped whole words to strings, rather than character by character, you could use frequency statistics for whole words in English.
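A minimal character-level sketch of the kind of frequency matching I mean (the word-level version is analogous; the sample text and names here are just illustrative):

```python
from collections import Counter
import string

# An approximate ordering of English letters from most to least frequent.
ENGLISH_BY_FREQ = "etaoinshrdlcumwfgypbvkjxqz"

def guess_substitution_key(ciphertext):
    """Guess a monoalphabetic substitution key by matching ciphertext letter
    frequencies to English letter frequencies, rank for rank."""
    counts = Counter(ch for ch in ciphertext.lower() if ch in string.ascii_lowercase)
    cipher_by_freq = [ch for ch, _ in counts.most_common()]
    return {ch: ENGLISH_BY_FREQ[i] for i, ch in enumerate(cipher_by_freq)}

# With long, natural ciphertext, the most frequent letters are usually recovered;
# rarer letters need digram/word statistics to disambiguate.
# The sample below is a Caesar shift of an English sentence.
key = guess_substitution_key("wkh wuhdvxuh lv exulhg ehvlgh wkh rog rdn wuhh qhdu wkh ulyhu")
print(key.get('h'))  # 'e' -- the most frequent cipher letter is recovered even on this short sample
```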
Then, between languages, this mostly gets way harder, but you might be able to make some informed guesses, based on
An AI might use similar facts, and many more, about much more fine-grained and specific uses of words and their associations to make its guesses, but I'm not sure an LLM token predictor trained mostly just on the two languages in particular would do a good job.
EDIT: Unsupervised machine translation, as Steven Byrnes pointed out, seems to be on a better track.
Also, I would add that LLMs trained without perception of things other than text don't really understand language. The meanings of the words aren't grounded, and I imagine it could be possible to swap some in a way that would mostly preserve the associations (nearly isomorphic), but I’m not sure.
What do you mean by difference here? Increase in performance due to consciousness? Or differences in functions?
I'm not sure we could measure this difference. It seems very likely to me that consciousness evolved before, say, complex language and complex agency. But complex language and complex agency might not require consciousness, and they may capture all of the benefits that consciousness would otherwise provide, so consciousness wouldn't result in greater performance.
However, it could be that
Some other possibilities: