Loosely related to this, it would be nice to know if systems which reliably don't turn down 'free money' must necessarily have almost-magical levels of internal coordination or centralization. If the only things which can't (be tricked into) turn(ing) down free money when the next T seconds of trade offers are known are Matrioshka brains at most T light-seconds wide, does that tell us anything useful about the limits of that facet of dominance as a measure of agency?
I'm not convinced that the specifics of "why" someone might consider themselves a plural smeared across a multiverse are irrelevant. MWI and the dynamics of evolving amplitude are a straightforward implication of the foundational math of a highly predictive theory, whereas the different flavors of classical multiverse are a bit harder to justify as "likely to be real", and it's also harder to be confident about any of their implications.
If I do the electron-spin thing, I can be fairly confident of the future existence of a thing-which-claims-to-be-me experiencing both outcomes, as well as of my relative likelihood of "becoming" each one. But if I'm in a classical multiverse doing a coin flip, then perhaps my future experiences are contingent on whether the Boltzmann-brain-emulator running on the grand Kolmogorov-brute-forcing hypercomputer is biased against tails. (That's not to say I can use any of that to make a better prediction about the coin, but it does mean that upon seeing heads I can conclude approximately nothing about any "me"s running around that saw tails.)
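To pin down the "relative likelihood" bit with a minimal sketch (the amplitudes below are generic placeholders, not from any particular setup): for a pre-measurement spin state

$$|\psi\rangle = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle,\qquad |\alpha|^2 + |\beta|^2 = 1,$$

both post-measurement successors exist, and the Born weights $|\alpha|^2$ and $|\beta|^2$ are the relative likelihoods of "becoming" the one who saw up versus the one who saw down.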
If I push the classical uncertainty into the past by, say, shaking a box with the coin inside, sticking it in a storage locker, and waiting a year (or seeding a PRNG a year ago and consulting that), then even though the initial event might have branched nicely, right now that cluster of sufficiently-similar Everett branches is facing the same situation as in the original question, right? Assuming enough chaotic time has passed that the various branches from the original random event aren't using that randomness for the same reason.
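As a toy illustration of the "enough chaotic time has passed" assumption (the logistic map, its parameter, and the perturbation size are all stand-ins I'm making up, not a model of the actual box):

```python
# Two trajectories of the chaotic logistic map that start 1e-12 apart become
# effectively uncorrelated within a few dozen iterations; the map is just a
# stand-in for "shake a box and wait a year".

def logistic(x, r=3.9):
    return r * x * (1.0 - x)

x_a, x_b = 0.5, 0.5 + 1e-12
for step in range(1, 61):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |x_a - x_b| = {abs(x_a - x_b):.3e}")
```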
I understand from things like this that it doesn't take a lot of (classical) uncertainty or a lot of time for a system to become unpredictable at scale, but for me that pushes the question down to annoying concrete follow-ups like:
For me the only "obvious" takeaway from this re. quantum immortality is that you should be more willing to play quantum Russian roulette than classical Russian roulette. Beyond that, the topic seems like something where you could get insights by just Sitting Down and Doing The Math, but I'm not good enough at math to do the math.
...wait, you were just asking for an example of an agent being "incoherent but not dominated" in those two senses of being money-pumped? And this is an exercise meant to hint that such "incoherent" agents are always dominatable?
I continue to not see the problem, because the obvious examples don't work. If I have A as incomparable to B, that doesn't mean I turn down the trade of A for B plus a pile of cash (which I assume is what you're hinting at re. forgoing free money).
If one then says "ah but if I offer $9999 and you turn that down, then we have identified your secret equivalent utili-": no, this is just a bid/ask spread, and I'm pretty sure plenty of ink has been spilled justifying EUM agents using uncertainty to price inaction like this.
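To make the bid/ask framing concrete, here's a minimal sketch with made-up numbers (the valuation interval, class name, and offers are all invented): an agent whose valuation of an item is an interval rather than a point only trades when the deal beats the whole interval, so it still takes genuinely free money but declines offers inside the spread, and there's no single "secret equivalent utility" to extract.

```python
# Sketch: an agent that values a widget somewhere in [low, high] dollars.
# It sells only above `high` and buys only below `low`; offers inside the
# spread get inaction. Because the sell threshold exceeds the buy threshold,
# no buy/sell cycle ever pumps money out of it.

class SpreadAgent:
    def __init__(self, low: float, high: float):
        self.low, self.high = low, high  # valuation interval for the widget

    def accepts_sale(self, offer: float) -> bool:
        # Selling is an unambiguous improvement only if the price beats
        # every valuation in the interval.
        return offer > self.high

    def accepts_purchase(self, price: float) -> bool:
        return price < self.low

agent = SpreadAgent(low=9_000, high=10_000)
print(agent.accepts_sale(1_000_000))   # True: genuinely free money gets taken
print(agent.accepts_sale(9_999))       # False: inside the spread, so inaction
print(agent.accepts_purchase(9_999))   # False: also inside the spread
```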
What's an example of a non-EUM agent turning down free money which doesn't just reduce to comparing against an EUM with reckless preferences/a low price of uncertainty?
Hmm, I was going to reply with something like "money-pumps don't just say something about adversarial environments, they also say something about avoiding leaking resources" (e.g. if you have circular preferences between proximity to apples, bananas, and carrots, then if you encounter all three of them in a single room you might get trapped walking between them forever). But that's also begging your original question: we can always just update to enjoy leaking resources, transmuting a "leak" into an "expenditure".
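For concreteness, the fruit-room trap in a few lines (names and the step cap are arbitrary; the point is just that the walk has no stopping condition):

```python
# Circular preferences over proximity: apple < banana < carrot < apple.
# From any fruit there is always a strictly "better" one to walk to, so the
# agent never stops; the leaked resource is time/steps, no adversary needed.

PREFERS = {"apple": "banana", "banana": "carrot", "carrot": "apple"}

def wander(start: str, max_steps: int = 12):
    current, path = start, [start]
    for _ in range(max_steps):
        current = PREFERS[current]  # the fruit preferred to the current one
        path.append(current)
    return path  # ...and it would keep going without the artificial cap

print(" -> ".join(wander("apple")))
```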
Another frame here is that if you make/encounter an agent, and that agent self-modifies into/starts off as something which is happy to leak pretty fundamental resources like time and energy and material-under-control, then you're not as worried about it? It's certainly not competing as strongly for the same resources as you whenever it's "under the influence" of its circular preferences.
(I'm not EJT, but for what it's worth:)
I find the money-pumping arguments compelling not as normative arguments about what preferences are "allowed", but as engineering/security/survival arguments about what properties of preferences are necessary for them to be stable against an adversarial environment (which is distinct from what properties are sufficient for them to be stable, and possibly distinct from questions of self-modification).
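As a sketch of that adversarial-environment reading (item names, fee, and round count are all invented for illustration): an agent with cyclic preferences will pay a small fee for each swap it prefers, so a trader who just cycles it around the loop drains its resources one locally-agreeable trade at a time.

```python
# A money pump against cyclic preferences A < B < C < A: each individual
# swap-plus-fee looks like an improvement to the agent, but cycling the loop
# leaves it holding what it started with, minus the accumulated fees.

PREFERRED_OVER = {"A": "B", "B": "C", "C": "A"}  # held item -> item preferred to it
FEE = 1.0                                        # price the agent accepts per preferred swap

def run_pump(held: str, money: float, rounds: int) -> float:
    for _ in range(rounds):
        held, money = PREFERRED_OVER[held], money - FEE
    return money

print(run_pump(held="A", money=100.0, rounds=30))  # 70.0: resources leak to the adversary
```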
Here in Australia I can only buy the paperback/hardcover versions. Any chance you can convince your publisher/publishers to release the e-book here too?