eapi

"the order-dimension of its preference graph is not 1 / it passes up certain gains"

If the order dimension is 1, then the graph is a total order, right? Why the conceptual detour here?

eapi

Loosely related to this, it would be nice to know whether systems which reliably don't turn down 'free money' must necessarily have almost-magical levels of internal coordination or centralization. If the only things which can't (be tricked into) turn(ing) down free money, when the next T seconds of trade offers are known, are Matrioshka brains at most T light-seconds wide, does that tell us anything useful about the limits of that facet of dominance as a measure of agency?

eapi

I'm not convinced that the specifics of "why" someone might consider themselves plural, smeared across a multiverse, are irrelevant. MWI and the dynamics of evolving amplitude are a straightforward implication of the foundational math of a highly predictive theory, whereas the different flavors of classical multiverse are a bit harder to justify as "likely to be real", and it's also harder to be confident about any of their implications.

If I do the electron-spin thing, I can be fairly confident of the future existence of a thing-which-claims-to-be-me experiencing both outcomes, as well as of my relative likelihood of "becoming" each one. But if I'm in a classical multiverse doing a coin flip, then perhaps my future experiences are contingent on whether the Boltzmann-brain-emulator running on the grand Kolmogorov-brute-forcing hypercomputer is biased against tails. (That's not to say I can make use of any of that to make a better prediction about the coin, but it does mean that upon seeing heads I can conclude approximately nothing about any "me"s running around that saw tails.)
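
Concretely, the spin case is just the standard Born-rule weighting; a minimal sketch, assuming a simple two-outcome measurement:

```latex
% Minimal sketch of the spin case, assuming a two-outcome measurement.
% Pre-measurement state of the electron:
\[
  \lvert \psi \rangle = \alpha \lvert \uparrow \rangle + \beta \lvert \downarrow \rangle ,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
% After measurement and decoherence there is a branch with a "me" who saw up and a
% branch with a "me" who saw down, with relative weights
\[
  w_{\uparrow} = \lvert \alpha \rvert^{2}, \qquad w_{\downarrow} = \lvert \beta \rvert^{2} .
\]
```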

eapi

If I push the classical uncertainty into the past by, say, shaking a box with the coin inside, sticking it in a storage locker, and waiting a year (or seeding a PRNG a year ago and consulting that), then even though the initial event might have branched nicely, right now that cluster of sufficiently-similar Everett branches is facing the same situation as in the original question, right? Assuming enough chaotic time has passed that the various branches from the original random event aren't using that randomness for the same reason.

eapi

I understand from things like this that it doesn't take a lot of (classical) uncertainty or a lot of time for a system to become unpredictable at scale, but for me that pushes the question down to annoying concrete follow-ups like:

  • My brain and arm muscles have thermal noise, but they must be somewhat resilient to noise, so how long does it take for noise at one scale (e.g. ATP in a given neuron) to be observable at another scale (e.g. which word I say, what thought I have, how my arm muscle moves)? (A toy illustration follows this list.)
  • More generally, how effective are "noise control" mechanisms like reducing degrees of freedom? E.g. while I can imagine there's enough chaos around a coin flip for quantum noise to affect thermal noise to affect macro outcomes, it's not as obvious to me that that's true for a spinner in a board game where the main (only?) relevant macro parameter affected by me is angular momentum of the spinner.
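
As a toy illustration of the first bullet (a sketch only: the logistic map stands in for "some chaotic macro-dynamics", not for anything biological), a perturbation at the 1e-15 scale becomes an order-one difference within a few dozen iterations, so the time-to-observability is roughly the system's Lyapunov time multiplied by the number of orders of magnitude between the noise scale and the macro scale:

```python
# Toy sketch: exponential amplification of a tiny perturbation in a chaotic map.
# The logistic map stands in for "some chaotic dynamics"; it is not a model of
# neurons, muscles, or coins -- just an illustration of the scaling.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-15  # identical states except for a noise-scale difference
for step in range(1, 101):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:  # the difference is now visible at the macro scale
        print(f"a 1e-15 perturbation became macroscopic after {step} steps")
        break
```
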
eapi

For me the only "obvious" takeaway from this re. quantum immortality is that you should be more willing to play quantum Russian roulette than classical Russian roulette. Beyond that, the topic seems like something where you could get insights by just Sitting Down and Doing The Math, but I'm not good enough at math to do the math.

eapi

...wait, you were just asking for an example of an agent being "incoherent but not dominated" in those two senses of being money-pumped? And this is an exercise meant to hint that such "incoherent" agents are always dominatable?

I continue to not see the problem, because the obvious examples don't work. If I have A as incomparable to B, that doesn't mean I turn down the trade of A → A + $1 (which I assume is what you're hinting at re. foregoing free money).

If one then says "ah but if I offer $9999 and you turn that down, then we have identified your secret equivalent utili-" no, this is just a bid/ask spread, and I'm pretty sure plenty of ink has been spilled justifying EUM agents using uncertainty to price inaction like this.

What's an example of a non-EUM agent turning down free money which doesn't just reduce to comparing against an EUM with reckless preferences/a low price of uncertainty?
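
To make that concrete, here's a minimal sketch of the kind of agent I have in mind (the bundles and numbers are made up for illustration): preferences are only a partial order, any strictly dominant trade is accepted, and refusing both directions of an incomparable swap is exactly the bid/ask-style behavior above rather than turned-down free money.

```python
# Sketch of an agent with incomplete (partially ordered) preferences over bundles.
# A bundle is (widgets, dollars); the agent trades only "up", and incomparable
# offers are simply refused -- a bid/ask spread, not foregone free money.

def strictly_prefers(a, b):
    """a is strictly preferred to b iff a is at least as good in every coordinate and better in one."""
    return all(x >= y for x, y in zip(a, b)) and a != b

def accepts_trade(current, offered):
    # Trade only when the offered bundle is strictly preferred to the current one.
    return strictly_prefers(offered, current)

have = (1, 0)  # one widget, no dollars
print(accepts_trade(have, (1, 1)))      # True  -- the widget plus a free dollar is taken
print(accepts_trade(have, (0, 9999)))   # False -- incomparable, refused: the "bid/ask" case
print(accepts_trade(have, (0, 10001)))  # False -- also incomparable; no secret equivalent utility revealed
```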

eapi

Hmm, I was going to reply with something like "money-pumps don't just say something about adversarial environments, they also say something about avoiding leaking resources" (e.g. if you have circular preferences between proximity to apples, bananas, and carrots, then if you encounter all three of them in a single room you might get trapped walking between them forever), but that's also begging your original question: we can always just update to enjoy leaking resources, transmuting a "leak" into an "expenditure".
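
For concreteness, a toy version of that apples/bananas/carrots trap (purely illustrative numbers): cyclic preferences plus a small cost per move mean the agent keeps walking the cycle until the resource is gone.

```python
# Toy sketch: cyclic preferences leak a resource.
# Preference cycle: apples -> bananas -> carrots -> apples, each step "better"
# than the last, with a small energy cost per move. Nothing ever says stop.

prefers_next = {"apples": "bananas", "bananas": "carrots", "carrots": "apples"}

location, energy = "apples", 10.0
while energy > 0:
    location = prefers_next[location]  # there is always a strictly "better" room to walk to
    energy -= 1.0                      # the cost of walking there
print(f"ended at {location} with {energy} energy left")  # the energy has leaked away
```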

Another frame here is that if you make/encounter an agent, and that agent self-modifies into/starts off as something which is happy to leak pretty fundamental resources like time and energy and material-under-control, then you're not as worried about it? It's certainly not competing as strongly for the same resources as you whenever it's "under the influence" of its circular preferences.

eapi

(I'm not EJT, but for what it's worth:)

I find the money-pumping arguments compelling not as normative arguments about what preferences are "allowed", but as engineering/security/survival arguments about what properties of preferences are necessary for them to be stable against an adversarial environment (which is distinct from what properties are sufficient for them to be stable, and possibly distinct from questions of self-modification).

eapi

The rock doesn't seem like a useful example here. The rock is "incoherent and not dominated" if you view it as having no preferences and hence never acting out of indifference; it's "coherent and not dominated" if you view it as having a constant utility function and hence never acting out of indifference. OK, I guess the rock is just a fancy Rorschach test.

IIUC a prototypical Slightly Complicated utility-maximizing agent is one with, say, a simple explicit utility function over outcomes, and a prototypical Slightly Complicated not-obviously-pumpable non-utility-maximizing agent is one with, say, the partial order "A+ strictly preferred to A, with B incomparable to both" plus the path-dependent rule that EJT talks about in the post (Ah yes, non-pumpable non-EU agents might have higher complexity! Is that relevant to the point you're making?).
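
To make the comparison concrete, here's a rough sketch of the second agent; the labels and the rule are my paraphrase of the A/A+/B example and the path-dependent rule, not EJT's exact formulation:

```python
# Rough sketch of the non-EU agent: a partial order in which "A+" is strictly
# preferred to "A" and "B" is incomparable to both, plus the path-dependent rule
# "never choose an option strictly dispreferred to something you previously turned down".

STRICT = {("A+", "A")}  # (better, worse) pairs; every other pair is incomparable

def strictly_prefers(x, y):
    return (x, y) in STRICT

class PathDependentAgent:
    def __init__(self):
        self.turned_down = set()

    def choose(self, options):
        # Rule out anything strictly worse than a previously rejected option.
        admissible = [o for o in options
                      if not any(strictly_prefers(r, o) for r in self.turned_down)]
        # Among what's left, drop anything strictly dominated within this menu.
        maximal = [o for o in admissible
                   if not any(strictly_prefers(p, o) for p in admissible)]
        pick = maximal[0]  # remaining options are incomparable; just take the first
        self.turned_down.update(o for o in options if o != pick)
        return pick

agent = PathDependentAgent()
print(agent.choose(["B", "A+"]))  # picks "B"; "A+" is remembered as turned down
print(agent.choose(["A", "B"]))   # "A" is blocked (strictly worse than the rejected "A+"), so "B" again
```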

What's the competitive advantage of the EU agent? If I put them both in a sandbox universe and crank up their intelligence, how does the EU agent eat the non-EU agent? How confident are you that that is what must occur?
