ata comments on Poll: What value extra copies? - Less Wrong

5 [deleted] 22 June 2010 12:15PM


Comment author: ata 23 June 2010 02:28:00AM *  2 points [-]

I'm still tentatively convinced that existence is what mathematical possibility feels like from the inside, and that creating an identical non-interacting copy of oneself is (morally and metaphysically) identical to doing nothing. Considering that, plus the difficulty* of estimating which of a potentially infinite number of worlds we're in, including many in which the structure of your brain is instantiated but everything you observe is hallucinated or "scripted" (similar to Boltzmann brains), I'm beginning to worry that a fully fact-based consequentialism would degenerate into emotivism, or at least that it must incorporate a significant emotivist component in determining who and what is terminally valued.

* E. T. Jaynes says we can't do inference in infinite sets except those that are defined as well-behaved limits of finite sets, but if we're living in an infinite set, then there has to be some right answer, and some best method of approximating it. I have no idea what that method is.

So. My moral intuition says that creating an identical non-interacting copy of me, with no need for or possibility of it serving as a backup, is valued at 0. As for consequentialism... if this were valued even slightly, I'd get one of those quantum random number generator dongles, have it generate my desktop wallpaper every few seconds (thereby constantly creating zillions of new slightly-different versions of my brain in their own Everett branches), and start raking in utilons. Considering that this seems not just emotionally neutral but useless to me, my consequentialism seems to agree with my emotivist intuition.

Comment deleted 23 June 2010 10:03:24AM *  [-]
Comment author: ata 23 June 2010 09:52:17PM *  0 points [-]

Though to be honest, I am having trouble seeing what the difference is between this statement being true and being false.

My argument for that is essentially structured as a dissolution of "existence", an answer to the question "Why do I think I exist?" instead of "Why do I exist?". Whatever facts are related to one's feeling of existence — all the neurological processes that lead to one's lips moving and saying "I think therefore I am", and the physical processes underlying all of that — would still be true as subjunctive facts about a hypothetical mathematical structure. A brain doesn't have some special existence-detector that goes off if it's in the "real" universe; rather, everything that causes us to think we exist would be just as true about a subjunctive.

This seems like a genuinely satisfying dissolution to me — "Why does anything exist?" honestly doesn't feel intractably mysterious to me anymore — but even ignoring that argument and starting only with Occam's Razor, the Level IV Multiverse is much more probable than this particular universe. Even so, specific rational evidence for it would be nice; I'm still working on figuring out what would qualify as such.

There may be some. First, it would anthropically explain why this universe's laws and constants appear to be well-suited to complex structures including observers. There doesn't have to be any The Universe that happens to be fine-tuned for us; instead, tautologically, we only find ourselves existing in universes in which we can exist. Similarly, according to Tegmark, physical geometries with three non-compactified spatial dimensions and one time dimension are uniquely well-suited to observers, so we find ourselves in a structure with those qualities.

Anyway, yeah, I think there are some good reasons to believe (or at least investigate) it, plus some things that still confuse me (which I've mentioned elsewhere in this thread and in the last section of my post about it), including the aforementioned "infinite ethics problem of awesome magnitude".

Comment deleted 24 June 2010 11:35:58AM [-]
Comment author: Vladimir_Nesov 24 June 2010 11:46:44AM *  0 points [-]

Measure doesn't help if each action has all possible consequences: you'd just end up with the consequences of all actions having the same measure! Measure helps with managing (reasoning about) infinite collections of consequences, but there still must be non-trivial and "mathematically crisp" dependence between actions and consequences.

Comment deleted 24 June 2010 12:01:16PM *  [-]
Comment author: Vladimir_Nesov 24 June 2010 12:18:08PM *  0 points [-]

There is also a set of world-histories satisfying (drop ball) which is distinct from the set of world-histories satisfying NOT(drop ball). Of course, by throwing this piece of world model out the window, and only allowing yourself to compensate for its absence with measures, you do make measures indispensable. The problem with what you were saying is in the connotation of measure somehow being the magical world-modeling juice, which it's not. (That is, I don't necessarily disagree, but don't want this particular solution of using measure to be seen as directly answering the question of predictability, since it can be understood as a curiosity-stopping mysterious answer by someone insufficiently careful.)

Comment deleted 24 June 2010 01:03:14PM *  [-]
Comment author: Vladimir_Nesov 24 June 2010 01:39:39PM *  0 points [-]

I don't see what the problem is with using measures over world histories as a solution to the problem of predictability.

It's not a generally valid solution (there are solutions that don't use measures), though it's a great solution for most purposes. It's just that using measures is not a necessary condition for consequentialist decision-making, and I found that thinking in terms of measures is misleading for the purposes of understanding the nature of control.

You said:

Without a measure, you become incapable of making any decisions, because the past ceases to be predictive of the future

Comment deleted 23 June 2010 09:46:02AM *  [-]
Comment author: ata 23 June 2010 09:08:42PM *  1 point [-]

Reason alone is simply insufficient to determine what your values are (though it weeds out inconsistencies and thus narrows the set of possible contenders).

I was already well aware of that, but spending a lot of time thinking about Very Big Worlds (e.g. Tegmark's multiverses, even if no more than one of them is real) made even my already admittedly axiomatic consequentialism start seeming inconsistent (and, worse, inconsequential) — that if every possible observer is having every possible experience, and any causal influence I exert on other beings is canceled out by other copies of them having opposite experiences, then it would seem that the only thing I can really do is optimize my own experiences for my own sake.

I'm not yet confident enough in any of this to say that I've "taken the red pill", but since, to be honest, that originally felt like something I really really didn't want to believe, I've been trying pretty hard to leave a line of retreat about it, and the result was basically this. Even if I were convinced that every possible experience were being experienced, I would still care about people within my sphere of causal influence — my current self is not part of most realities and cannot affect them, but it may as well have a positive effect on the realities it is part of. And if I'm to continue acting like a consequentialist, then I will have to value beings that already exist, but not intrinsically value the creation of new beings, and not act like utility is a single universally-distributed quantity, in order to avoid certain absurd results. Pretty much how I already felt.

And even if I'm really only doing this because it feels good to me... well, then I'd still do it.

Comment deleted 23 June 2010 10:29:08PM [-]
Comment author: ata 23 June 2010 10:34:13PM 0 points [-]

One concrete problem is that we might be able to acausally influence other parts of the multiverse.

Could you elaborate on that?

Comment deleted 23 June 2010 10:38:15PM [-]
Comment author: AlephNeil 24 June 2010 10:42:39AM 0 points [-]

We might, for example, influence other causally disconnected places by threatening them with punishment simulations. Or they us.

How? And how would we know if our threats were effective?

Comment deleted 24 June 2010 11:07:12AM [-]
Comment author: AlephNeil 24 June 2010 11:30:30AM 0 points [-]

Ah, I see.

Having a 'limited sphere of consequence' is actually one of the core ideas of deontology (though of course they don't put it quite like that).

Speaking for myself, although it does seem like an ugly hack, I can't see any other way of escaping the paranoia of "Pascal's Mugging".

Comment deleted 24 June 2010 11:47:33PM [-]
Comment author: ata 23 June 2010 10:52:19PM *  0 points [-]

Still not sure how that makes sense. The only thing I can think of that could work is us simulating another reality and having someone in that reality happen to say "Hey, whoever's simulating this reality, you'd better do x or we'll simulate your reality and torture all of you!", followed by us believing them, not realizing that it doesn't work that way. If the Level IV Multiverse hypothesis is correct, then the elements of this multiverse are unsupervised universes; there's no way for people in different realities to threaten each other if they mutually understand that. If you're simulating a universe, and you set up the software such that you can make changes in it, then every time you make a change, you're just switching to simulating a different structure. You can push the "torture" button, and you'll see your simulated people getting tortured, but that version of the reality would have existed (in the same subjunctive way as all the others) anyway, and the original non-torture reality also goes on subjunctively existing.

Comment author: Vladimir_Nesov 24 June 2010 10:05:41AM 2 points [-]

You don't grok UDT control. You can control the behavior of fixed programs, programs that completely determine their own behavior.

Take a "universal log program", for example: it enumerates all programs, for each program enumerates all computational steps, on all inputs, and writes all that down on an output tape. This program is very simple; you can easily give a formal specification for it. It doesn't take any inputs, it just computes the output tape. And yet, the output of this program is controlled by what the mathematician ate for breakfast, because the structure of that decision is described by one of the programs logged by the universal log program.
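The dovetailing construction behind such a "universal log program" can be sketched in a few lines. This is only an illustrative toy, not anything from the post: the generator-based "programs" here (the i-th one just counts in multiples of i) are a made-up stand-in for a real enumeration of all programs, and `universal_log` is a hypothetical name.

```python
from itertools import count

def program(i):
    """Toy stand-in for the i-th program in the enumeration:
    a generator whose successive yields are its computation steps.
    (Here 'program' i merely counts in multiples of i.)"""
    def gen():
        state = 0
        while True:
            state += i
            yield state
    return gen()

def universal_log(n_entries):
    """Dovetail over all programs: at stage n, start program n+1
    and advance every program started so far by one step, writing
    (program, step, state) records to the 'output tape'."""
    tape = []
    running = []   # generators for the programs started so far
    steps = []     # how many steps each running program has taken
    for n in count():
        running.append(program(n + 1))
        steps.append(0)
        for i, g in enumerate(running):
            steps[i] += 1
            tape.append((i + 1, steps[i], next(g)))
            if len(tape) >= n_entries:
                return tape

tape = universal_log(6)
# Program 1's steps get interleaved with the first steps of
# programs 2 and 3; every program eventually runs arbitrarily long,
# yet the loop itself is simple and takes no input.
```

The point of the sketch is that the tape-writing loop is fixed and input-free, while what appears on the tape is nevertheless "controlled by" whatever the enumerated programs compute.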

Take another look at the UDT post, keeping in mind that the world-programs completely determine what the world is, they don't take the agent as parameter, and world-histories are alternative behaviors for those fixed programs.

Comment author: AlephNeil 24 June 2010 11:05:00AM 1 point [-]

OK, so you're saying that A, a human in 'the real world', acausally (or ambiently if you prefer) controls part of the output tape of this program P that simulates all other programs.

I think I understand what you mean by this: Even though the real world and this program P are causally disconnected, the 'output log' of each depends on the 'Platonic' result of a common computation - in this case the computation where A's brain selects a choice of breakfast. Or in other words, some of the uncertainty we have about both the real world and P derives from the logical uncertainty about the result of that 'Platonic' computation.

Now if you identify "yourself" with the abstract computation then you can say that "you" are controlling both the world and P. But then aren't you an 'inhabitant' of P just as much as you're an inhabitant of the world? On the other hand, if you specifically identify "yourself" with a particular chunk of "the real world" then it seems a bit misleading to say that "you" ambiently control P, given that "you" are yourself ambiently controlled by the abstract computation which is controlling P.

Perhaps this is only a 'semantic quibble' but in any case I can't see how ambient control gets us any nearer to being able to say that we can threaten 'parallel worlds' causally disjoint from "the real world", or receive responses or threats in return.

Comment author: Vladimir_Nesov 24 June 2010 11:27:32AM *  0 points [-]

Now if you identify "yourself" with the abstract computation then you can say that "you" are controlling both the world and P. But then aren't you an 'inhabitant' of P just as much as you're an inhabitant of the world?

Sure, you can read it this way, but keep in mind that P is very simple, doesn't have you as an explicit "part", and you'd need to work hard to find the way in which you control its output (find a dependence). This dependence doesn't have to be found in order to compute P; it is something external, the way you interpret P.

I agree (maybe, in the opposite direction) that causal control can be seen as an instance of the same principle, and so the sense in which you control "your own" world is no different from the sense in which you control the causally unconnected worlds. The difference is syntactic: representation of "your own world" specifies you as part explicitly, while to "find yourself" in a "causally unconnected world", you need to do a fair bit of inference.

Note that since the program P is so simple, the results of abstract analysis of its behavior can be used to make decisions, by anyone. These decisions will be controlled by whoever wants them controlled, and logical uncertainty often won't allow one to rule out the possibility that a given program X controls a conclusion Y made about the universal log program P. This is one way to establish mutual dependence between most "causally unconnected" worlds: have them analyze P.

When a world program isn't presented as explicitly depending on an agent (as in causal control), you can have logical uncertainty about whether a given agent controls a given world, which makes it necessary to consider the possibility of more agents potentially controlling more worlds.