Comment author: shminux 14 February 2014 10:49:15PM *  3 points [-]

Imagine this: Once you finish reading this article, you hear a bell ringing, and then a sonorous voice announces: "You do indeed live in a Tegmark IV multiverse without a measure. You had better deal with it." And then it turns out that it's not just you who's heard that voice: Every single human being on the planet (who didn't sleep through it, isn't deaf etc.) has heard those same words.

Suppose instead the same voice says "You do not live in a Tegmark IV multiverse". Wouldn't you still conclude that you do, anyway? It is still a "disorderly experience", isn't it?

Comment author: Benja 14 February 2014 10:56:20PM 1 point [-]

Yup, sure.

Comment author: FeepingCreature 14 February 2014 10:04:35PM 1 point [-]

So you don't know that you live in a simple world. But, goes the obvious reply, you care much more about what happens if you do happen to live in the simple world.

You kind of seem to jump around there. Our world looks simple; that's why we're worrying so much about why our world looks so simple in the first place! Sure, our world might not actually be simple, but we simply have no sufficient reason to distrust the copious simplicity our scientific inquiry seems to yield.

If I live in a simple world, I want to believe I live in a simple world. If I live in a complex, interventionistic world I want to believe I live in a complex, interventionistic world. The way to find out what sort of world I live in is to look at the world. It looks simple.

Comment author: Benja 14 February 2014 10:24:16PM 1 point [-]

To summarize that part of the post: (1) The view I'm discussing there argues that the reason we find ourselves in a simple-looking world is that all possible experiences are consciously experienced, including the ones where the world looks simple, and we just happen to experience the latter. (2) If this is correct, then you cannot use the fact that you look around and see a simple-looking world to infer that you live in a world that actually is simple, because there are plenty of complex interventionistic worlds that look deceptively simple. In fact, the prior probability that the particular world you see is actually simple is extremely low. (3) However, if you value the things that happen in actually simple worlds more than the things that happen in complex worlds, then it's still correct to act as if your simple-looking world is in fact simple, despite the fact that prior probability says this is possibly wrong (or to put this differently, even though most of the equally-existing mathematically possible humans reasoning like this will be wrong).

Comment author: Coscott 14 February 2014 09:37:16PM *  0 points [-]

I feel like my first reaction was like yours, that I do not care about simplicity THAT much, but reflecting on it has made me think that maybe I really do care about simplicity that much.

First, let's remodel the problem. There is a collection of universes, each one an infinite string of bits which encodes a Turing machine. Let's say the actual states of the TM are encoded on some finite subset of the infinite string, and the rest of the string is random bits that the TM can read if it chooses to.

The first obstacle to our intuition is that caring based on K-complexity is the same as caring about all of these different TMs equally, so what seems unfair in one model seems very fair in another model. This might be enough to convince you to care THAT much, but I imagine you have the following rebuttal:
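That equivalence can be sketched in a few lines. The prefix-free "programs" below are a made-up toy, not an actual UTM encoding: the point is just that weighting each program p by 2^(-|p|), as a K-complexity-style prior does, assigns p exactly the uniform measure of the set of infinite tapes that begin with p.

```python
from fractions import Fraction

# Toy illustration (hypothetical prefix-free code, not a real UTM):
# under the uniform measure on infinite bit strings, the set of tapes
# that begin with program p has measure 2^(-len(p)) -- exactly the
# 2^(-|p|) weight a K-complexity-style prior assigns to p.
programs = ["0", "10", "110", "111"]  # prefix-free; every tape starts with exactly one

def weight(p):
    """Simplicity-prior weight of program p: 2^(-|p|)."""
    return Fraction(1, 2 ** len(p))

weights = {p: weight(p) for p in programs}
total = sum(weights.values())

print(weights)  # weights 1/2, 1/4, 1/8, 1/8
print(total)    # 1 -- a covering prefix-free code exhausts the uniform measure
```

So caring about every infinite bit string equally and caring about programs in proportion to 2^(-|p|) are two descriptions of the same weighting, which is the sense in which "unfair in one model seems very fair in another model."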

Many of these simple TMs never even read the infinite string of random bits at the end. They are all exactly the same. I have this vague feeling of diminishing returns. A million of the same good thing and a million different bad things does not feel as good as a million of the same bad thing and a million different good things.

I feel this intuition myself, but maybe this is just a fallacy of projecting intuitions about diminishing returns from within one universe to questions about multiple universes that do not communicate with each other.

Comment author: Benja 14 February 2014 09:57:35PM 0 points [-]

I don't feel like considering these different ways to approach K-complexity addresses the point I was trying to make. The rebuttal seems to be arguing that we should weigh the TMs that don't read the end of the tape equally, rather than weighing TMs more that read less of the tape. But my point isn't that I don't want to weigh complex TMs as much as simple TMs; it is (1) that I seem to be willing to consider TMs with one obviously disorderly event "pretty simple", even though I think they have high K-complexity; and (2) given this, the utility I lose by disregarding the possibility of magical reality fluid only in worlds where I've seen a single obviously disorderly event doesn't seem all that large if measureless Tegmark IV is true, compared to the utility I may lose if there actually is magical reality fluid or something like that, and I ignore this possibility and, because of this, act in a way that is very bad.

(If there aren't any important ways in which I'd act differently if measureless Tegmark IV is false, then this argument has no pull, but I think there may be; for example, if the ultrafinitist hypothesis from the end of my post were correct, that might make a difference to FAI theory.)

Comment author: Coscott 14 February 2014 08:50:58PM *  1 point [-]

I might care more about the complex universes than I would if a single universe existed, randomly selected from a probability measure. However, I attribute this to risk aversion. I think that in my model I care in a way similar to how I would care with "reality fluid," but differently from how I would care if there were one universe randomly selected using the "reality fluid" as a probability measure.

Comment author: Benja 14 February 2014 09:24:21PM 0 points [-]

So, I can see that you would care similarly to the way you would in a multiverse with magical reality fluid that's distributed in the same proportions as your measure of caring, and if your measure of caring is K-complexity with respect to a universal Turing machine (UTM) we would consider simple, it's at least one plausible possibility that the true magical reality fluid is distributed in roughly those proportions. But given the state of our confusion, I think that conditional on there being a true measure, any single hypothesis as to how that measure is distributed should have significantly less than 50% probability, so "Conditional on there being a true measure, I would act the same way as according to my K-complexity based preferences" sounds wrong to me. (One particularly salient other possibility is that we could have magical reality fluid due to Tegmark I -- infinite space -- and Tegmark III -- many-worlds -- but not due to all mathematically possible universes existing, in which case we surely wouldn't get weightings that are close to K-complexity with a simple UTM. I mean, this is a case of one single universe, but with all possible experiences existing, to different degrees.)

Comment author: ThisSpaceAvailable 08 February 2014 07:39:23AM 3 points [-]

But you see Eliezer's comments because a conscious copy of Eliezer has been run. If I'm figuring out what output a program "would" give "if" it were run, in what sense am I not running it? Suppose I have a program MaybeZombie, and I run a Turing Test with it as the Testee and you as the Tester. Every time you send a question to MaybeZombie, I figure out what MaybeZombie would say if it were run, and send that response back to you. Can I get MaybeZombie to pass a Turing Test, without ever running it?

Comment author: Benja 10 February 2014 11:05:04PM 2 points [-]

But you see Eliezer's comments because a conscious copy of Eliezer has been run.

A conscious copy of Eliezer that thought about what Eliezer would do when faced with that situation, not a conscious copy of Eliezer actually faced with that situation -- the latter Eliezer is still an l-zombie, if we live in a world with l-zombies.

Comment author: trist 07 February 2014 09:19:32PM *  10 points [-]

Are cryopreserved humans l-zombies?

keeping in mind that if they were an l-zombie, they would still say "I have conscious experiences, so clearly I can't be an l-zombie"?

As well they should. For l-zombies to do anything they need to be run, whereupon they stop being l-zombies.

Comment author: Benja 07 February 2014 10:30:15PM 2 points [-]

For l-zombies to do anything they need to be run, whereupon they stop being l-zombies.

Omega doesn't necessarily need to run a conscious copy of Eliezer to be pretty sure that Eliezer would pay up in the counterfactual mugging; it could use other information about Eliezer, like Eliezer's comments on LW, the way that I just did. It should be possible to achieve pretty high confidence that way about what Eliezer-being-asked-about-a-counterfactual-mugging would do, even if that version of Eliezer should happen to be an l-zombie.

Comment author: ESRogs 07 February 2014 09:54:49PM 4 points [-]

Actually, there probably aren't any p-zombies

Should that (from the first line of the third paragraph) be l-zombies?

Comment author: Benja 07 February 2014 10:24:27PM 1 point [-]

Fixed, thanks!

Comment author: CronoDAS 07 February 2014 08:27:11PM 2 points [-]

So, an L-zombie is a person that could exist, but doesn't?

Comment author: Benja 07 February 2014 09:06:14PM 1 point [-]

(Agree with Coscott's comment.)

Comment author: TruePath 30 January 2014 04:44:46AM 0 points [-]

I meant useful in the context of AI, since any such sequence would obviously have to be non-computable and thus not something the AI (or person) could make pragmatic use of.

Also, it is far from clear that T_0 is the union of all theories (and this is the problem in the proof in the other writeup). It may well be that there is a sequence of theories like this, all true in the standard model of arithmetic, but that their construction requires that T_n add extra statements beyond the schema for the proof predicate in T_{n+1}.

Also, the claim that T_n must be stronger than T_{n+1} (prove a superset of it; to be computable, we can't take all these theories to be complete) is far from obvious if you don't require that T_n be true in the standard model. If T_n is true in the standard model, then, as it proves that Pf(T_{n+1}, \phi) -> \phi, this is true; so if T_{n+1} |- \phi then (as this is witnessed by a finite proof) there is a proof from T_n that this holds, and thus a proof of \phi. However, without this assumption I don't even see how to prove the containment claim.

Comment author: Benja 30 January 2014 12:13:12PM *  0 points [-]

I meant useful in the context of AI since any such sequence would obviously have to be non-computable and thus not something the AI (or person) could make pragmatic use of.

I was replying to this:

Ultimately, you can always collapse any computable sequence of computable theories (necessary for the AI to even manipulate) into a single computable theory so there was never any hope this kind of sequence could be useful.

I.e., I was talking about computable sequences of computable theories, not about non-computable ones.

Also, it is far from clear that T_0 is the union of all theories (and this is the problem in the proof in the other writeup). It may well be that there is a sequence of theories like this, all true in the standard model of arithmetic, but that their construction requires that T_n add extra statements beyond the schema for the proof predicate in T_{n+1}.

I can't make sense of this. Of course T_n can contain statements other than those in T_{n+1} and the Löb schema of T_{n+1}, but this is no problem for the proof that T_0 is the union of all the theories; the point is that because of the Löb schema, we have T_{n+1} \subset T_n for all n, and therefore (by transitivity of the subset operation) T_n \subseteq T_0 for all n.

Also, the claim that T_n must be stronger than T_{n+1} (prove a superset of it; to be computable, we can't take all these theories to be complete) is far from obvious if you don't require that T_n be true in the standard model. If T_n is true in the standard model, then, as it proves that Pf(T_{n+1}, \phi) -> \phi, this is true; so if T_{n+1} |- \phi then (as this is witnessed by a finite proof) there is a proof from T_n that this holds, and thus a proof of \phi. However, without this assumption I don't even see how to prove the containment claim.

Note again that I was talking about computable sequences T_n. If T_{n+1} |- \phi and T_{n+1} is computable, then PA |- Pf(T_{n+1}, \phi) and therefore T_n |- Pf(T_{n+1}, \phi) if T_n extends PA. This doesn't require either T_n or T_{n+1} to be sound.
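For concreteness, the containment argument just sketched can be written as a three-step derivation (assuming each T_n extends PA and contains the trust schema Pf(T_{n+1}, \phi) -> \phi):

```latex
\begin{align*}
T_{n+1} \vdash \phi
  &\;\Longrightarrow\; \mathrm{PA} \vdash \mathrm{Pf}(T_{n+1}, \ulcorner \phi \urcorner)
    && \text{(a proof is a finite object, and $T_{n+1}$ is computable)} \\
  &\;\Longrightarrow\; T_n \vdash \mathrm{Pf}(T_{n+1}, \ulcorner \phi \urcorner)
    && \text{($T_n$ extends PA)} \\
  &\;\Longrightarrow\; T_n \vdash \phi
    && \text{(trust schema in $T_n$)}
\end{align*}
```

So T_{n+1} \subseteq T_n, and iterating down the chain, T_n \subseteq T_0 for every n, without assuming soundness of any T_n.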

Comment author: TruePath 29 January 2014 03:56:05PM 2 points [-]

Actually, the `proof' you gave that no true list of theories like this exists made the assumption (not listed in this paper) that the sequence of indexes for the computable theories is definable over arithmetic. In general there is no reason this must be true, but of course for the purposes of an AI it must be.

Ultimately, you can always collapse any computable sequence of computable theories (necessary for the AI to even manipulate) into a single computable theory so there was never any hope this kind of sequence could be useful.

Comment author: Benja 29 January 2014 04:58:42PM *  0 points [-]

Actually, the `proof' you gave that no true list of theories like this exists made the assumption (not listed in this paper) that the sequence of indexes for the computable theories is definable over arithmetic. In general there is no reason this must be true, but of course for the purposes of an AI it must be.

("This paper" being Eliezer's writeup of the procrastination paradox.) That's true, thanks.

Ultimately, you can always collapse any computable sequence of computable theories (necessary for the AI to even manipulate) into a single computable theory so there was never any hope this kind of sequence could be useful.

First of all (always assuming the theories are at least as strong as PA), note that in any such sequence, T_0 is the union of all the theories in the sequence; if T_{n+1} |- \phi, then PA |- Pf(T_{n+1}, "\phi"), so T_n |- Pf(T_{n+1}, "\phi"), so by the trust schema, T_n |- \phi; going up the chain like this, T_0 |- \phi. So T_0 is in fact the "collapse" of the sequence into a single theory.

That said, I disagree that there is no hope that this kind of sequence could be useful. (I don't literally want to use an unsound theory, but see my writeup about an infinite sequence of sound theories each proving the next consistent, linked from the main post; the same remarks apply there.) Yes, T_0 is stronger than T_1, so why would you ever want to use T_1? Well, T_0 + Con(T_0) is stronger than T_0, so why would you ever want to use T_0? But by this argument, you can't use any sound theory including PA, so this doesn't seem like a remotely reasonable argument against using T_1. Moreover, the fact that an agent using T_0 can construct an agent using T_1, but it can't construct an agent using T_0, seems like a sufficient argument against the claim that the sequence as a whole must be useless because you could always use T_0 for everything.
