NancyLebovitz comments on Open Thread June 2010, Part 2 - Less Wrong

7 Post author: komponisto 07 June 2010 08:37AM


Comment author: NancyLebovitz 07 June 2010 12:44:09PM 8 points [-]
Comment author: Houshalter 10 June 2010 02:00:52AM *  6 points [-]

I just realised that infinite processing power creates a weird moral dilemma:

Suppose you take this machine and put in a program which simulates every possible program it could ever run. Of course, it only takes a second to run the whole thing. In that second, you have created every possible world that could ever exist, every possible version of yourself. This includes versions that are being tortured, abused, and put through horrible unethical situations. You have created an infinite number of holocausts and genocides, and things much, much worse than anything you could ever imagine. Most people would consider a program like this unethical to run. But what if the computer wasn't really a computer, but an infinitely large database that contained every possible input and a corresponding output? When you put the program in, it just finds the right output and gives it to you, which is essentially just reading off a copy of the database itself. Since there isn't actually any computational process here, nothing unethical is being simulated. It's no more evil than a book in the library about genocide. And this does apply to the real world. It's essentially the Chinese room problem: does a simulated brain "understand" anything? Does it have "rights"? Does how the information was processed make a difference? I would like to know what people at LW think about this.
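The computer-versus-database contrast can be sketched in a few lines. This is only an illustration of the idea (the function and table below are invented, and a real GLUT would be astronomically large): two systems with identical input/output behaviour, one of which computes and one of which merely looks up.

```python
# A process that actually performs computational steps.
def compute(x):
    return x * x

# The "infinitely large database": every input paired with its output,
# precomputed here over a small finite domain for illustration.
giant_table = {x: compute(x) for x in range(1000)}

# No computation "happens" here beyond a single table read.
def lookup(x):
    return giant_table[x]

# Behaviourally identical on the shared domain:
assert all(compute(x) == lookup(x) for x in range(1000))
```

The dilemma in the comment is whether anything morally relevant distinguishes calling `compute` from calling `lookup`, given that an outside observer cannot tell them apart.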

Comment author: Nick_Tarleton 10 June 2010 07:17:05AM 4 points [-]
Comment author: toto 10 June 2010 03:18:31PM *  1 point [-]

I have problems with the "Giant look-up table" post.

"The problem isn't the levers," replies the functionalist, "the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling... Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it's possible to program a conscious being in Haskell."

If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human's behaviour is dependent not just on the present state of the environment, but also on previous states. I don't see how you can successfully emulate a human without that. So the GLUT's entries would be in the form of products of input states over all previous time instants. To each of these possible combinations, the GLUT would assign a given action.

Note that "creation of beliefs" (including about beliefs) is just a special case of memory. It's all about input/state at time t1 influencing (restricting) the set of entries in the table that can be looked up at time t2>t1. If a GLUT doesn't have this ability, it can't emulate a human. If it does, then it can meet all the requirements spelt out by Eliezer in the above passage.

So I don't see how the non-consciousness of the GLUT is established by this argument.

But in this case, the origin of the GLUT matters; and that's why it's important to understand the motivating question, "Where did the improbability come from?"

The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (...) In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.

But the difficulty is precisely to explain why the GLUT would be different from just about any possible human-created AI in this respect. Keeping in mind the above, of course.

Comment author: Houshalter 10 June 2010 04:56:28PM 0 points [-]

If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human's behaviour is dependent not just on the present state of the environment, but also on previous states. I don't see how you can successfully emulate a human without that. So the GLUT's entries would be in the form of products of input states over all previous time instants. To each of these possible combinations, the GLUT would assign a given action.

Memory is input too. The "GLUT" is just fed everything it's seen so far back in as input, along with the current state of its external environment. A copy is made and added to the rest of the memory, and the next cycle it's fed in again along with the next new state.
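That feedback scheme can be sketched directly: the table is keyed on the entire history of inputs seen so far, so "memory" is nothing more than extra input. The table entries below are invented for illustration.

```python
# A tiny history-keyed lookup table (real GLUTs would be unimaginably large).
table = {
    ("hello",): "hi there",
    ("hello", "how are you?"): "fine, thanks",
}

def glut_step(history, new_input):
    """Append the new input to memory, then look the whole history up."""
    history = history + (new_input,)
    return history, table.get(history, "(no entry)")

history = ()
history, reply1 = glut_step(history, "hello")         # reply1: "hi there"
history, reply2 = glut_step(history, "how are you?")  # reply2: "fine, thanks"
```

Note that the lookup at time t2 is restricted by everything that happened at t1 < t2, which is exactly the "input/state at time t1 influencing the set of entries that can be looked up at time t2" property described above.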

This is basically just the Chinese room argument. There is a sealed room; every so often, someone slips a few Chinese symbols underneath the door. The symbols are given to a computer with artificial intelligence, which then composes an appropriate response and slips it back through the door. Does the computer actually understand Chinese? Well, what if a human did exactly the same process the computer did, manually? The operator, however, only speaks English. No matter how long he does it, he will never truly understand Chinese, even if he memorizes the entire process and does it in his head. So how could the computer "understand"?

Comment author: JoshuaZ 07 June 2010 01:41:25PM *  6 points [-]

That's well done, although two of the central premises are likely incorrect. First, the notion that a quantum computer would have infinite processing capability is incorrect. Quantum computation allows speed-ups of certain computational processes. For example, Shor's algorithm allows us to factor integers quickly. But if our understanding of the laws of quantum mechanics is at all correct, this can't lead to anything like what happens in the story. In particular, under the standard model of quantum computing, BQP, the class of problems reliably solvable on a quantum computer in polynomial time (that is, time bounded above by a polynomial function of the length of the input), is a subset of PSPACE, the class of problems solvable on a classical computer using memory bounded by a polynomial in the length of the input. Our understanding of quantum mechanics would have to be very far off for this to be wrong.

Second, if our understanding of quantum mechanics is correct, there's a fundamentally random aspect to the laws of physics. Thus, we can't simply make a simulation and advance it ahead the way they do in this story and expect to get the same result.

Even if everything in the story were correct, I'm not at all convinced that things would settle down into a stable sequence as they do here. If your universe is infinite then the number of possible worlds is infinite, so there's no reason you couldn't have a wandering sequence of worlds. Edit: Or, for that matter, branches, if people simulate additional worlds with other laws of physics, or the same laws but different starting conditions.

Comment author: ocr-fork 07 June 2010 04:16:09PM *  4 points [-]

First, the notion that a quantum computer would have infinite processing capability is incorrect... Second, if our understanding of quantum mechanics is correct

It isn't. They can simulate a world where quantum computers have infinite power because they live in a world where quantum computers have infinite power, because...

Comment author: JoshuaZ 07 June 2010 04:23:35PM 4 points [-]

Ok, but in that case, that world in question almost certainly can't be our world. We'd have to have deep misunderstandings about the rules for this universe. Such a universe might be self-consistent but it isn't our universe.

Comment author: ocr-fork 07 June 2010 04:49:52PM 4 points [-]

Of course. It's fiction.

Comment author: JoshuaZ 07 June 2010 04:59:23PM *  3 points [-]

What I mean is that this isn't a type of fiction that could plausibly occur in our universe. In contrast, there's nothing in the central premises of, say, Blindsight that, as far as we know, would prevent the story from taking place. The central premise here is one that doesn't work in our universe.

Comment author: Blueberry 07 June 2010 05:06:26PM 1 point [-]

Well, it does suggest they've made recent discoveries that changed the way they understood the laws of physics, which could happen in our world.

Comment author: jimrandomh 07 June 2010 10:04:42PM *  3 points [-]

The likely impossibility of getting infinite computational power is a problem, but quantum nondeterminism and quantum branching don't prevent using the trick described in the story; they just make it more difficult. You don't have to identify one unique universe that you're in, just a set of universes that includes it. Given an infinitely fast computer with infinite storage, and source code to the universe which follows quantum branching rules, you can get root powers by the following procedure:

Write a function to detect a particular arrangement of atoms with very high information content - enough that it probably doesn't appear by accident anywhere in the universe. A few terabytes encoded as iron atoms present or absent at spots on a substrate, for example. Construct that same arrangement of atoms in the physical world. Then run a program that implements the regular laws of physics, except that wherever it detects that exact arrangement of atoms, it deletes them and puts a magical item, written into the modified laws of physics, in their place.

The only caveat to this method (other than requiring an impossible computer) is that it also modifies other worlds, and other places within the same world, in the same way. If the magical item created is programmable (as it should be), then every possible program will be run on it somewhere, including programs that destroy everything in range, so there will need to be some range limit.
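The procedure above can be sketched with the "universe" reduced to a byte string and the improbable atom arrangement reduced to a fixed high-entropy marker. Everything here is a stand-in chosen for illustration: `physics_step` is a placeholder for a real simulator, and the marker/item values are invented.

```python
# A long, specific pattern standing in for the improbable atom arrangement.
MARKER = bytes(range(256)) * 4
# The "magical item" written into the modified laws in the marker's place.
MAGIC_ITEM = b"\x00" * len(MARKER)

def physics_step(state: bytes) -> bytes:
    """Placeholder for one tick of the ordinary laws of physics."""
    return state  # identity here; a real simulator would evolve the state

def modified_physics_step(state: bytes) -> bytes:
    state = physics_step(state)
    # Wherever the exact marker occurs, delete it and splice in the item.
    return state.replace(MARKER, MAGIC_ITEM)

universe = b"ordinary matter " + MARKER + b" more matter"
universe = modified_physics_step(universe)
```

The range-limit caveat is visible here too: `replace` acts on *every* occurrence of the marker, anywhere in the state, which is why the same substitution happens in every world and every location that contains the arrangement.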

Comment author: Houshalter 07 June 2010 07:29:24PM 2 points [-]

Couldn't they just run the simulation to its end, rather than let it sit there and take the chance that it could accidentally be destroyed? If it's infinitely powerful, it would be able to do that.

Comment author: ocr-fork 08 June 2010 12:48:16AM 2 points [-]

Then they miss their chance to control reality. They could make a shield out of black cubes.

Comment author: Baughn 08 June 2010 12:20:43PM 0 points [-]

They could program in an indestructible control console, with appropriate safeguards, then run the program to its conclusion. Much safer.

That's probably weeks of work, though, and they've only had one day so far. Hmm, I do hope they have a good UPS.

Comment author: Houshalter 08 June 2010 01:24:03AM *  0 points [-]

Why would they make a shield out of black cubes, of all things? But yeah, I do see your point. Then again, once you have an infinitely powerful computer, you can do anything. Plus, even if they ran the simulation to its end, they could always restart the simulation and advance it to the present time again, hence regaining the ability to control reality.

Comment author: ocr-fork 08 June 2010 01:32:09AM 1 point [-]

Then it would be someone else's reality, not theirs. They can't be inside two simulations at once.

Comment author: cousin_it 08 June 2010 04:53:05PM *  1 point [-]

But what if two groups had built such computers independently? The story is making less and less sense to me.

Comment author: ocr-fork 08 June 2010 06:25:05PM *  2 points [-]

Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level 557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion. Level 557 will seem frozen in relation to 558, because they are busy running 558 to its conclusion. Level 557 will stay frozen until 558 dies.

558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558 is not at the same point in time as 559, so 558 won't mirror the new 559's actions. For example, they might be too lazy to make another cube. New 559 diverges from old 559. Old 559 ran 560 to its conclusion, just like 558 ran them to their conclusion, but new 559 might decide to do something different to new 560. 560 also diverges. Keep in mind that every level can see and control every lower level, not just the next one. Also, 557 and everything above is still frozen.

So that's why restarting the simulation shouldn't work.

But what if two groups had built such computers independently? The story is making less and less sense to me.

Then instead of a stack, you have a binary tree.

Your level runs two simulations, A and B. A-World contains its own copies of A and B, as does B-world. You create a cube in A-World and a cube appears in you world. Now you know you are an A-world. You can use similar techniques to discover that you are an A-World inside a B-World inside another B-World.... The worlds start to diverge as soon as they build up their identities. Unless you can convince all of them to stop differentiating themselves and cooperate, everybody will probably end up killing each other.

You can avoid this by always doing the same thing to A and B. Then everything behaves like an ordinary stack.
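The stack-versus-tree picture above can be sketched by giving each world an "address": a path over {'A', 'B'} recording which simulation it is at each level. The names below are invented for illustration.

```python
def children(address):
    """The two child worlds a given world simulates."""
    return address + "A", address + "B"

# The cube trick reveals your own address one letter at a time.
# "An A-World inside a B-World inside another B-World" is, read from the
# top down, the address "BBA":
top = ""
a_world, b_world = children(top)      # ("A", "B")
depth2 = children(b_world)            # ("BA", "BB")
my_address = children(depth2[1])[0]   # "BBA"
```

If every world always does the same thing to its A and B children, only the length of the address matters (not its letters), and the tree collapses back into the ordinary stack of the original story; divergent treatment of A and B is exactly what makes the addresses, and hence the worlds, distinguishable.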

Comment author: cousin_it 08 June 2010 07:50:03PM *  1 point [-]

Yeah, but would a binary tree of simulated worlds "converge" as we go deeper and deeper? In fact it's not even obvious to me that a stack of worlds would "converge": it could hit an attractor with period N where N>1, or do something even more funky. And now, a binary tree? Who knows what it'll do?

Comment author: ocr-fork 08 June 2010 08:32:24PM 1 point [-]

In fact it's not even obvious to me that a stack of worlds would "converge": it could hit an attractor with period N where N>1, or do something even more funky.

I'm convinced it would never converge, and even if it did, I would expect it to converge on something more interesting and elegant, like a cellular automaton. I have no idea what a binary tree system would do unless none of the worlds break the symmetry between A and B. In that case it would behave like a stack, and the story assumes stacks can converge.

Comment author: Blueberry 07 June 2010 07:45:53PM 1 point [-]

They could just turn it off. If they turned off the simulation, the only layer to exist would be the topmost layer. Since everyone has identical copies in each layer, they wouldn't notice any change if they turned it off.

Comment author: Nisan 08 June 2010 04:58:16PM 0 points [-]

We can't be sure that there is a top layer. Maybe there are infinitely many simulations in both directions.

Comment author: JoshuaZ 07 June 2010 07:52:32PM 0 points [-]

That doesn't work. The layers are a little bit different. From the description in the story, they just gradually move to a stable configuration, so each layer will be a bit different. Moreover, even if every one of them but the top layer were identical, the top layer has now had slightly different experiences than the other layers, so turning it off will mean that genuinely different entities will no longer be around.

Comment author: Blueberry 07 June 2010 08:42:01PM 1 point [-]

I'm not sure about that. The universe is described as deterministic in the story, as you noted, and every layer starts from the Big Bang and proceeds deterministically from there. So they should all be identical. As I understood it, that business about gradually reaching a stable configuration was just a hypothesis one of the characters had.

Even if there are minor differences, note that almost everything is the same in all the universes. The quantum computer exists in all of them, for instance, as does the lab and research program that created them. The simulation only started a few days before the events in the story, so just a few days ago, there was only one layer. So any changes in the characters from turning off the simulation will be very minor. At worst, it would be like waking up and losing your memory of the last few days.

Comment author: ocr-fork 08 June 2010 12:32:58AM 1 point [-]

Why do you think deterministic worlds can only spawn simulations of themselves?

Comment author: Blueberry 08 June 2010 03:30:16AM 0 points [-]

A deterministic world could certainly simulate a different deterministic world, but only by changing the initial conditions (Big Bang) or transition rules (laws of physics). In the story, they kept things exactly the same.

Comment author: ocr-fork 08 June 2010 02:57:59PM 0 points [-]

That doesn't say anything about the top layer.

Comment author: Blueberry 08 June 2010 03:35:09PM 1 point [-]

I don't understand what you mean. Until they turn the simulation on, their world is the only layer. Once they turn it on, they make lots of copies of their layer.

Comment author: ocr-fork 08 June 2010 04:14:33PM 1 point [-]

Until they turned it on, they thought it was the only layer.

Comment author: red75 08 June 2010 09:33:09AM *  0 points [-]

I can't see any point in turning it off. Run it to the end and you will live, turn it off and "current you" will cease to exist. What can justify turning it off?

EDIT: I got it. The only choices that are effective are those made at the top level. It seems that this will be a constant source of divergence.

Comment author: Blueberry 08 June 2010 03:35:56PM 0 points [-]

If current you is identical with top-layer you, you won't cease to exist by turning it off, you'll just "become" top-layer you.

Comment author: NancyLebovitz 08 June 2010 08:56:11AM 0 points [-]

It's surprising that they aren't also experimenting with alternate universes, but that would be a different (and probably much longer) story.

Comment author: JoshuaZ 07 June 2010 09:03:49PM 0 points [-]

That's a good point. Everyone but the top layer will be identical and the top layer will then only diverge by a few seconds.

Comment author: Houshalter 07 June 2010 07:56:45PM 0 points [-]

But they would cease to exist. If they ran it to its end, then it's over; they could just turn it off then. I mean, if you want to cease to exist, fine, but otherwise there's no reason to. Plus, the topmost layer is likely very, very different from the layers underneath it. In the story, it says that the differences eventually stabilized and created them, but who knows what it was originally. In other words, there's no guarantee that you even exist outside the simulation, so by turning it off you could be destroying the only version of yourself that exists.