Permutation City is an awesome novel from 1994.  Even though the author, Greg Egan, used a caricature of this community as a bad guy in a more recent novel, his work is still a major influence on many people around these parts who have read it.  It dissolves so many questions around uploading and simulation that it's hard for someone who has read the book to talk about simulationist metaphysics without wanting to reference the novel... but doing that runs into constraints imposed by spoiler etiquette.

So go read Permutation City if you haven't already: it's philosophically important and a reasonably fun read.

In the meantime, if you haven't already, you should also read A Fire Upon The Deep by Vernor Vinge (of "singularity"-coining fame), and then read Eliezer's fan fic The Finale of the Ultimate Meta Mega Crossover, which references both works in interesting ways to make substantive philosophical points and doesn't take too long to read.

In the comments below there will be discussion that has spoilers for all three works.


At the risk of being shunned as something of a heretic here, I have to admit to not having cared too much for Permutation City. It had some lovely ideas, but its characters seemed too constrained by the exigencies of plot and setting, and never quite came alive for me.

I loved Fire Upon the Deep, though.

Also A Deepness in the Sky. It's not particularly about uploading, but it is a good visceral introduction to just how much benefit even a marginal increase in intelligence can provide. It's also helpful to read if you're going to get some of the crossover references in FUtD.

I felt the same way. I feel the same way about a lot of science fiction - interesting ideas, often worth reading for the ideas alone, but falls flat on plot, or characters, or writing, or all of the above.

With Permutation City I got the sense that he was trying hard to make his characters 3-dimensional, but it didn't work for me. [SPOILER WARNING] For example, one supporting character spent most of the novel trying to overcome the guilt of murdering a prostitute. The idea was promising, but the execution was irritating.

(In fact, I have a theory that some popular works of genre fiction - I would include thrillers and romances as well as sci-fi - are popular because of their flaws. For example, when reading The Da Vinci Code, you don't have to worry about any interesting characters or beautiful prose distracting you from the puzzles and conspiracies.)

I so rarely encounter good characters even in non-sci-fi that the book had better be based on a damn interesting premise or it will be a total waste.

For example, one supporting character spent most of the novel

Yeah, I felt like that character and all his scenes could have been cut entirely without damaging the book.

Yeah. Other than Inoshiro from Diaspora, Egan's stories have stayed with me because of the philosophy and scientific imagination, not because I felt strongly about the characters.

What did you make of the Dust Hypothesis? It appeared to offer a vivid demonstration of the most extreme form of the substrate independence thesis possible while still having an actual substrate that a person can point to and say "This implements or records a simulation of X"... but it smelled fishy to me for roughly the same reasons that I reject Searle's Chinese Room argument, and I'm curious whether I was alone in this.

The idea that a consciousness can exist within an alternate reading frame on a system that is not conscious in my own frame, as Peer exists within the City, does significant violence to my intuitions about consciousness.

The idea that the alternate frame can be temporally discontiguous with my own... that is, that events A and B can occur in both frames but in a different order... does additional violence to my intuitions about time.

That said, I have no reason to expect my intuitions about consciousness or time to reflect the way the universe actually is. (Of course, that doesn't mean any particular contradictory theory is right.)

That said, without the possibility of intentional causal interaction with such alternate-frame consciousnesses, I'm indifferent to them: I can't see any reason why I should care whether it's true. I feel more or less the same way as I do about the possibility of epiphenomenal spirits, or epiphenomenal Everett branches: if they are in principle unable to interact causally with me, if no observation I can ever make will go differently based on their existence or nonexistence, then I simply don't care whether they exist or not.

I don't endorse that apathy, though. It mostly comes out of a motivational psychology in which believing that future events are significantly influenced by my actions is important to motivating those actions, and I don't especially endorse that sort of psychology, despite instantiating one.

I don't see the connection to Searle's CR.

The initial few 'thought experiments' in Permutation City cheat the same way the Chinese Room does. A program capable of simulating "Durham having just finished counting to 7 and about to say 8" must have, in some way, already simulated Durham counting to 7. Similarly, Searle's Giant Lookup Table must have come into being somehow.

You could make a similar case that choosing the right permutation of dust to create a universe requires complete knowledge of that universe. In this case, that knowledge is coming from the author.

Ah, I see. Sure, agreed.

Though I guess a lot depends on whether the computation* of Durham counting to 8 requires the computation of Durham being aware of having counted to 7. If it doesn't, then the program can produce the following sequence: x7 = Durham->countTo(7); x8 = Durham->countTo(8); Durham->awareOf(x8); Durham->awareOf(x7); with the result that Durham goes "8, 7" without any cheating.
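A minimal Python sketch of that sequence (the `Durham` class, `count_to`, and `aware_of` are hypothetical stand-ins for the pseudocode above, not anything from the novel), showing how the order of computation can differ from the order in which the awareness events are applied:

```python
class Durham:
    def __init__(self):
        # subjective log: numbers in the order awareness events are applied
        self.experienced = []

    def count_to(self, n):
        # compute the state "having counted to n" without yet making
        # Durham aware of it
        return list(range(1, n + 1))

    def aware_of(self, state):
        # apply the awareness event: the last number counted enters experience
        self.experienced.append(state[-1])

d = Durham()
x7 = d.count_to(7)
x8 = d.count_to(8)
d.aware_of(x8)
d.aware_of(x7)
print(d.experienced)  # [8, 7]: computed as 7-then-8, experienced as "8, 7"
```

Here the computation of counting to 8 does not depend on the awareness of having counted to 7, which is exactly the condition under which the reordering involves no cheating.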

The question of whether Durham experiences "7,8" or "8,7" is less clear, though.

  • I'm more comfortable talking about computation rather than simulation here, because I'm not at all convinced that there's any difference between a real counting-to-7 and a simulated counting-to-7. I don't think the distinction actually matters in this context though.

With thanks to HonoreDB, yes, the structure must have a source. And also, as with the Chinese Room, there is a sleight-of-concept going on where something that looks like a human (Searle's paper manipulator and Egan's Durham) is not the actual "brains" of the system (which are really the symbol manipulation rules with Searle, or the dust/translator combination with Egan) that we're truly analyzing.

I agree with you that if there is no stateful process to worry about, but merely the instantiation of a trivially predictable "movie-like image of counting the number 8", then the dust hypothesis might make sense... but I suspect that very few of the phenomena we care about are like this, nor do I think that such phenomena are going to be interesting to us post-uploading. I can't fully and succinctly explain the intuition I have here, but the core of the objection is connected to reversible computing, computational irreducibility, and their relation to entropy and hence to the expenditure of energy.

From these inspirations, it seems likely to me that "the dust" can only be said to contain structure that I care about if the energy used to identify/extract/observe that structure is less than what an optimally efficient computational process would have required to invent that structure from scratch. Thus, there is probably a mechanically rigorous way to distinguish between hearing a sound and imagining that same sound, growing out of the way that hearing requires fewer joules than imagining. If a dust-interpretation system requires too much energy, I would guess either that it is mediating a scientifically astonishing real signal (in a grossly inefficient way)... or that you're dealing with a sort of Clever Hans effect, where the interpretation system plus its battery is the real source of the "detected patterns", not the dust.

Using this vocabulary to speak directly to the issues raised in the article on strong substrate independence, the problem with other quantum narratives (or the bits of platospace mathematicians spend their time "exploring") is that the laws of physical computation seem such that our brains can never hear anything from those "places", our brains can only imagine them.

Yes, that seems like a reasonable way to state more rigorously the distinction between systems I might care about and systems I categorically don't care about.

Though, thinking about Permutation City a bit more... we, as readers of the novel, have access to the frame in which Peer's consciousness manifests. The residents of PC don't have access to it; Peer is no easier for them to access than the infinite number of other consciousnesses they could in principle "detect" within their architecture.

So we care about Peer, and they don't, and neither of us cares about the infinite number of Peer's peers. Makes sense.

But there is a difference: their history includes the programming exploit that created the space in which Peer exists, and the events that led to Peer existing within it. One can imagine a resident of PC finding those design notes and building a gadget based on them to encounter Peer, and this would not require implausible amounts of either energy or luck.

And I guess the existence of those design notes would make me care more about Peer than about his peers, were I a resident of PC... which is exactly what I'd predict from this theory.

OK, then.

Searle's Giant Lookup Table must have come into being somehow.

That's not due to Searle - you're talking about Ned Block's "Blockhead".

Hm. The Chinese Room seems to be different in my head than on wikipedia. I guess I assumed that writing a book that covers all possible inputs convincingly would necessarily involve lots of brute force.

Well, the man in the Chinese Room is supposed to be manually 'stepping through' an algorithm that can respond intelligently to questions in Chinese. He's not necessarily just "matching up" inputs with outputs, although Searle wants you to think that he may as well just be doing that.

Searle seems to have very little appreciation of how complicated his program would have to be, though to be fair, his intuitions were shaped by chatbots like Eliza.

Anyway, the "Systems Reply" is correct (hurrah - we have a philosophical "result"). Even those philosophers who think this is in some way controversial ought to agree that it's irrelevant whether the man in the room understands Chinese, because he is analogous to the CPU, not the program.

Therefore, his thought experiment has zero value - if you can imagine a conscious machine then you can imagine the "Systems Reply" being correct, and if you can't, you can't.

Searle is an idiot; the nebulous "understanding" he talks about in the original paper is obviously informationally contained in the algorithm. The degree to which someone believes that "understanding" can't be contained in an algorithm is the degree to which they believe in dualism. Just because executing an algorithm from the inside feels like something we label understanding doesn't make it magic.

The idea that a consciousness can exist within an alternate reading frame on a system that is not conscious in my own frame, as Peer exists within the City, does significant violence to my intuitions about consciousness.

How about, instead of an opaquely described "alternate reading frame", we consider homomorphic encryption. Take some uploads in a closed environment, homomorphically encrypt the whole thing, throw away the decryption key, and then start it running. I think this matches Peer's situation in all relevant aspects: The information about the uploads exists in the ordinary computational basis (not talking Dust Theory here), and there is a short and fast program to extract it, but it's computationally intractable to find that program if you don't know the secret. The difference is that this way it's much more obvious what that secret would look like.
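A toy sketch of the setup, using a deliberately insecure additively homomorphic cipher (a modular shift; a real construction would use something like Paillier or lattice-based FHE, and the one-counter "upload" is my own trivial stand-in for a closed environment):

```python
import secrets

MOD = 2**32  # toy modulus for the cipher

def encrypt(x, key):
    # Toy additively homomorphic "encryption": E(x) = x + k (mod MOD).
    # NOT secure; it only stands in for real homomorphic encryption.
    return (x + key) % MOD

def decrypt(c, key):
    return (c - key) % MOD

def step(c):
    # One tick of the "upload", whose entire state is a single counter.
    # Because the cipher is additively homomorphic, the increment can be
    # applied directly to the ciphertext.
    return (c + 1) % MOD

key = secrets.randbelow(MOD)
c = encrypt(41, key)
assert decrypt(step(c), key) == 42  # the encrypted run tracks the plaintext run

key = None   # "throw away the decryption key"
c = step(c)  # the simulation keeps running, encrypted
```

After the key is discarded, the computation still advances tick by tick, but its contents are (for a real scheme, computationally) unrecoverable, which is the analogue of Peer's frame being inaccessible to the City's residents.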

Yeah, I basically agree, and that does less violence to my intuitions on the subject... still more evidence, were it needed, that my intuitions on the subject are unreliable.

Indeed, I'm not even sure how relevant the computational intractability of breaking the encryption is. That is, I'm not actually sure how Peer's situation is relevantly different from my own with respect to someone sitting in another building somewhere... what matters about both of them is simply that we aren't interacting with one another in any important way.

The degree to which the counterfactual story about how we might interact with one another seems plausible is relevant to my intuitions about those consciousnesses, as you say, but it doesn't seem at all relevant to anything outside of those intuitions.

If by simulationism we mean the belief that the simulation of an entity causes an instance of the entity to exist, then that is dualism. We can all agree that simulations can be carried out in media which, physically, are wildly different. Simulationism then tells us that the real entity, the one being simulated, comes to identically inhabit all those physically different simulators. And then, in Egan's novel, we have only a partial simulation: the model of Permutation City is run for just a few ticks of the clock and then turned off, but it is assumed that it will continue to exist - platonically? - because of its internal logic.

I see people trying to resolve the illogicality of this - the whole Permutation-City universe gets to exist, even though only a few moments of it get simulated - by appealing to the sort of existence that mathematical entities have. The idea is that the being of mathematical entities doesn't depend on particular instances of people talking about them or computers calculating them, they just exist independently of all that; and the same thing goes for possible worlds. But in that case, you should abandon simulationism - where, to repeat, simulationism is being defined as the belief that the act of simulation causes the simulated entity to exist locally. In this second approach to the problem inspired by mathematical platonism, the possible worlds don't owe their actuality to the fact that they get simulated somewhere, they all exist platonically and independently. So why go on thinking that the simulation of Permutation City involves Permutation City actually existing, however briefly, in the universe where the simulation is occurring?

But all that is just one symptom of the same overall situation of self-concealed ignorance, which leads so many scientifically educated people to not see the problems of "consciousness", and to not understand where this whole "qualia debate" comes from. I may literally have said it a hundred times by now: The standard contemporary scientific way of looking at consciousness is dualistic. It's not a dualism of substance, where you have ordinary matter, and then a soul as well; it's a dualism of properties. You have the physical properties, the properties that are actually present in our physical theories, and then you have everything that actually makes up experience - the flow of time, the sense of self, the basic perceived qualities of the world like color. And to see the world in terms of the science that we have right now is to just combine in your imagination the stream of experiences that you actually have, with an imagined play of atoms in space, or fluctuating quantum fields, or whatever avant-garde scientific metaphysics captures your fancy.

A switch to "mathematical" or computational platonism also does absolutely nothing to reinstate the excluded qualities of the experienced world into the official scientific ontology. If, having mulled it over, you were to decide that reality is really a set of equivalence classes of universal Turing machines, when it comes to interpreting your actual experience, you will again have to become a dualist. Only now, instead of imagining that your sensations and thoughts correspond with the flow of ions through membranes - that is, pairing in your imagination your sensations and thoughts as directly but subjectively perceived, with imagined microscopic biophysical processes - you will be imagining that they correspond to abstract state transitions in an abstract state machine. Either way, the disjunction between what is imagined to be the fundamental character of reality and what is experienced to be the character of reality, at least locally, by you - either way the disjunction remains and remains unaddressed.

Simulationist philosophy is the same exercise applied to computers, though from the other direction. Instead of starting with conscious experience and trying to identify it with a physical process, one starts with physical processes and tries to identify them or associate them with the existence of simulated entities or a simulated world.

I sometimes feel like I'm being cruel in pointing all this out, because the right answers are not known, and they are not going to be figured out by people just thinking casually about the problem. I can't tell you the big truth about reality, but I can tell you the little truth about your situation, which is that all the available maps are wrong. Quantum mechanics is the same story; I don't know the explanation of QM, but I can say that MWI is false because relativity is true, and MWI requires objective simultaneity. The ontological truth behind quantum mechanics will only be figured out by people who have extensive technical acquaintance with the subject, and most likely it will require extensive immersion in the most advanced physical theories, because that's how physics is: everything interlocks, and the deep answers are found at the highest levels. I would say the same thing about consciousness, and incidentally about the relationships between computation, consciousness, matter, and reality. The right answer is not yet known, not at all, and it will be found only by sustained and dedicated attention to fact, including "subjective facts" about the world as it is actually experienced, and not just the world as it is imagined by people who have mathematical and formalistic skills.

Interesting. Easily enough ideas here for a top-level post (certainly for the discussion area.)

I don't know the explanation of QM, but I can say that MWI is false because relativity is true, and MWI requires objective simultaneity.

Not really. I suspect that what you're referring to as "MWI" contains the idea that, in addition to a wavefunction evolving unitarily under the Schrödinger equation, there are also ontologically primitive "branches" (or "worlds") which "split". I think this is obviously wrong. (However, note that the SEP article only says that it's "unclear" how to formulate it in such a way as to be compatible with SR). "Branches" are just patterns that emerge when you zoom out to the macro-scale, in much the same way as fluids with thermodynamic attributes such as temperature and entropy only make sense at the macro-scale. In fact, there's a close connection here - the fact that branches "split" but do not "merge" and the second law of thermodynamics are two manifestations of a single underlying principle.

So why go on thinking that the simulation of Permutation City involves Permutation City actually existing, however briefly, in the universe where the simulation is occurring?

I would interpret the statement "Permutation City actually exists in universe U, which is simulating it" along the following lines: "There is a system in U whose components are causally related to one another in such a way as to be isomorphic to the primitive constituents of Permutation City and their causal relations." (Yeah yeah, at some point I might be called on to explain what I mean by "causal relations" and "primitive constituents", and these are thorny questions, but let's save them for another day.)

So for me "Permutation City actually exists in universe U, which is simulating it" means no more and no less than "Universe U is simulating Permutation City." Or perhaps clearer: once it's established that U is simulating V, there's nothing more to be said about whether V exists in U.

Of course, you won't be happy with this - you want to say (a) that there's either something it's like or nothing it's like to be a simulated human and (b) that actually there's nothing it's like - simulated people are "zombies".

I may as well give the Standard Reply from my camp, though you've heard it all before: "If 'something it's like' is interpreted in the informal everyday sense where 'access consciousness' and 'phenomenal consciousness' are not conceived of as separate, then yes absolutely there's something it's like. Moreover, to the extent that the question carries ethical 'weight', again the answer must be yes. But when you try to do fractional distillation, separating out the pure P-consciousness, and ask whether simulated people are P-conscious, then the question loses all of its meaning."