All of PaulUK's Comments + Replies

PaulUK40

"Are the subsequent experiences of the copies "mine" relative to this self? If so, then it is certain that "I" will experience both drawing a red ball and drawing a blue ball, and the question seems meaningless. I feel that I may be missing a simple counter-example here."

No. Assume you have already been copied and you know you are one of the software versions. (Some proof of this has been provided). What you don't know is whether you are in a red ball simulation or a blue ball simulation. You do know that there are a lot of (i... (read more)

0loqi
Ah, this does more precisely address the issue. However, I don't think it changes my inconclusive response. As my subjective experiences are still identical up until the ball is drawn, I don't identify exclusively with either substrate and still anticipate a future where "I" experience both possibilities. If this is accepted, it seems to rule out the concept of identity altogether, except as excruciatingly defined over specific physical states, with no reliance on a more general principle.

Maybe sometimes, but not always. The digital interpretation can come into the picture if the mind in question is capable of observing a digital interpretation of its own substrate. This relies on the same sort of assumption as my previous example involving self-observability.

I'm not sure if we're thinking of the same mess. It seems to me the mess arises from the assumptions necessary to invoke probability, but I'm willing to be convinced of the validity of a probabilistic resolution.

They do seem similar. The major difference I see is that quantum suicide (or its dust analogue, Paul Durham running a lone copy and then shutting it down) produces near-certainty in the existence of an environment you once inhabited, but no longer do. Shutting down extra copies with identical subjective environments produces no similar outcome. The only difference it makes is that you can find fewer encodings of yourself in your environment.

The visitor scenario seems isomorphic to the red ball scenario. Both outcomes are guaranteed to occur.

No, I was pointing out the only example I could synthesize where substrate dependence made sense to me. A reclusive AI or isolated brain simulation by definition doesn't have access to the environment containing its substrate, so I can't see what substrate dependence even means for them.

I don't think I followed this. Doesn't any definition of the idea of physical existence mandate a physical reality? I still don't see where you get statistics out of uni
PaulUK10

"As long as the simulations are identical and interact identically (from the simulation's point of view) with the external world, I don't think the above question is meaningful. A mind doesn't have a geographical location, only implementations of it embedded in a coordinate space do. So A, B, and C are not disjoint possibilities, which means probability mass isn't split between them."

I dealt with this objection in the second article of the series. It would be easy to say that there are two simulations, in which slightly different things are going... (read more)

0loqi
While this is also a valid and interesting scenario to consider, I don't think it "deals with the objection". The idea that "which computer am I running on?" is a meaningful question for someone whose experiences have multiple encodings in an environment seems pretty central to the discussion.

I actually don't have a good answer to this, and the flavor of my confusion leads me to suspect the definitions involved. I think the word "you" in this context denotes something of an unnatural category. To consider the question of anticipating different experiences, I have to assume a specific self exists prior to copying. Are the subsequent experiences of the copies "mine" relative to this self? If so, then it is certain that "I" will experience both drawing a red ball and drawing a blue ball, and the question seems meaningless. I feel that I may be missing a simple counter-example here.

50/50 makes sense to me only insofar as it represents a default state of belief about a pair of mutually exclusive possibilities in the absence of any relevant information, but the exclusivity troubles me.

I read objection 9, and I'm not bothered by the "strange" conclusion of sensitivity to minor alterations (perhaps this leads to contradictions elsewhere that I haven't perceived?).

I agree that counting algorithms is just a dressed-up version of counting machines, because the entire question is predicated on the algorithms being subjectively isomorphic (they're only different in that some underlying physical or virtual machine is behaving differently to encode the same experience). Of course, this leads to the problem of interpretation, which suggests to me that "information" and "algorithm" may be ill-defined concepts except in terms of one another.

This is why I think I/O is important, because a mind may depend on a subjective environment to function. If this is the case, removal of the environment is basically removal of the mind. A mind of this sort, subjectively dependent on its own s
PaulUK20

"Less measure" is only meant to be of significance statistically, not subjectively. For example, if you could exist in one of two ways, one with measure X and one with measure of 0.001X, I would say you should think it more likely you are in the first situation. In other words, I am agreeing (if you are arguing for this) that there should be no subjective difference for the mind in the extreme situation. I just think we should think that that situation corresponds to "less" observers in some way.

My own argument is actually a justificati... (read more)
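To make the statistical claim above concrete, here is a minimal sketch, assuming the two measures X and 0.001X are simply normalised into relative probabilities (the normalisation is an illustrative assumption, not a formalism given in the comment):

    # Weight self-locating belief by relative measure (illustrative sketch).
    X = 1.0
    measures = {"first way (measure X)": X, "second way (measure 0.001X)": 0.001 * X}
    total = sum(measures.values())
    for name, m in measures.items():
        print(f"P({name}) = {m / total:.4f}")
    # -> P(first way)  ~ 0.9990
    # -> P(second way) ~ 0.0010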

2loqi
This seems tautological to me. Your measure needs to be defined relative to a given set of observers. More ways for who to find you?

Very interesting piece. I'll be thinking about the Mars colony scenario for a while. I do have a couple of immediate responses.

As long as the simulations are identical and interact identically (from the simulation's point of view) with the external world, I don't think the above question is meaningful. A mind doesn't have a geographical location, only implementations of it embedded in a coordinate space do. So A, B, and C are not disjoint possibilities, which means probability mass isn't split between them.

I see this the other way around. The more redundancy in a particular implementation, the more encodings of your own experiences you will expect to find embedded within your accessible reality, assuming you have causal access to the implementation-space. If you are causally disconnected from your implementation (e.g., run on hypothetical tamper-proof hardware without access to I/O), do you exist with measure zero? If you share your virtual environment with millions of other simulated minds with whom you can interact, do they all still exist with measure zero?
PaulUK10

I think Max Tegmark made an argument for that - and I find it more convincing.

PaulUK20

But I wasn't trying to argue that "low level" does mean "close to the machine". That, however, is a way it is often expressed. I merely listed it as one idea of "low level". If I had not mentioned it in the article, someone would simply have said "A low level language is close to the machine" and thought that dealt with it, so I had to deal with it for completeness. I was not saying that "low level" as "close to the machine" was a formal, official idea - and I actually argued that it isn't. I... (read more)

1[anonymous]
I didn't think, and didn't mean to imply that I thought, you were. I mentioned it for the same reason you did: to help describe my meaning of "low level" by its connection to something related.

I don't think that's what you're really after. When you describe what you want, it sounds like a language that is prejudiced against describing things that are complicated in reality, so the complexity of the description matches the complexity of the reality. It's not just a semantic problem that you're calling it "low level." "Low level" means it's far from how humans think, which tends to remove human prejudice. You call it "low level" because you think you can find it by removing prejudice. You actually need to switch from one prejudice to another to get what you want.

(Also, thanks for the reply. Sorry I didn't read the whole thing, but I got to the list of methods you had rejected, and it was just too much. It feels a lot longer to someone who thinks the basic idea behind all the methods is off base.)
PaulUK30

I think it is possible and inevitable (though I am unsure of the timescale). I think it has some risks and would need extreme caution. Those risks are understated by some people, who make false analogies between simple systems and systems with minds: minds that may be designed to adapt to an environment, that may be given simple goals, and that may construct more sophisticated goals to satisfy them, goals which humans may not even have specified. But I don't think these risks are avoided in any way if only dangerous people are left to do it. I would also say: please don't view me as authoritative in any way on this.

PaulUK50

This is an approach I considered back in 1990-something, actually, and at the time I considered it correct. I get the idea: we say that the "finding algorithm" somehow detracts from what is running. The problem is that this does not leave a clearly defined algorithm as the one being found. If X is found by F, you might say that all that runs is a "partial version of X" and that X only exists when found by F. This, however, would not just apply to deeply hidden algorithms. I could equally well apply it to your brain. I would have to run ... (read more)

1loqi
I'm running into trouble with the concept of "existence" as it's being applied here. Surely existence of abstract information and processes must be relative to a chosen reference frame? The "possible algorithms" need to be specified relative to a chosen data set and initial condition, like "observable physical properties of Searle's wall given sufficient locality". Clearly an observer outside of our light cone couldn't discern anything about the wall, regardless of algorithm. An encrypted mind "existing less" doesn't seem to carry any subjective consequences for the mind itself. What if a mind encrypts itself but shares the key with a few others? Wouldn't its "existence" depend on whether or not the reference frame has access to the key? If you've read it, I'm curious to know what you think of the "dust hypothesis" from Egan's Permutation City in this context.
PaulUK50

Either name is fine (since it is hardly a secret who I am here).

Yes, I see the problem, but this was very much in my mind when I wrote all this. I could hardly have missed the issue. I would have to accept it or deny it, and in fact I considered it a great deal. It is the first thing you would need to consider. I still maintain that there is nothing special about this algorithm length. I actually think your practical example of buying the computer, if anything, counts against it. Suppose you sold me a computer and it "allegedly" ran a program 10^2... (read more)

PaulUK70

I am just someone who has an interest in these issues, Carl, and they are all written in a private capacity: I am not, for example, anyone who works at a university. I have worked as a programmer, and as a teacher of computing, in the past. I think machineslikeus describes me as an "independent researcher" or something like that... which means, I suppose, that I write articles.

3CarlShulman
Thanks for enriching the infosphere with some nice work, Paul.
PaulUK10

I hope it is okay for me to reply to all these. Right, yes, that is my position, Steven. When the interpreter algorithm length hits the length of the algorithm it is finding, nothing of any import happens. Would we seriously say, for example, that a mind corresponding to a 10^21 bit computer program would be fine, and enjoying a conscious existence, if it was "findable" by a 10^21 bit program, but would suddenly cease to exist if it was findable only by a 10^21+1 bit program? I would say no. However, I can understand that this is how people tend to see it: for some reason, the point at which one algorithmic length exceeds the other is the point at which people think things are going too far.

0SilasBarta
Thanks for joining the discussion, PaulUK/Paul Almond. (I'll refer to you with the former.)

Well, then I'm going to apply Occam's razor back onto this. If you require a 10^21+1 bit program to extract a known 10^21 bit program, we should prefer the explanation:

a) "You wrote a program one bit too long."

rather than

b) "You found a naturally occurring instance of a 10^21 bit algorithm that just happens to need a 10^21+1 bit algorithm in order to map it to the known 10^21 bit algorithm."

See the problem? The whole point of explaining a phenomenon as implementing an algorithm is that, given the phenomenon, we don't need to do the whole algorithm separately. What if I sold you a "computer" with the proviso that "you have to manually check each answer it gives you"?
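One rough way to see the force of this point is to ask how much of the algorithm still has to be specified once the phenomenon is already in hand. The sketch below uses zlib compression as a crude stand-in for description length; that proxy, and the toy byte strings, are assumptions for illustration, not anything proposed in the thread:

    import zlib

    def description_length(data: bytes) -> int:
        # Compressed size, used here as a crude proxy for description length.
        return len(zlib.compress(data, 9))

    def extra_cost(target: bytes, given: bytes) -> int:
        # Rough proxy for how much of `target` must still be written down
        # once `given` is already available.
        return max(description_length(given + target) - description_length(given), 0)

    program   = bytes(range(256)) * 40   # stand-in for the program we care about
    wall      = bytes([17]) * 10_000     # a featureless "wall"
    recording = program * 2              # a phenomenon that genuinely encodes the program

    print(description_length(program))         # cost of specifying the program outright
    print(extra_cost(program, recording))      # noticeably smaller: the phenomenon does most of the work
    print(extra_cost(program, wall))           # roughly the full cost: the "finder" is doing the work

If the extra cost given the phenomenon is about as large as the cost of specifying the program outright, the phenomenon is not really implementing it; that is the "program one bit too long" case above.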
PaulUK40

As the author of this article, I will reply to this, though it is hard to make much of a reply here. (I actually got here out of curiosity when I saw the site logs.) I am, however, always pleased to discuss issues like this with people. One issue with this reply is that it is not just randomness we have to worry about. If we are basing a computational interpretation on randomness, yes, we may need to make the computational interpretation progressively more extreme, but Searle's famous example of WordStar running in a wall is just one case. We may n... (read more)

4SilasBarta
We would draw the line where our good old friend mutual information comes in. If learning the results of the other phenomenon tells you something about the results of the algorithm you want to run, then there is mutual information, and the phenomenon counts as a (partial) implementation of the algorithm.
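A toy version of this mutual-information criterion, where the particular sequences and the plug-in estimator are assumptions chosen purely for illustration:

    import math
    from collections import Counter

    def mutual_information(xs, ys):
        # Plug-in estimate of I(X;Y) in bits from paired observations.
        n = len(xs)
        joint = Counter(zip(xs, ys))
        px, py = Counter(xs), Counter(ys)
        mi = 0.0
        for (x, y), c in joint.items():
            p_xy = c / n
            mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
        return mi

    algorithm_out = [i % 2 for i in range(1000)]                  # results of the algorithm we want to run
    phenomenon_a  = [i % 2 for i in range(1000)]                  # phenomenon that tracks those results
    phenomenon_b  = [((i * 7 + 3) % 5) % 2 for i in range(1000)]  # unrelated periodic pattern

    print(mutual_information(phenomenon_a, algorithm_out))   # ~1 bit: counts as a (partial) implementation
    print(mutual_information(phenomenon_b, algorithm_out))   # ~0 bits: tells you nothing about the algorithm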
PaulUK60

I do not find the article on 3D space terribly convincing either - and I am the author of it - so I would have to be understanding if you don't. It is generally my policy, though, that my articles reflect how I think of things at the time I wrote them and I don't remove them if my views change - though I might occasionally add notes after. I do think that an anthropic explanation still works for this: I just don't think mine was a particularly good one.

0timtyler
It's a difficult topic. Life (e.g. self-replicating CAs) exists fine in 2, 3 and 4 dimensions, though there is still the issue of evolving intelligence. Some say that three is the only number of dimensions that permits you to tie knots, though the significance of knots is unclear. I am not convinced that 3 is terribly special - and I'm not sure we know enough about physics and biology to coherently address the issue yet.