PaulUK

"As long as the simulations are identical and interact identically (from the simulation's point of view) with the external world, I don't think the above question is meaningful. A mind doesn't have a geographical location, only implementations of it embedded in a coordinate space do. So A, B, and C are not disjoint possibilities, which means probability mass isn't split between them."
I dealt with this objection in the second article of the series. It would be easy to say that there are two simulations, in which slightly different things are going to happen. For example, we could have one simulation in which you are going to see a red ball when you... (read more)
"Less measure" is only meant to be of significance statistically, not subjectively. For example, if you could exist in one of two ways, one with measure X and one with measure of 0.001X, I would say you should think it more likely you are in the first situation. In other words, I am agreeing (if you are arguing for this) that there should be no subjective difference for the mind in the extreme situation. I just think we should think that that situation corresponds to "less" observers in some way.
My own argument is actually a justification of something a bit like the dust hypothesis in "Permutation City". However, there are some significant... (read more)
I think Max Tegmark made an argument for that - and I find it more convincing.
But I wasn't trying to argue that "low level" does mean "close to the machine". That, however, is a way it is often expressed. I merely listed it as one idea of "low level". If I had not mentioned it in the article, someone would simply have said "A low level language is close to the machine" and thought that dealt with it, so I had to address it for completeness. I was not saying that "low level" as "close to the machine" is a formal, official idea - in fact, I argued that it isn't. I was after a language which is, as far as possible, free from prejudice... (read more)
I think it is possible and inevitable (though I am unsure of the timescale). I think it has some risks - understated, I think, by some people who draw false analogies between simple systems and systems with minds: minds which may be designed to adapt to an environment, which may be given simple goals, and which may construct more sophisticated goals to satisfy those, goals humans may not even have specified - and that it would need extreme caution. But I don't think these risks are avoided in any way if only dangerous people are left to do it. I would also say - please don't view me as authoritative in any way on this.
This is an approach I considered back in 1990-something, actually, and at the time I considered it correct. I get the idea. We say that the "finding algorithm" somehow detracts from what is running. The problem is that this does not leave a clearly defined algorithm as the one being found. If X is found by F, you might say that all that runs is a "partial version of X" and that X only exists when found by F. This, however, would not apply just to deeply hidden algorithms. I could equally well apply it to your brain. I would have to run some sort of algorithm, F, on your brain to work... (read 671 more words →)
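To make the worry concrete, here is a toy sketch in Python (entirely my own construction for illustration; F and the substrate are hypothetical): a "finding algorithm" F reads states of some substrate and reports states of X, and the question is whether X runs in the substrate or only in the composition of substrate and F.

    # Toy sketch of a "finding algorithm" F (all names hypothetical).
    # substrate_history: raw states of some system, encoded as integers.
    # F: a decoder claiming to locate the states of an algorithm X in them.

    def F(raw_state):
        # A trivially simple decoder: read off the low byte as X's state.
        # A sufficiently contrived F could "find" any X at all - hence
        # the temptation to say only a "partial version of X" is running.
        return raw_state & 0xFF

    substrate_history = [0x1203, 0x5A07, 0x99FF]
    x_history = [F(s) for s in substrate_history]  # states attributed to X

    print(x_history)  # [3, 7, 255] - X's run, or only (substrate + F)'s?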
Either name is fine (since it is hardly a secret who I am here).
Yes, I see the problem, but this was very much on my mind when I wrote all this. I could hardly have missed the issue. I would have to accept it or deny it, and in fact I considered it a great deal. It is the first thing you would need to consider. I still maintain that there is nothing special about this algorithm length. I actually think your practical example of buying the computer, if anything, counts against it. Suppose you sold me a computer and it "allegedly" ran a program 10^21 bits long, but I had to... (read more)
I am just someone who has an interest in these issues, Carl, and they are all written in a private capacity: I am not, for example, anyone who works at a university. I have worked as a programmer, and as a teacher of computing, in the past. I think machineslikeus describes me as an "independent researcher" or something like that... which means, I suppose, that I write articles.
I hope it is okay for me to reply to all these. Right, yes, that is my position, steven. When the interpreter algorithm's length hits the length of the algorithm it is finding, nothing of any import happens. Would we seriously say, for example, that a mind corresponding to a 10^21 bit computer program would be fine, enjoying a conscious existence, if it was findable by a 10^21 bit program, but would suddenly cease to exist if it was findable only by a 10^21+1 bit program? I would say no. However, I can understand that this is how people tend to see it. For some reason, the point at which one algorithmic length exceeds the other is the point at which people think things are going too far.
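A small illustration of why I say nothing of import happens at that point. Suppose, purely as an assumption for illustration, that finders are discounted smoothly by length, Solomonoff-style, with weight 2^-L for an L-bit finder; then the step from a 10^21 bit finder to a 10^21+1 bit one is just another halving, exactly like every other extra bit:

    # Illustration under an assumed Solomonoff-style weighting (my assumption
    # for the sake of argument): an L-bit finder gets weight 2**-L, rather
    # than full weight below some threshold and zero above it.
    from fractions import Fraction

    def weight(L):
        return Fraction(1, 2**L)

    # The ratio between adjacent lengths is the same everywhere, so the
    # point where the finder's length passes the found program's length is
    # in no way distinguished.
    for L in [10, 11, 100, 101]:
        print(L, weight(L) / weight(L + 1))  # always 2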
"Are the subsequent experiences of the copies "mine" relative to this self? If so, then it is certain that "I" will experience both drawing a red ball and drawing a blue ball, and the question seems meaningless. I feel that I may be missing a simple counter-example here."
No. Assume you have already been copied and you know you are one of the software versions. (Some proof of this has been provided.) What you don't know is whether you are in a red ball simulation or a blue ball simulation. You do know that there are a lot of (identical - in the digital sense) red ball simulations and one blue ball simulation... (read 863 more words →)
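For concreteness, a quick sketch of the counting argument in Python (N is a made-up figure standing in for "a lot of" red ball simulations):

    # Sketch of the counting argument; N is a made-up figure.
    N = 1000                   # digitally identical red ball simulations
    blue = 1                   # the single blue ball simulation

    p_red = N / (N + blue)     # credence you are in a red ball simulation
    p_blue = blue / (N + blue)

    print(p_red)    # ~0.999
    print(p_blue)   # ~0.001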