For example, if you could exist in one of two ways, one with measure X and one with measure of 0.001X, I would say you should think it more likely you are in the first situation. [...] I just think we should think that that situation corresponds to "less" observers in some way.
This seems tautological to me. Your measure needs to be defined relative to a given set of observers.
I think we should consider this in terms of measure because there are "more ways to find you" in some situations than in others.
More ways for who to find you?
If you want to see why I think measure is important, this first article may help.
Very interesting piece. I'll be thinking about the Mars colony scenario for a while. I do have a couple of immediate responses.
How likely is it that you are in Computer A, B or C?
As long as the simulations are identical and interact identically (from the simulation's point of view) with the external world, I don't think the above question is meaningful. A mind doesn't have a geographical location, only implementations of it embedded in a coordinate space do. So A, B, and C are not disjoint possibilities, which means probability mass isn't split between them.
The more redundancy in a particular implementation of a version of you, the more likely it is that that implementation is causing your experiences.
I see this the other way around. The more redundancy in a particular implementation, the more encodings of your own experiences you will expect to find embedded within your accessible reality, assuming you have causal access to the implementation-space. If you are causally disconnected from your implementation (e.g., run on hypothetical tamper-proof hardware without access to I/O), do you exist with measure zero? If you share your virtual environment with millions of other simulated minds with whom you can interact, do they all still exist with measure zero?
While this is also a valid and interesting scenario to consider, I don't think it "deals with the objection". The idea that "which computer am I running on?" is a meaningful question for someone whose experiences have multiple encodings in an environment seems pretty central to the discussion.
I actually don't have a good answer to this, and the flavor of my confusion leads me to suspect the definitions involved. I think the word "you" in this context denotes something of an unnatural category. To consider the question of anticipating different experiences, I have to assume a specific self exists prior to copying. Are the subsequent experiences of the copies "mine" relative to this self? If so, then it is certain that "I" will experience both drawing a red ball and drawing a blue ball, and the question seems meaningless. I feel that I may be missing a simple counter-example here.
50/50 makes sense to me only as far as it represents a default state of belief about a pair of mutually exclusive possibilities in the absence of any relevant information, but the exclusivity troubles me. I read objection 9, and I'm not bothered by the "strange" conclusion of sensitivity to minor alterations (perhaps this leads to contradictions elsewhere that I haven't perceived?). I agree that counting algorithms is just a dressed-up version of counting machines, because the entire question is predicated on the algorithms being subjectively isomorphic (they're only different in that some underlying physical or virtual machine is behaving differently to encode the same experience).
Of course, this leads to the problem of interpretation, which suggests to me that "information" and "algorithm" may be ill-defined concepts except in terms of one another. This is why I think I/O is important, because a mind may depend on a subjective environment to function. If this is the case, removal of the environment is basically removal of the mind. A mind of this sort, subjectively dependent on its own substrate, can be "destroyed" relative to observers of the environment, as they now have evidence for the following reasoning:
So far, this is the only substrate dependence argument I find convincing, but it requires the explicit dependence of M on E, which requires I/O.
"Are the subsequent experiences of the copies "mine" relative to this self? If so, then it is certain that "I" will experience both drawing a red ball and drawing a blue ball, and the question seems meaningless. I feel that I may be missing a simple counter-example here."
No. Assume you have already been copied and you know you are one of the software versions. (Some proof of this has been provided.) What you don't know is whether you are in a red ball simulation or a blue ball simulation. You do know that there are a lot of red ball simulations (identical, in the digital sense) and one blue ball simulation. My view on this is that you should presume yourself more likely to be in the red ball simulation.
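To make the two positions concrete, here is a minimal sketch of the arithmetic. The simulation counts (999 red, 1 blue) are hypothetical figures chosen for illustration, not numbers from the scenario above:

```python
# Two views on self-locating credence across identical simulations.
n_red, n_blue = 999, 1  # hypothetical counts of identical simulations

# Measure-weighting view: each running copy contributes equally,
# so credence is proportional to the number of copies.
p_red_weighted = n_red / (n_red + n_blue)

# "Copies don't count" view: only the two distinct programs
# (red-ball vs blue-ball) matter, so credence is split evenly.
p_red_flat = 1 / 2
```

Under the measure-weighting view you should be nearly certain you are in a red ball simulation; under the copies-don't-count view the extra copies change nothing.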
Some people say that the probability is 50/50 because copies don't count. I would make these points:
I just think that when we try to go for 50/50 (copies don't count), we get into a huge mess that a lot of people miss. While I don't think you agree with me, I think maybe you can see this mess.
"While this is also a valid and interesting scenario to consider, I don't think it "deals with the objection". The idea that "which computer am I running on?" is a meaningful question for someone whose experiences have multiple encodings in an environment seems pretty central to the discussion."
I think the suggested scenario makes it meaningful. There is also the issue of turning off some of the machines. If you know you are running on a billion identical machines, and that 90% of them are about to be turned off, it could become an important issue for you. It would make things very similar to what is regarded as "quantum suicide".
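The shutdown scenario can be put in the same arithmetic terms. A minimal sketch, assuming a billion identical machines of which 90% are switched off (all figures are the hypothetical ones from the scenario):

```python
machines = 1_000_000_000           # identical machines running you
machines_off = machines * 9 // 10  # 90% are about to be turned off
machines_on = machines - machines_off

# If "which machine am I on?" is meaningful, your credence of being
# on a doomed machine before the switch-off:
p_doomed = machines_off / machines

# On the measure view, the switch-off simply scales your measure down;
# the surviving copies carry this fraction of the original measure:
measure_after = machines_on / machines
```

On the measure view nothing "kills" you outright; your measure just drops to a tenth of what it was, which is what makes the situation resemble quantum suicide.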
We can also consider another situation:
You have a number of computers, all running the same program, and something in the external world is going to affect them: for example, a visitor from the outside world will "log in" and visit you. We could discuss the probability of meeting the visitor while the simulations are all identical.
"This is why I think I/O is important, because a mind may depend on a subjective environment to function. If this is the case, removal of the environment is basically removal of the mind."
I don't know if I fully understood that - are you suggesting that a reclusive AI or uploaded brain simulation would not exist as a conscious entity?
As you asked me about Permutation City (Greg Egan's novel) before, I will elaborate on that a bit.
The "dust hypothesis" in Permutation City was the idea that all the bits of reality could be stuck together in different ways to get different universes. The idea here is that every interpretation of an object, or part of an object, that could in principle be made by an interpretative algorithm exists as an object in its own right. The argument above applies this to minds, but I would clearly have to claim it applies to everything to avoid being some kind of weird dualist, so it is a somewhat more general view than Egan's. Egan's cosmology requires a universe to exist before it gets scrambled up in different ways; on a view like this, you don't need to assume anything exists. While a lot of people would find this counter-intuitive, if you accept that interpretations which produce objects produce real objects, there is nothing stopping you from producing an object by interpreting very little data, or no data at all. On this kind of view, even if you had nothing except logic, interpretation algorithms that could in principle be applied with no input, on nothing at all, would still describe objects, which this kind of cosmology would say must exist as abstractions of nothing. Further objects would then exist as abstractions of these. In other words, if we take the view that every abstraction of any object physically exists, as a definition of what physical existence means, then the existence of a physical reality becomes mandatory.
"Of course, this leads to the problem of interpretation, which suggests to me that "information" and "algorithm" may be ill-defined concepts except in terms of one another. This is why I think I/O is important, because a mind may depend on a subjective environment to function."
I simply take universal realizability at face value. That is my response to this kind of issue. It frees me totally from any concerns about consistency, and the use of measure even makes things statistically predictable.