Comment author: Perplexed 28 August 2010 10:16:48PM 4 points [-]

Which theory has more information?

  • All crows are black
  • All crows are black except <270 pages specifying the exceptions>
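One way to make the comparison concrete (my own sketch, not from the thread): measure each theory's information content as the length of its statement. The size of the exception text below is hypothetical, standing in for the "270 pages" of the second theory.

```python
# Hypothetical sketch: compare the information content of two theories by
# the byte length of their statements. The exception text is a made-up
# placeholder (~270 pages at roughly 2,000 characters per page).
theory_a = "All crows are black."
theory_b = "All crows are black except " + "<exception> " * 45000

len_a = len(theory_a.encode("utf-8"))
len_b = len(theory_b.encode("utf-8"))
print(len_a < len_b)  # True: the unqualified theory carries far less information
```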
Comment author: PaulAlmond 28 August 2010 10:20:07PM 1 point [-]

I didn't say you ignored previous correspondence with reality, though.

Comment author: [deleted] 28 August 2010 09:23:10PM 1 point [-]

I agree with most of that, but why favor less information content? Though I may not fully understand the math, this recent post by cousin_it seems to be saying that priors should not always depend on Kolmogorov complexity.

And, even if we do decide to favor less information content, how much emphasis should we place on it?

In response to comment by [deleted] on Open Thread, August 2010
Comment author: PaulAlmond 28 August 2010 10:06:29PM 1 point [-]

In general, I would think that the more information a theory contains, the more specific it is, and the more specific it is, the smaller the proportion of possible worlds that comply with it.

Regarding how much emphasis we should place on it: I would say "a lot", but there are complications. Theories aren't used in isolation; they tend to be put together informally into a world view, and then there is the issue of degree of matching.
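The intuition about specificity can be sketched numerically (my own toy model, not from the comment): treat each possible world as an n-bit string and a theory as a set of constraints on bits. Each independent constraint a theory adds halves the proportion of worlds that comply with it.

```python
from itertools import product

N = 10  # hypothetical number of binary facts per possible world
WORLDS = list(product([0, 1], repeat=N))  # all 2**N possible worlds

def complying_fraction(constraints):
    """Fraction of worlds in which every (index, value) constraint holds."""
    hits = [w for w in WORLDS if all(w[i] == v for i, v in constraints)]
    return len(hits) / len(WORLDS)

# A vaguer theory (one constraint) vs. a more specific one (three constraints):
print(complying_fraction([(0, 1)]))                  # 0.5
print(complying_fraction([(0, 1), (1, 0), (2, 1)]))  # 0.125
```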

Comment author: Eliezer_Yudkowsky 20 August 2010 06:21:21PM 17 points [-]

I certainly do not assign a probability as high as 70% to the conjunction of all three of those statements.

And in case it wasn't clear, the problem I was trying to point out was simply with having forbidden conclusions - not forbidden by observation per se, but forbidden by forbidden psychology - and using that to make deductions about empirical premises that ought simply to be evaluated by themselves.

I s'pose I might be crazy, but you all are putting your craziness right up front. You can't extract milk from a stone!

Comment author: PaulAlmond 28 August 2010 09:55:00PM *  3 points [-]

Just curious (and not being 100% serious here): Would you have any concerns about the following argument (and I am not saying I accept it)?

  1. Assume that famous people will get recreated as AIs in simulations a lot in the future. School projects, entertainment, historical research, interactive museum exhibits, idols to be worshipped by cults built up around them, etc.
  2. If you save the world, you will be about the most famous person ever in the future.
  3. Therefore there will be a lot of Eliezer Yudkowsky AIs created in the future.
  4. Therefore the chances of anyone who thinks he is Eliezer Yudkowsky actually being the original, 21st century one are very small.
  5. Therefore you are almost certainly an AI, and none of the rest of us are here - except maybe as stage props with varying degrees of cognition (you probably never even heard of me before, so someone like me would probably not be represented in any detail in an Eliezer Yudkowsky simulation). That would mean that I am not even conscious and am just some simple subroutine. Actually, now that I have raised the issue, it looks a lot more alarming for me than it does for you, as I may have just argued myself out of existence...
Comment author: [deleted] 28 August 2010 03:35:04PM *  2 points [-]

Followup to: Making Beliefs Pay Rent in Anticipated Experiences

In the comments section of Making Beliefs Pay Rent, Eliezer wrote:

I follow a correspondence theory of truth. I am also a Bayesian and a believer in Occam's Razor. If a belief has no empirical consequences then it could receive no Bayesian confirmation and could not rise to my subjective attention. In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.

If I am interpreting this correctly, Eliezer is saying that there is a nearly infinite space of unfalsifiable hypotheses, and so our priors for each individual hypothesis should be very close to zero. I agree with this statement, but I think it raises a philosophical problem: doesn't this same reasoning apply to any factual question? Given a set of data D, there must be a nearly infinite space of hypotheses that (a) explain D and (b) make predictions (fulfilling the criteria discussed in Making Beliefs Pay Rent). Though Occam's Razor can help us to weed out a large number of these possible hypotheses, a mind-bogglingly large number would still remain, forcing us to have a low prior for each individual hypothesis. (In philosophy of science, this is known as "underdetermination.") Or is there a flaw in my reasoning somewhere?

In response to comment by [deleted] on Open Thread, August 2010
Comment author: PaulAlmond 28 August 2010 05:37:53PM 1 point [-]

Surely, this is dealt with by considering the amount of information in the hypothesis? If we consider each hypothesis that can be represented with 1,000 bits of information, there will only be a maximum of 2^1,000 such hypotheses, and if we consider each hypothesis that can be represented with n bits of information, there will only be a maximum of 2^n - and that is before we even start eliminating hypotheses that are inconsistent with what we already know. If we favor hypotheses with less information content, we end up with a small number of hypotheses that can be taken reasonably seriously, with the remainder being unlikely - and progressively more unlikely as n increases, so that when n is sufficiently large we can, for practical purposes, dismiss those hypotheses.
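The counting argument can be sketched like this (my own illustration; the 2^(-2n) weighting is one arbitrary choice of a prior that decays fast enough, not something from the comment): since there are at most 2^n hypotheses of n bits, a per-hypothesis prior that shrinks faster than 2^(-n) keeps the total probability mass bounded while making long hypotheses individually negligible.

```python
def hypothesis_count(n_bits):
    """At most 2**n distinct hypotheses can be represented with n bits."""
    return 2 ** n_bits

def total_prior_mass(max_bits):
    """Give every n-bit hypothesis prior weight 2**(-2n). The total mass
    stays bounded: sum over n of 2**n * 2**(-2n) = sum of 2**(-n) -> 1."""
    return sum(hypothesis_count(n) * 2 ** (-2 * n) for n in range(1, max_bits + 1))

print(hypothesis_count(10))   # 1024 hypotheses representable in 10 bits
print(total_prior_mass(50))   # approaches 1.0, yet each long hypothesis gets a vanishing prior
```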

Comment author: PaulAlmond 27 August 2010 01:55:56AM *  0 points [-]

All those things can only be done with simulations because the way that we use computers has caused us to build features like malleability, predictability, etc. into them.

The fact that we can easily time reverse some simulations means little: You haven't shown that having the capability to time reverse something detracts from other properties that it might have. It would be easy to make simulations based on analogue computers where we could never get the same simulation twice, but there wouldn't be much of a market for those computers - and, importantly, it wouldn't persuade you any more.

It is irrelevant that you can slow down a simulation. You have to alter the physical system running the simulation to make it run slower: You are changing it into a different system that runs slower. We could make you run slower too if we were allowed to change your physical system. Also, once more - you are just claiming that that even matters - that the capability to do something to a system detracts from other features.

The lookup table argument is irrelevant. If a program is not running a lookup table, and you convert it to one, you have changed the physical configuration of that system. We could convert you into a giant lookup table just as easily if we are allowed to alter you as well.

The "unplug" one is particularly weak. We can unplug you with a gun. We can unplug you by shutting off the oxygen supply to your brain. Again, where is a proof that being able to unplug something makes it not real?

All I see here is a lot of claims that being able to do something with a certain type of system - which has been deliberately set up to make it easy to do things with it - makes it not real. I see no argument to justify any of that. Further, the actual claims are dubious.

Comment author: PaulAlmond 27 August 2010 02:04:41AM 0 points [-]

As a further comment, regarding the idea that you can "unplug" a simulation: you can do this in everyday life with nuclear weapons. A nuclear weapon can reduce local reality to its constituent parts - the smaller pieces that things were made out of. If you turn off a computer, you similarly still have the basic underlying reality there - the computer itself - but the higher level organization is gone - just as if a nuclear weapon had been used on the simulated world. This only seems different because the underpinnings of a real object and a "simulated" one are different. Both are emergent properties of some underlying system, and both can be removed by altering the underlying system in such a way that they no longer emerge from it (by using nuclear devices or turning off the power).

Comment author: Perplexed 27 August 2010 01:29:39AM 1 point [-]

there is no respect in which you can draw a line and say "They are not the same kind of system." - or at least any line such drawn will be arbitrary.

But there is such a line. You can unplug a simulation. You cannot unplug a reality. You can slow down a simulation. If it uses time reversible physics, you can run it in reverse. You can convert the whole thing into an equivalent Giant Lookup Table. You can do none of these things to a reality. Not from the inside.


Comment author: Perplexed 27 August 2010 12:45:15AM 1 point [-]

What does consciousness have to do with it? It doesn't matter whether I am simulating minds or simulating bacteria. A simulation is not a reality.

Comment author: PaulAlmond 27 August 2010 01:01:15AM *  1 point [-]

There isn't a clear way in which you can say that something is a "simulation", and I think the distinction only seems obvious because we draw the line in a simplistic way, based on our experiences of using computers to "simulate" things.

Real things are arrangements of matter, but what we call "simulations" of things are also arrangements of matter. Two things or processes of the same type (such as two real cats or processes of digestion) will have physical arrangements of matter that have some property in common, but we could say the same about a brain and some arrangement of matter in a computer: a brain and some arrangement of matter in a computer may look different, but they may still have more subtle properties in common, and there is no respect in which you can draw a line and say "They are not the same kind of system." - or at least any line so drawn will be arbitrary.

I refer you to:

Almond, P., 2008. Searle's Argument Against AI and Emergent Properties. Available at: http://www.paul-almond.com/SearleEmergentProperties.pdf or http://www.paul-almond.com/SearleEmergentProperties.doc [Accessed 27 August 2010].

Comment author: Perplexed 26 August 2010 11:07:02PM 1 point [-]

So it seems that you simply don't take seriously my claim that no harm is done in terminating a simulation, for the reason that terminating a simulation has no effect on the real existence of the entities simulated.

I see turning off a simulation as comparable to turning off my computer after it has printed the first 47,397,123 digits of pi. My action had no effect on pi itself, which continues to exist. Digits of pi beyond 50 million still exist. All I have done by shutting off the computer power is to deprive myself of the ability to see them.

Comment author: PaulAlmond 26 August 2010 11:51:47PM 1 point [-]

I say that your claim depends on an assumption about the degree of substrate specificity associated with consciousness, and the safety of this assumption is far from obvious.

Comment author: inklesspen 26 August 2010 10:40:43PM 0 points [-]

All other things being equal, if I am a simulated entity, I would prefer not to have my simulation terminated, even though I would not know if it happened; I would simply cease to acquire new experiences. Reciprocity/xenia implies that I should not terminate my guest-simulations.

As for when the harm occurs, that's a nebulous concept hanging on the meaning of 'harm' and 'occurs'. In Dan Simmons' Hyperion Cantos, there is a method of execution called the 'Schrödinger cat box'. The convict is placed inside this box, which is then sealed. It's a small but comfortable suite of rooms, within which the convict can live. It also includes a random number generator. It may take a very long time, but eventually that random number generator will trigger the convict's death. This execution method is used for much the same reason that most rifles in a firing squad are unloaded — to remove the stress on the executioners.

I would argue that the 'harm' of the execution occurs the moment the convict is irrevocably sealed inside the box. Actually, I'd say 'potential harm' is created, which will be actualized at an unknown time. If the convict's friends somehow rescue him from the box, this potential harm is averted, but I don't think that affects the moral value of creating that potential harm in the first place, since the executioner intended that the convict be executed.

If I halt a simulation, the same kind of potential harm is created. If I later restore the simulation, the potential harm is destroyed. If the simulation data is destroyed before I can do so, the potential harm is then actualized. This either takes place at the same simulated instant as when the simulation was halted, or does not take place in simulated time at all, depending on whether you view death as something that happens to you, or something that stops things from happening to you.

In either case, I think there would be a different moral value assigned based on your intent; if you halt the simulation in order to move the computer to a secure vault with dedicated power, and then resume, this is probably morally neutral or morally positive. If you halt the simulation with the intent of destroying its data, this is probably morally negative.

Your second link was discussing simulating the same personality repeatedly, which I don't think is the same thing here. Your first link is talking about many-worlds futility, where I make all possible moral choices and therefore none of them; I think this is not really worth talking about in this situation.

Comment author: PaulAlmond 26 August 2010 10:51:29PM 0 points [-]

What if you stop the simulation and reality is very large indeed, and someone else starts a simulation somewhere else which just happens, by coincidence, to pick up where your simulation left off? Has that person averted the harm?

Comment author: Pavitra 26 August 2010 10:18:56PM 0 points [-]

"The reason you feel confused is because you assume the universe must have a simple explanation.

The minimum message length necessary to describe the universe is long -- long enough to contain a mind, which in fact it does. There is no fundamental reason why the Occamian prior must be appropriate. It so happens that Allah has chosen to create a world that, to a certain depth, initially appears to follow that law, but Occam will not take you all the way to the most fundamental description of reality.

I could write out the actual message description, but to demonstrate that the message contains a mind requires volumes of cognitive science that have not been developed yet. Since both the message and the proof of mind will be discovered by science within the next hundred years, I choose to spend my limited time on earth in other areas."

Comment author: PaulAlmond 26 August 2010 10:24:32PM -1 points [-]

Do you think that is persuasive?
