Comment author: Pavitra 26 August 2010 08:43:48PM 1 point [-]

Fair enough. But if we're doing that, I think the original question with the Omega machine abstracts too much away. Let's consider the kind of evidence that we would actually expect to see if Islam were true.

Let us stipulate that, on the 1st of Muḥarram, a prominent ayatollah claims to have suddenly become a prophet. They go on television and answer questions on all topics. All verifiable answers they give, including those to NP-complete questions submitted for experimental purposes, turn out to be true. The new prophet asserts the validity of the Qur'an as holy scripture and of Allah as the one God.
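The NP-complete detail is doing real evidential work in this setup: finding solutions to such problems is believed intractable, but checking a claimed solution is fast, so the prophet's answers can be verified without taking anything on trust. A minimal sketch of that asymmetry, using a 3-SAT instance (the formula and assignment are invented for illustration):

```python
# Verifying a claimed answer to an NP-complete problem (3-SAT) is
# polynomial-time, even though finding one may be intractable.

def check_sat(clauses, assignment):
    """Each clause is a list of ints: +v means variable v, -v its negation.
    Returns True iff the assignment satisfies every clause."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
claimed = {1: True, 2: True, 3: False}
print(check_sat(clauses, claimed))  # True: the claimed answer checks out
```

This is why NP-complete questions make good experimental tests in the scenario: each verified answer is strong evidence of unusual computational power, whatever one concludes about the theology.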

There is a website where you can suggest questions to put to the new prophet. Not all submitted questions get answered, due to time constraints, but interesting ones do get in reasonably often. Are there any questions you'd like to ask?

Comment author: PaulAlmond 26 August 2010 09:20:11PM *  4 points [-]

I'll give a reworded version of this, to take it out of the context of a belief system with which we are familiar. I'm not intending any mockery by this; it is to make a point about the claims and the evidence:

"Let us stipulate that, on Paris Hilton's birthday, a prominent Paris Hilton admirer claims to have suddenly become a prophet. They go on television and answer questions on all topics. All verifiable answers they give, including those to NP-complete questions submitted for experimental purposes, turn out to be true. The new prophet asserts that Paris Hilton is a super-powerful being sent here from another world, co-existing in space with ours but at a different vibrational something or whatever. Paris Hilton has come to show us that celebrity can be fun. The entire universe is built on celebrity power. Madonna tried to teach us this when she showed us how to Vogue but we did not listen and the burden of non-celebrity energy threatens to weigh us down into the valley of mediocrity when we die instead of ascending to a higher plane where each of us gets his/her own talkshow with an army of smurfs to do our bidding. Oh, and Sesame Street is being used by the dark energy force to send evil messages into children's feet. (The brain only appears to be the source of consciousness: Really it is the feet. Except for people with no feet. (Ah! I bet you thought I didn't think of that.) Today's lucky food: custard."

There is a website where you can suggest questions to put to the new prophet. Not all submitted questions get answered, due to time constraints, but interesting ones do get in reasonably often. Are there any questions you'd like to ask?"

The point I am making here is that the above narrative is absurd, and even if he can demonstrate some unusual ability with predictions or NP-complete problems (and I admit the NP-complete problems would really impress me), there is nothing that makes that explanation more sensible than any number of other stupid explanations. Nor does he have an automatic right to be believed: his explanation is just too stupid.

Comment author: PaulAlmond 26 August 2010 08:57:06PM 1 point [-]

Yes - I would ask this question:

"Mr Prophet, are you claiming that there is no other theory to account for all this that has less intrinsic information content than a theory which assumes the existence of a fundamental, non-contingent mind - a mind which apparently cannot be accounted for by some theory containing less information, given that the mind is supposed to be non-contingent?"

He had better have a good answer to that: otherwise I don't care how many true predictions he has made or NP-complete problems he has solved. None of that comes close to fixing the ultra-high information loading in his theory.
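The "intrinsic information content" being compared in this question is essentially Kolmogorov complexity, which is uncomputable; as a loose illustration only, compressed length gives a crude upper bound, and Solomonoff-style theory selection penalizes a hypothesis roughly in proportion to 2^-(description length in bits). The theory strings below are invented placeholders, not serious formalizations of either hypothesis:

```python
import zlib

# Crude stand-in for a theory's "intrinsic information content":
# the bit-length of its compressed description. Kolmogorov complexity
# itself is uncomputable; this only upper-bounds it, very loosely.
def description_cost_bits(theory_text: str) -> int:
    return 8 * len(zlib.compress(theory_text.encode()))

simple = "physical law L applied everywhere"
loaded = simple + "; plus a fundamental non-contingent mind with properties " + "X" * 500
print(description_cost_bits(loaded) > description_cost_bits(simple))  # True
```

The point of the argument is that a non-contingent mind cannot be derived from a shorter underlying theory (the way evolution derives minds from simple physics), so the full description cost of the mind lands directly on the hypothesis.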

Comment author: inklesspen 26 August 2010 03:40:45PM 0 points [-]

No, I specifically meant that we should treat our simulations the way we would like to be treated, not that we will necessarily be treated that way in "return". A host's duty to his guests doesn't go away just because that host had a poor experience when he himself was a guest at some other person's house.

If our simulators don't care about us, nothing we can do will change that, so we might as well treat our simulations well, because we are moral people.

If our simulators do care about us, and are benevolent, we should treat our simulations well, because that will rebound to our benefit.

If our simulators do care about us, and are malevolent (or have ethics not compatible with ours), then, given the choice, I would prefer to be better than them.

Of course, there's always the possibility that simulators and their simulations may be much more similar than we think.

Comment author: PaulAlmond 26 August 2010 04:42:18PM *  2 points [-]

But maybe there could be a way in which, if you behave ethically in a simulation, you are more likely to be treated that way "in return" by those simulating you - using a rather strange meaning of "in return"?

Some people interpret the Newcomb's boxes paradox as meaning that, when you make decisions, you should act as if you are influencing the decisions of other entities when there is some relationship between the behavior of those entities and your behavior - even if there is no obvious causal relationship, and even if the other entities already decided back in the past.

The Newcomb's boxes paradox is essentially about reference class: it could be argued that every time you make a decision, your decision tells you a lot about the reference class of entities identical to you - and it also tells you something, even if not much in some situations, about entities merely similar to you, because you are part of that reference class too.

Now, if we apply such reasoning: if you have just decided to be ethical, you have just made it a bit more likely that everyone else is ethical. (Of course, this is how you experience it - in reality, your behavior was dictated by being part of the reference class - but you don't experience the making of decisions from that perspective.) The same goes for being unethical.

You could apply this to simulation scenarios, but you could also apply it to a very large or infinite cosmos - such as some kind of multiverse model. In such a scenario, you might consider each ethical act you perform as increasing the probability that ethical acts are occurring all over reality - even of increasing the proportion of ethical acts in an infinity of acts. It might make temporal discounting a bit less disturbing (to anyone bothered by it): If you act ethically with regard to the parts of reality you can observe, predict and control, your "effect" on the reference class means that you can consider yourself to be making it more likely that other entities, beyond the range of your direct observation, prediction or control, are also behaving ethically within their local environment.
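The reference-class update described above can be given a toy quantitative form: treat "agents like me act ethically" as a Bernoulli parameter with a Beta prior, and count your own act as one observation. All numbers here are invented for illustration:

```python
# Toy Beta-Bernoulli model of the reference-class update: your own
# ethical act counts as one observation about agents "like you".
from fractions import Fraction

def posterior_mean(ethical_acts, total_acts, alpha=1, beta=1):
    # Beta(alpha, beta) prior; exact posterior mean after the observations.
    return Fraction(alpha + ethical_acts, alpha + beta + total_acts)

before = posterior_mean(0, 0)  # 1/2: no evidence yet
after = posterior_mean(1, 1)   # 2/3: your one ethical act shifts the estimate
print(after > before)          # True
```

With many acts already observed (say 1000), one further observation barely moves the estimate - which matches the later point in this comment that knowing about lots of aliens would shrink the "non-causal influence" you attribute to your own decision.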

I want to be clear here that I am under no illusion that there is some kind of "magical causal link". We might say that this is about how our decisions are really determined anyway. Deciding as if "the decision" influences the distant past, another galaxy, another world in some expansive cosmology or a higher level in a computer simulated reality is no different, qualitatively, from deciding as if "your decision" affects anything else in everyday life - when in fact, your decision is determined by outside things.

This may be a bit uncomfortably like certain Buddhist ideas really, though a Buddhist might have more to say on that if one comes along, and I promise that any such similarity wasn't deliberate.

One weird idea relating to this: the greater the number of beings, civilizations, etc. that you know about, the more the behavior of these people will dominate your reference class. If you live in a Star Trek reality, with aliens all over the place, what you know about the ethics of these aliens will be very important, and your own behavior will be only a small part of it: you will reduce the amount of "non-causal influence" that you attribute to your decisions. On the other hand, if you don't know of any aliens, etc., your own behavior might be telling you much more about the behavior of other civilizations.

P.S. Remember that anyone who votes this comment down is influencing the reference class of users on Less Wrong who will be reading your comments. Likewise for anyone who votes it up. :) Hurting me only hurts yourselves! (All right - only a bit, I admit.)

Comment author: Perplexed 26 August 2010 12:44:41AM 2 points [-]

I don't see this. Why assume that the non-contingent, pre-existing God is particularly complex? Why not assume that the current complexity of God (if He actually is complex) developed over time as the universe evolved since the big bang? Or, just as good, assume that God became complex before He created this universe.

It is not as if we know enough about God to actually start writing down that presumptive long bit string. And, after all, we don't ask the big bang to explain the coastline of Great Britain.

Comment author: PaulAlmond 26 August 2010 12:58:08AM 1 point [-]

If we do that, should we even call that "less complex earlier version of God" God? Would it deserve the title?

Comment author: byrnema 26 August 2010 12:20:24AM 0 points [-]

The problem is that reality itself is apparently fundamentally non-contingent. Adding "mind" to all that doesn't seem so unreasonable.

Comment author: PaulAlmond 26 August 2010 12:30:43AM 0 points [-]

Do you mean it doesn't seem so unreasonable to you, or to other people?

Comment author: Furcas 25 August 2010 11:37:01PM 1 point [-]

I don't think that's true; cousin_it had it right the first time. The complexity of Islam is the complexity of a reality that contains an omnipotent creator, his angels, Paradise, Hell, and so forth. Everything we've observed about the universe includes people believing in Islam, but not the beings and places that Islam says exist.

In other words, E contains Islam the religion, not Islam the reality.

Comment author: PaulAlmond 25 August 2010 11:42:16PM 2 points [-]

The really big problem with such a reality is that it contains a fundamental, non-contingent mind (God's/Allah's, etc) - and we all know how much describing one of those takes - and the requirement that God is non-contingent means we can't use any simpler, underlying ideas like Darwinian evolution. Non-contingency, in theory selection terms, is a god killer: It forces God to incur a huge information penalty - unless the theist refuses even to play by these rules and thinks God is above all that - in which case they aren't even playing the theory selection game.

Comment author: Kingreaper 23 August 2010 11:34:55PM 0 points [-]

Yes. Of course, the part of them that is unconstrained IS Omega.

I'm just not sure about the relevance of this?

Comment author: PaulAlmond 23 August 2010 11:45:46PM 1 point [-]

Just that the scenario could really be considered as adding an extra component onto a being - one that has a lot of influence on his behavior.

Similarly, we might imagine surgically removing a piece of your brain, connecting the neurons at the edges of the removed piece to the ones left in your brain by radio control, and taking the removed piece to another location, from which it still plays a full part in your thought processes. We would probably still consider that composite system "you".

What if you had a brain disorder and some electronics were implanted into your brain? Maybe a system to help with social cues for Asperger syndrome, or a system to help with dyslexia? What if we had a process to make extra neurons grow to repair damage? We might easily consider many things to be a "you which has been modified".

When you say that the question is not directed at the compound entity, one answer could be that the scenario involved adding an extra component to you, that "you" has been extended, and that the compound entity is now "you".

The scenario, as I understand it, doesn't really specify the limits of the entity involved. It talks about your brain, and what Omega is doing to it, but it doesn't specifically disallow the idea that the "you" that it is about gets modified in the process.

Now, if you want to edit the scenario to specify exactly what the "you" is here...

Comment author: Kingreaper 23 August 2010 11:27:14PM *  0 points [-]

Except that that's not the person the question is being directed at. I'm not "amalgam-Kingreaper-and-Omega" at the moment. Asking what that person would do would garner completely different responses.

For example, amalgam-kingreaper-and-omega has a fondness for creating ridiculous scenarios and inflicting them on rationalists.

Comment author: PaulAlmond 23 August 2010 11:32:11PM 0 points [-]

"Except that that's not the person the question is being directed at."

Does that mean that you accept that it might at least be conceivable that the scenario implies the existence of a compound being who is less constrained than the person being controlled by Omega?

Comment author: Kingreaper 23 August 2010 11:13:24PM 0 points [-]

In the Omega-composite scenario, the composite entity is clearly making the decisions.

In the chip-composite scenario, the chip-composite appears to be making the decisions, and in the general case I would say probably is.

"If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?"

We could look at your own brain in these terms and ask about removing parts of it.

Indeed. Not all parts of my brain are involved in all decisions. But, in general, at least some part of me has an effect on what decision I make.

Comment author: PaulAlmond 23 August 2010 11:17:21PM 2 points [-]

The point, here, is that in the scenario in which Omega is actively manipulating your brain "you" might mean something in a more extended sense and "some part of you" might mean "some part of Omega's brain".

Comment author: Kingreaper 23 August 2010 10:52:31PM *  -1 points [-]

"I, in this scenario, cannot. No matter how my mind was setup prior to the scenario, there is only one possible outcome."

"This doesn't make sense at all. The scenario requires your mind to be set up in a particular way. This does not mean that if your mind were set up in a different way you would still behave in the same way: If your mind were set up in a different way, either the outcome would be the same or your mind would be outside the scope of the scenario."

No matter how my mind is set up, Omega will change the scenario to produce the same outcome.

If you took a chess program and chose a move, then gave it precisely the scenario necessary for it to make that move, I wouldn't consider that move its choice.

If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?

Comment author: PaulAlmond 23 August 2010 11:03:51PM 1 point [-]

Okay, so I got the scenario wrong, but I will give another reply. Omega is going to force you to act in a certain way. However, you will still experience what seem, to you, to be cognitive processes, and anyone watching your behavior will see what looks like cognitive processes going on.

Suppose Omega wrote a computer program and used it to work out how to control your behavior. Suppose he put this in a microchip and implanted it in your brain. You might say your brain is controlled by the chip, but you might also say that the chip and your brain form a composite entity which is still making decisions in the sense that any other mind is.

Now, suppose Omega keeps possession of the chip, but has it control you remotely. Again, you might still say that the chip and your brain form a composite system.

Finally, suppose Omega just does the computations in his own brain. You might say that your brain, together with Omega's brain, form a composite system which is causing your behavior - and that this composite system makes decisions just like any other system.

"If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?"

We could look at your own brain in these terms and ask about removing parts of it.
