Comment author: endoself 24 November 2013 06:31:11AM 21 points [-]

I took the census. My answers for MWI and Aliens were conditional on ¬Simulation, since if we are in a simulation where MWI doesn't hold, the simulation is probably intended to provide information about a universe in which MWI does hold.

Comment author: endoself 18 November 2013 07:06:22AM 8 points [-]

I'm not sure what quantum mechanics has to do with this. Say humanity is spread over 10 planets. Would you rather take a logical 9/10 chance of wiping out humanity, or destroy 9 of the planets with certainty (and also destroy 90% of uninhabited planets to reduce the potential for future growth by the same degree)? Is there any ethically relevant difference between these scenarios?

Comment author: Stuart_Armstrong 15 November 2013 10:28:47AM *  1 point [-]

From this description, it seems that P is described as essentially omniscient. It knows the locations and velocity of every particle in the universe, and it has unlimited computational power.

It has pretty much unlimited computational power, but it doesn't know the locations and velocities of particles. When fed with S, it has only noisy information about one slice of the universe.

I see no reason that P could not hypothetically reverse the laws of physics and thus would always return 1 or 0 for any statement about reality.

That's not a problem - even if P is omniscient, P' still has to estimate its expected output from its own limited perspective. As long as this estimate is reasonable, the omniscience of P doesn't cause a problem (and remember that P is fed noisy data).

Of course, you could add noise to the inputs to P

Yes, the data S is noisy. The amount of noise needs to be decided upon, but as long as we don't put in stupid amounts of noise, the default error is "P' concludes P is too effective and can distinguish very well between X and ¬X, so the AI does nothing (i.e. its entire motivation reduces to minimising the penalty function as much as it can)".

Comment author: endoself 18 November 2013 04:07:09AM 0 points [-]

even if P is omniscient, P' still has to estimate its expected output from its own limited perspective. As long as this estimate is reasonable, the omniscience of P doesn't cause a problem (and remember that P is fed noisy data).

Don't you have to get the exact level of noise that will prevent the AI from hiding from P without letting P reconstruct the AI's actions if it does allow itself to be destroyed? An error in either direction can be catastrophic. If the noise is too high, the AI takes over the world. If the noise is too low, E'(P(Sᵃ|X,Oᵃ,B)/P(Sᵃ|¬X,Õᵃ,B) | a) is going to be very far from 1 no matter what, so there is no reason to expect that optimizing it is still equivalent to reducing impact.
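(A toy sketch of the trade-off in that last sentence - the Gaussian noise model and all the numbers here are my own illustrative assumptions, not anything from Armstrong's actual proposal. Suppose the quantity P observes is 1.0 in the X-world and 0.0 in the ¬X-world, the true world is X, and P receives one reading with Gaussian noise of width sigma. The likelihood ratio P computes behaves like this:)

```python
import math

def likelihood_ratio(s, mu_x, mu_not_x, sigma):
    """Gaussian likelihood ratio P(s | X) / P(s | not-X) for one noisy reading s."""
    log_lr = ((s - mu_not_x) ** 2 - (s - mu_x) ** 2) / (2 * sigma ** 2)
    return math.exp(log_lr)

# True world is X, so the noiseless reading would be 1.0.
for sigma in (0.1, 1.0, 10.0):
    print(sigma, likelihood_ratio(1.0, mu_x=1.0, mu_not_x=0.0, sigma=sigma))
```

With little noise (sigma = 0.1) the ratio is astronomically far from 1, and with heavy noise (sigma = 10) it is pinned near 1 regardless of the data - which is the "no middle ground" worry above.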

Comment author: EHeller 02 November 2013 06:47:11PM *  6 points [-]

There is also the possibility that they believe that MIRI/FHI/CEA/CFAR will have no impact on the intelligence explosion or the far future.

Comment author: endoself 02 November 2013 08:25:58PM 4 points [-]

He's talking specifically about people donating to AMF. There are more options than donating to AMF or donating to one of MIRI, FHI, CEA, and CFAR.

Comment author: Eliezer_Yudkowsky 10 August 2013 03:27:23AM 5 points [-]

(Consults Inverse Chaitin function in Wolfram Alpha.)

Actually, is there a definition of Chaitin's Omega for particular programs? I thought it was just for universal Turing machines, or program classes with a measure on them anyway.

Comment author: endoself 18 October 2013 11:16:23PM 1 point [-]

Yes, you can take the probability that they will halt given a random input. This is analogous to the case of a universal Turing machine, since the way we ask it to simulate a random Turing machine is by giving it a random input string.
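(To make this concrete - the choice of the Collatz iteration as the "particular program", the step cap, and the input distribution are all my own illustrative assumptions - you can lower-bound such a halting probability by bounded simulation on random inputs, much as Omega itself is approximable from below:)

```python
import random

def collatz_halts(n, max_steps=10_000):
    """Run the Collatz iteration from n; report whether it reaches 1 within
    max_steps. This is only a computable lower-bound proxy for halting."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

def random_input(rng):
    """Sample a positive integer by reading random bits until a 0 terminates
    the string, so longer inputs are exponentially less likely - a crude
    stand-in for a prefix-free input distribution."""
    bits = [1]
    while rng.randint(0, 1):
        bits.append(rng.randint(0, 1))
    return int("".join(map(str, bits)), 2)

def estimate_halting_probability(trials=2000, seed=0):
    """Fraction of sampled inputs that provably halt within the step budget:
    a Monte Carlo lower bound on the program's 'Omega'."""
    rng = random.Random(seed)
    return sum(collatz_halts(random_input(rng)) for _ in range(trials)) / trials
```

The estimate only ever approaches the true halting probability from below, since inputs that exceed the step budget are counted as non-halting even when they would halt eventually.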

In response to comment by endoself on The Cause of Time
Comment author: johnswentworth 06 October 2013 03:14:27PM 0 points [-]

Exactly! We want to incorporate the association information using Bayes theorem. If you have zero information about the mapping, then your knowledge is invariant under permutations of the data sets (e.g., swapping T0 with T1). That implies that your prior over the associations is uniform over the possible permutations (note that a permutation uniquely specifies an association and vice versa). So, when calculating the correlation, you have to average over all permutations, and the correlation turns out to be identically zero for all possible data. No association means no correlation.

So in the zero information case, we get this weird behavior that isn't what we expect. If the zero information case doesn't work, then we can't expect to get correct answers with only partial information about the associations. We can expect similar strangeness when trying to deal with partial information based on priors about side-effects caused by our hypothetical drug.

If we don't have enough information to construct the model, then our analysis should yield inconclusive results, not weird or backward results. So the problem is to figure out the right way to handle association information.
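(The permutation-averaging claim two paragraphs up is easy to check numerically. This sketch uses made-up data: two data sets that would be perfectly correlated if we knew the pairing, averaged over every possible pairing of points:)

```python
from itertools import permutations

def covariance(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

# A strongly associated pair of data sets -- if we knew which point goes with which.
T = [0, 1, 2, 3]
e = [0, 2, 4, 6]

# Zero information about the association means averaging over every pairing,
# i.e. every permutation of e.
perms = list(permutations(e))
avg_cov = sum(covariance(T, p) for p in perms) / len(perms)
print(avg_cov)  # 0.0 up to floating-point error
```

Each value of e lands in each position equally often across the permutations, so the average covariance (and hence correlation) cancels to exactly zero, whatever the data.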

Comment author: endoself 06 October 2013 10:48:44PM *  0 points [-]

Yes, but this is a completely different matter than your original post. Obviously this is how we should handle this weird state of information that you're constructing, but it doesn't have the causal interpretation you give it. You are doing something, but it isn't causal analysis. Also, in the scenario you describe, you have the association information, so you should be using it.

In response to comment by endoself on The Cause of Time
Comment author: johnswentworth 06 October 2013 05:14:58AM *  1 point [-]

Causal networks do not make an iid assumption. Consider one of the simplest examples, in which we examine experimental data. Some of the variables are chosen by the experimenter. They can be chosen any way the experimenter pleases, so long as they vary. The process is the same, but that does not imply iid observations. It just means that time dependence must enter through the variables. As you say, it is not built in to the framework.

The problem is to reduce the phrase "the different measurements of each variable are associated because they come from the same sample of the causal process." What is a sample? How do we know two numbers (or other strings) came from the same sample? Since the association contains information separate from the values themselves, how can we incorporate that information into the framework explicitly? How can we handle uncertainty in the association apart from uncertainty in the values of the variables?

Comment author: endoself 06 October 2013 06:57:05AM *  0 points [-]

Causal networks do not make an iid assumption.

Yeah, I guess that's way too strong; there are also a lot of alternative assumptions that justify using them.

What is a sample? How do we know two numbers (or other strings) came from the same sample?

I think we just have to assume this problem solved. Whenever we use causal networks in practice, we know what a sample is. You can try to weaken this and see if you still get anything useful, but that is very different from 'conditioning on time' as you present it in the post.

Since the association contains information separate from the values themselves, how can we incorporate that information into the framework explicitly?

Bayes theorem? If we have a strong enough prior and enough information to reverse-engineer the association reasonably well, then we might be able to learn something. If you're running a clinical trial and you recorded which drugs were given out, but not to which patients, then you need other information, such as a prior about which side-effects they cause and measurements of side-effects that are associated with specific patients. Otherwise you just don't have the data necessary to construct the model.
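(A minimal sketch of what that reverse-engineering could look like - the drugs, side-effect probabilities, and patients below are all hypothetical numbers I made up. Given a prior over which patient got which drug and a model of side-effects, Bayes' theorem yields a posterior over the pairings:)

```python
from itertools import permutations

# Hypothetical model: drug A causes nausea with prob 0.8, drug B with prob 0.1.
p_nausea = {"A": 0.8, "B": 0.1}

drugs = ["A", "B"]            # we know one dose of each was handed out...
observed = {"alice": True,    # ...and which patients reported nausea,
            "bob": False}     # but not who received which drug.

patients = list(observed)
posterior = {}
for assignment in permutations(drugs):     # each way of pairing drugs to patients
    likelihood = 1.0
    for patient, drug in zip(patients, assignment):
        p = p_nausea[drug]
        likelihood *= p if observed[patient] else 1 - p
    posterior[assignment] = likelihood     # uniform prior over pairings

total = sum(posterior.values())
posterior = {a: l / total for a, l in posterior.items()}
print(posterior)  # ('A', 'B') = alice got A, bob got B
```

Here the side-effect data makes "alice got A" overwhelmingly more probable, so the lost association is largely recoverable - but only because the prior over side-effects was informative to begin with.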

In response to The Cause of Time
Comment author: endoself 06 October 2013 04:07:16AM *  2 points [-]

In fact, in order to truly ignore time data, we cannot even order the points according to time! But that means that we no longer have any way to line up the points T0 with e0, T1 with e1, etc.

What? This makes no sense.

I guess you haven't seen this stated explicitly, but the framework of causal networks makes an iid assumption. The idea is that the causal network represents some process that occurs a lot, and we can watch it occur until we get a reasonably good understanding of the joint distribution of variables. Part of this is that it is the same process occurring, so there is no time dependence built into the framework.

For some purposes, we can model time by simply including it as an observed variable, which you do in this post. However, the different measurements of each variable are associated because they come from the same sample of the (iid) causal process, whether or not we are conditioning on time. The way you are trying to condition on time isn't correct, and the correlation does exist in both cases. (Really, we care about dependence rather than correlation, but it doesn't make a difference here.)

I do think that this is a useful general direction of analysis. If the question is meaningful at all, then the answer is probably that given by Armok_GoB in the original thread, but it would be useful to clarify what exactly the question means. There is probably a lot of work to be done before we really understand such things, but I would advise you to better understand the ideas behind causal networks before trying to contribute.

Comment author: bokov 26 September 2013 04:06:14AM 0 points [-]

Identical. Therefore consciousness adds complexity without actually being necessary for explaining anything. Therefore, the presumption is that we are all philosophical zombies (but think we're not).

Comment author: endoself 26 September 2013 04:55:58AM *  4 points [-]
Comment author: bokov 26 September 2013 12:20:39AM 2 points [-]

If I then learn about timeless quantum physics and realize there's no such thing as the past anyway, and certainly not pasts that lead to particular futures, I'd settle for a world with a lower entropy, in which a relatively high number of Feynman paths reach here.

Funny you should say that. I, for one, have the terminal value of continued personal existence (a.k.a. being alive). On LW I'm learning that continuity, personhood, and existence might well be illusions. If that is the case, my efforts to find ways to survive amount to extending something that isn't there in the first place

Of course there's the high probability that we're doing the philosophical equivalent of dividing by zero somewhere among our many nested extrapolations.

But let's say consciousness really is an illusion. Maybe the take-home lesson is that our goals all live at a much more superficial level than we are capable of probing. Not that reductionism "robs" us of our values or anything like that... but it may mean that there cannot exist an instrumentally rational course of action that is also perfectly epistemically rational. That being less wrong past some threshold will not help us set better goals for ourselves, only get better at pursuing goals we pre-committed to pursuing.

Comment author: endoself 26 September 2013 04:53:55AM *  1 point [-]

I, for one, have the terminal value of continued personal existence (a.k.a. being alive). On LW I'm learning that continuity, personhood, and existence might well be illusions. If that is the case, my efforts to find ways to survive amount to extending something that isn't there in the first place

I am confused about this as well. I think the right thing to do here is to recognize that there is a lot we don't know about, e.g. personhood, and that there is a lot we can do to clarify our thinking on personhood. When we aren't confused about this stuff anymore, we can look over it and decide what parts we really valued; our intuitive idea of personhood clearly describes something, even recognizing that a lot of the ideas of the past are wrong. Note also that we don't gain anything by remaining ignorant (I'm not sure if you've realized this yet).
