Comment author: Dynamically_Linked 26 May 2008 05:52:04PM 5 points

But if you could learn to visualize the relative configuration space, then, so long as you thought in terms of those elements of reality, it would no longer be imaginable that Mach's Principle could be false.

If one learned to think only in terms of the relative configuration space, it would also become impossible to imagine that parity violation could occur, since the left-handed and right-handed versions of a system have the same relative distances. Yet the weak nuclear force does violate parity.

Comment author: Dynamically_Linked 11 May 2008 10:27:09PM 2 points

Eliezer, I think your (and Robin's) intuition is off here. Configuration space is so vast, it should be pretty easy for a small blob of amplitude to find a hiding place that is safe from random stray flows from larger blobs of amplitude.

Consider a small blob in my proposed experiment where the number of 0s and 1s are roughly equal. Writing the outcomes on blackboards does not reduce the integrated squared modulus of this blob, but does move it further into "virgin territory", away from any other existing blobs. In order for it to be mangled by stray flows from larger blobs, those stray flows would somehow have to reach the same neighborhood as the small blob. But how? Remember that in this neighborhood of configuration space, the blackboards have a roughly equal number of 0s and 1s. What is the mechanism that can allow a stray piece of a larger blob to reach this neighborhood and mangle the smaller blob? It can't be random quantum fluctuations, because the Born probability of the same sequence of 0s and 1s spontaneously appearing on multiple blackboards is much less than the integrated squared modulus of the small blob. To put it another way, by the time a stray flow from a larger blob reaches the small blob, its amplitude would be spread much too thin to mangle the small blob.
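The last step of this argument can be illustrated with a toy numerical sketch (my own illustration, not from the comment): under the Born probabilities assumed in the experiment, the squared modulus of the small blob for one specific recorded string is already tiny, and a stray fluctuation would have to reproduce that same string independently on each blackboard, which scales like that tiny weight raised to a power.

```python
# Toy model (an assumption for illustration): 100 tosses with Born
# probabilities P(0) = 1/4, P(1) = 3/4, and a specific recorded string
# with 50 zeros and 50 ones.
zeros = ones = 50

# Squared modulus (Born weight) of the small blob for this one string.
blob_weight = 0.25 ** zeros * 0.75 ** ones

# Crude model: the chance of the same 100-bit string spontaneously
# appearing by fluctuation on each of k independent extra blackboards
# scales like blob_weight ** k -- exponentially smaller than the
# blob's own weight.
k = 3  # hypothetical number of extra blackboards
stray_weight = blob_weight ** k

print(blob_weight)                 # already astronomically small
print(stray_weight / blob_weight)  # the stray flow is smaller still
```

On this (admittedly crude) model, the stray-flow weight is smaller than the blob's own weight by dozens of orders of magnitude, which is the comparison the comment is gesturing at.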

Comment author: Dynamically_Linked 11 May 2008 06:14:27PM 0 points

Robin, can you offer some intuitive explanation as to why defense against world mangling would be difficult? From what I understand, a larger blob of amplitude (world) can mangle a smaller blob of amplitude only if they are close together in configuration space. Is that incorrect? If those "secure storage facilities" simply write the quantum coin toss outcomes in big letters on some blackboards, which worlds will be close enough to be able to mangle the worlds that violate Born's rule?

Comment author: Dynamically_Linked 11 May 2008 05:09:28PM 1 point

Robin Hanson suggests that if exponentially tinier-than-average decoherent blobs of amplitude ("worlds") are interfered with by exponentially tiny leakages from larger blobs, we will get the Born probabilities back out.

Shouldn't it be possible for a tinier-than-average decoherent blob of amplitude to deliberately become less vulnerable to interference from leakages from larger blobs, by evolving itself to an isolated location in configuration space (i.e., a point in configuration space with no larger blobs nearby)? For example, it seems that we should be able to test the mangled worlds idea with the following experiment:

1. Set up a biased quantum coin, so that there is a 1/4 Born probability of getting an outcome of 0, and a 3/4 probability of getting 1.

2. After observing each outcome of the quantum coin toss, broadcast the outcome to a large number of secure storage facilities. Don't start the next toss until all of these facilities have confirmed that they've received and stored the previous outcome.

3. Repeat 100 times.

Now consider a "world" that has observed an almost equal number of 0s and 1s at the end, in violation of Born's rule. I don't see how it can get mangled. (What larger blob will be able to interfere with it?) So if mangled worlds is right, then we should expect a violation of Born's rule in this experiment. Since I doubt that will be the case, I don't think mangled worlds can be right.
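The Born-rule prediction for this experiment can be checked with simple binomial arithmetic (a back-of-the-envelope sketch of my own, with an arbitrary choice of 45-55 ones as "almost equal"): under the Born probabilities, a run of 100 tosses with a near-equal split of 0s and 1s is astronomically unlikely, which is what makes any observed excess of such worlds a detectable violation.

```python
from math import comb

# Born probabilities for the biased quantum coin described above:
# P(0) = 1/4, P(1) = 3/4, over n = 100 tosses.
n, p1 = 100, 0.75

def born_prob_of_k_ones(k):
    """Total Born probability of observing exactly k ones in n tosses."""
    return comb(n, k) * (1 - p1) ** (n - k) * p1 ** k

# Probability mass of an "almost equal" split, taken here as 45..55 ones.
near_equal = sum(born_prob_of_k_ones(k) for k in range(45, 56))

# The Born-rule expectation is ~75 ones (std. dev. ~4.3), so an equal
# split sits nearly 6 standard deviations below the mean.
print(near_equal)
```

Seeing such worlds with anything like appreciable frequency would therefore be a clear departure from the Born statistics.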

Comment author: Dynamically_Linked 15 December 2007 03:48:34AM 0 points

Has anyone read Learning Bayesian Networks by Richard E. Neapolitan? How does it compare with Judea Pearl's two books as an introduction to Bayesian Networks? I'm reading Pearl's first book now, but I wonder if Neapolitan's would be better since it is newer and is written specifically as a textbook.

Comment author: Dynamically_Linked 10 December 2007 03:45:00AM 1 point

Eliezer, the US killed at least a million Japanese in World War II, while the attack on Pearl Harbor killed fewer than 2,500. Maybe it is true that the US response to 9/11 was "greater than the appropriate level, whatever the appropriate level may be," but I don't think you have shown that to actually be the case.

In response to Truly Part Of You
Comment author: Dynamically_Linked 21 November 2007 08:46:27PM 0 points

So, what about the notion of mathematical proof? Anyone want to give a shot at explaining how that can be regenerated?

Comment author: Dynamically_Linked 17 November 2007 01:25:21AM 0 points

The issue is replication with variation and the necessary historical consequences of this.

Evolution requires more than replication with variation. It needs differential replication with variation.

There is therefore no way to avoid the consequences of evolution: they are not biological consequences, but consequences of the laws of physics and logic. There is no way around them.

I can think of a couple of potential ways to avoid the consequences of evolution, by attacking the "differential" part.

1. The Singleton.

2. Some other method for achieving absolute security and property rights. For example, a completely impenetrable shield, or automatic, foolproof self-destruct mechanisms built into everything, making it pointless for anyone to try to appropriate other people's property.
