The point is that one global hidden variable could account for every local hidden variable; and saying that "our universe is one of many possible universes" is equivalent to saying that there is some choice "somewhere at the bottom."
My point is there's no need for collapse, just deterministic mechanisms that are distributed as though random.
Taking one more stab at rephrasing: We must live in some possible universe, presumably one of many. That one universe is entirely deterministic, but no one knows exactly which one it is, so our expectations seem random.
Hmmm... would this mean that quantum events aren't "random", they're pseudorandom, like the "random number generators" used by computers? You can predict them if you know the algorithm used to generate them and the seed value, but if you don't know them, it's very hard to figure out what they were.
For example, here's a string of pseudorandom digits:
718281828459045235360287471352662497757247093699959574966967627724076630353
If you don't recognize the algorithm used to generate them, it looks like the kind of random digits you get by rolling an unbiased ten-sided die.
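To make the analogy concrete, here is a minimal sketch (mine, not from the original discussion) using Python's standard-library generator, a Mersenne Twister. Anyone who knows the algorithm and the seed can reproduce every "random" digit exactly, yet the output looks statistically random to anyone who doesn't:

```python
import random

# A seeded PRNG is fully deterministic: the algorithm plus the seed
# fixes the entire sequence, even though the digits pass statistical
# tests for randomness.
def digits(seed, n):
    rng = random.Random(seed)
    return [rng.randrange(10) for _ in range(n)]

a = digits(42, 20)
b = digits(42, 20)
print(a == b)         # same seed, same sequence: prints True
print(digits(7, 20))  # a different seed looks unrelated
```

The sequence is "random" to an observer who lacks the seed, and completely determined to one who has it.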
Sort of a response to: Collapse Postulate
Abstract: There are phenomena in mathematics where certain structures are distributed "at random"; that is, statistical statements can be made, and probabilities can be used to predict the outcomes of certain totally deterministic calculations. These calculations have a deep underlying structure which leads a whole class of problems to behave in the same way statistically--appearing random while being entirely deterministic. If quantum probabilities worked in this way, it would not require collapse or superposition.

This is a post about physics, and I am not a physicist. I will reference a few technical details from my (extremely limited) research in mathematical physics, but they are not necessary to the fundamental concept. I am sure that I have seen similar ideas somewhere in the comments before, but searching the site for "random + determinism" didn't turn much up, so if anyone recognizes it I would like to see other posts on the subject. However, my primary purpose here is to put forward the name "Deep Structure Determinism", which jasonmcdowell used for this idea when I explained it to him on the ride back from the Berkeley Meetup yesterday.
Again, I am not a physicist; it could be that there is a one- or two-sentence explanation of why this is a useless theory--though of course that won't stop the name "Deep Structure Determinism" from being aesthetically pleasing and appropriate.
For my undergraduate thesis in mathematics, I collected numerical evidence for a generalization of the Sato-Tate Conjecture. The conjecture states, roughly, that if you take the right set of polynomials, compute the number of solutions to them over finite fields, and scale by a consistent factor, these results will have a probability distribution that is precisely a semicircle.
The reason this is the case has something to do with the solutions being symmetric (in the way that y = x² if and only if y = (-x)² is a symmetry of the first equation) and their group of symmetries being a circle. Stepping back a step, the conjecture more properly states that the numbers of solutions will be roots of a certain polynomial, which will be the minimal polynomial of a random matrix in SU(2).
That is at least as far as I follow the mathematics, if not further. However, it's far enough for me to stop and do a double take.
A "random matrix"? First, what does it mean for a matrix to be random? And given that I am writing up a totally deterministic process to feed into a computer, how can the matrix be said to be random?
A sequence of matrices is called "random" if, when you integrate over that sequence, the integral converges to the integral over the entire group of matrices. Because matrix groups are often smooth manifolds, they are natural objects to integrate over, so this definition ends up being sensible. A more practical characterization, and the one I used in the write-up for my thesis, is that a histogram of the points you are measuring should converge in shape to the shape of the group--that is, if you're looking at matrices that determine a circle, your histogram should look more and more like a semicircle as you do more computing. In other words, there is a probability distribution over the matrix space for where your matrix is likely to show up.
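As an illustration of the "histogram converges to the shape of the group" picture, here is a sketch of my own (not taken from the thesis). Haar-random SU(2) matrices correspond to uniformly random unit quaternions on the 3-sphere, and one quaternion coordinate equals half the matrix trace; its histogram converges to the semicircle density (2/π)√(1 - x²):

```python
import random, math

def half_trace_haar_su2(rng):
    # Sample a uniform unit quaternion by normalizing 4 Gaussians;
    # its first coordinate is half the trace of the corresponding
    # SU(2) matrix, which is semicircle-distributed on [-1, 1].
    q = [rng.gauss(0, 1) for _ in range(4)]
    norm = math.sqrt(sum(c * c for c in q))
    return q[0] / norm

rng = random.Random(0)
samples = [half_trace_haar_su2(rng) for _ in range(100_000)]

# Crude check against the semicircle density: the fraction of samples
# in [-0.5, 0.5] should be near the exact integral, roughly 0.609.
frac = sum(-0.5 <= s <= 0.5 for s in samples) / len(samples)
print(frac)
```

Binning these samples more finely and plotting the counts would trace out the semicircle shape described above.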
The actual computation I did involved computing solutions to a polynomial equation--a trivial and highly deterministic procedure. I then scaled them and plotted them in a histogram. If I had not known that these numbers were each coming from a specific equation, I would have said that they were random; they jumped around through the possibilities, but they were concentrated around the areas of higher probability.
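A toy version of that kind of computation can be sketched as follows (the sample curve y² = x³ + x + 1 is my choice for illustration, not necessarily one from the thesis). For each prime p, count solutions mod p by brute force, form the error term a_p = p + 1 - #points, and scale by 2√p; the Hasse bound keeps every scaled value in [-1, 1], and the Sato-Tate conjecture says their histogram tends to a semicircle:

```python
import math

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def scaled_ap(p, a=1, b=1):
    # Count affine solutions of y^2 = x^3 + a*x + b over F_p by
    # tallying how many y's square to each residue.
    squares = {}
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    affine = sum(squares.get((x**3 + a * x + b) % p, 0) for x in range(p))
    ap = p - affine  # a_p = p + 1 - (affine points + point at infinity)
    return ap / (2 * math.sqrt(p))

# A fully deterministic list of numbers that nonetheless "jumps around"
# like a semicircle-distributed random variable.
vals = [scaled_ap(p) for p in range(5, 2000) if is_prime(p)]
print(min(vals), max(vals))  # all values lie in [-1, 1] by Hasse's bound
```

Each value is pinned down completely by the equation and the prime, yet the collection behaves statistically like draws from the semicircle distribution.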
So, bringing this back to quantum physics: I am given to understand that quantum mechanics involves a lot of random matrices. These matrices give the impression of being "random" in that it seems like there are lots of possibilities, and one must get "chosen" at the end of the day. One simple way to deal with this is to postulate many worlds, wherein no one choice has a special status.
However, my experience with random matrices suggests that there could just be some sequence of matrices which satisfies the definition of being random, but which is inherently determined (in the way that the Jacobian of a given elliptic curve is "determined"). If all quantum random matrices were selected from this list, it would leave us with the subjective experience of randomness--and, since this sort of computation may not be compressible, the expectation of treating these variables as random forever. It would also leave us in a purely deterministic world, which does not branch, and which could easily be linear, unitary, differentiable, local, symmetric, and slower-than-light.