
Comment author: ete 17 October 2013 01:47:32AM *  0 points [-]

I've read all posts in the Basic Quantum Mechanics section, plus many of the links from it and a handful of others (working through the rest; I'm still only three days into this). Quantum mechanics is something I'd had vague explanations of from education and discussions with educated people, but it seemed extremely complicated and confusing, due almost precisely to the issues these posts identify with the normal way it's taught. Thank you for putting down the steps needed to walk me through rewriting my basic assumptions of reality to more accurately reflect how reality likely works; it's been very fun and interesting. I'm starting to feel like a native of the quantum universe, and.. it kinda makes sense. Definitely a whole lot more sense than my previous mangled understanding of probabilities and wave/particle duality. Having a base-level reality which works very differently from the high-level phenomena which feel more intuitive does not seem like a great surprise.

Anyway, one idea I've had which seems interesting to me, but which I am not yet knowledgeable enough to evaluate properly and would like thoughts on:

Would you, under the many worlds interpretation, be able to experimentally test whether a universe is infinite in time but not space?

I know that infinite time + finite space is not a favored model for cosmology currently, but it's still interesting to me if quantum physics testably disproves a whole class of possible universes. And if by this (or similar) reasoning an infinite-time/finite-space universe is found to be incompatible with many worlds, then finding extremely strong evidence of an infinite-time/finite-space universe (highly unlikely as I understand it) would perhaps bring many worlds into question.

Possible line of reasoning:

  1. In a universe with finite space, there is a finite configuration space (finite amount of physical space, so finite possible universal states).
  2. Any particular blob of amplitude/branch/world will eventually evolve into a state of/near maximum entropy.
  3. Maximum entropy is not entirely stable even if no work can be extracted from it, so it is not a static point in configuration space.
  4. A non-static point in a finite configuration space, left to move for infinite time, will eventually visit all possible arrangements of amplitude (configurations), infinitely many times. This includes Configuration A, which can be any possible point in configuration space.
  5. In both (particle left, sensor measures LEFT, human sees "LEFT") and (particle right, sensor measures RIGHT, human sees "RIGHT") blobs of amplitude, the universe evolves differently for a vast amount of time after the heat death of the universe, but given infinite time will at some point reach Configuration A with probability 1.
  6. Since both blobs of amplitude will, despite diverging for an unimaginable length of time, arrive at the same configuration as each other with probability 1, they are fully coherent, allowing them to interact; this is testable (and already falsified).

Points one, three, and four seem to me like the most likely weak links, but I'd be interested to know why this is not the case, if it is indeed not the case. Perhaps at maximum entropy each branch gets stuck in a unique infinite loop rather than visiting the rest of configuration space?

If the chain of reasoning holds and leads to the conclusions.. perhaps a stronger version of this argument could be constructed for a universe infinite in both time and space (depending on whether indefinitely expanding thermodynamic systems will reach all possible configurations given infinite time), though I'm already feeling somewhat out of my depth dealing with the weaker argument.

Comment author: Zaq 24 May 2017 01:38:25AM 0 points [-]

I see three distinct issues with the argument you present.

First is line 1 of your reasoning. A finite universe does not entail a finite configuration space. I think the cleanest way to see this is through superposition. If |A> and |B> are two orthogonal states in the configuration space, then so are all states of the form a|A> + b|B>, where a and b are complex numbers with |a|^2 + |b|^2 = 1. There are infinitely many such numbers we can use, so even from just two orthogonal states we can build an infinite configuration space. That said, there's something called Poincare recurrence which is sort of what you want here, except...
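A minimal numerical sketch of this point (the state labels and the angle/phase parametrization are illustrative, not from the comment): every angle theta and relative phase phi gives a distinct normalized superposition of just two orthogonal states, so even two basis states generate a continuum.

```python
import math, cmath

def superposition(theta, phi=0.0):
    """Amplitudes (a, b) of a|A> + b|B>, normalized for every theta, phi."""
    a = math.cos(theta)
    b = cmath.exp(1j * phi) * math.sin(theta)
    # Normalization |a|^2 + |b|^2 = 1 holds by construction:
    assert abs(abs(a) ** 2 + abs(b) ** 2 - 1.0) < 1e-12
    return a, b

# A continuum of distinct physical states built from two orthogonal ones;
# here are a thousand of them, one per value of theta.
states = [superposition(k * math.pi / 1000) for k in range(1000)]
```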

Line 4 is in error. Even if you did have a finite configuration space, a non-static point could just evolve in a loop, which need not cover every element of the configuration space. Two distinct points could evolve in loops that never go anywhere near each other.
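A toy illustration of this, assuming nothing beyond the comment's own setup: model deterministic evolution on a finite configuration space as a permutation (closed quantum evolution is unitary, hence invertible, so an invertible map is the right caricature). A permutation decomposes into disjoint cycles, and a trajectory only ever visits its own cycle.

```python
# Six configurations; `evolve` is one deterministic, invertible time step.
# States 0-2 form one cycle and states 3-5 form another.
evolve = {0: 1, 1: 2, 2: 0, 3: 4, 4: 5, 5: 3}

def orbit(state, steps=100):
    """The set of configurations a trajectory visits in `steps` time steps."""
    visited = set()
    for _ in range(steps):
        visited.add(state)
        state = evolve[state]
    return visited

# Each starting point loops forever without covering the whole space,
# and the two orbits never intersect.
```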

Finally, even if you could guarantee that two distinct points would each eventually evolve through some common point A, line 6 does not necessarily follow because it is technically possible to have a situation where both evolutions do in fact reach A infinitely many times, but never simultaneously. Admittedly though, it would require fine-tuning to ensure that two initially-distinct states never hit "nearly A" at the same time, which might be enough.

Comment author: VAuroch 27 November 2013 09:10:08AM -1 points [-]

You are wrong, and I will explain why.

If you "have ((◻C)->C)", that is an assertion/assumption that ◻((◻C)->C). By Loeb's theorem, this implies ◻C. This is different from what you wrote, which claims that ((◻C)->C) implies ◻C.

Comment author: Zaq 12 May 2017 04:38:47PM 0 points [-]

Wow. I've never run into a text using "we have" as assuming something's provability, rather than assuming its truth.

So the application of the deduction theorem is just plain wrong then? If what you actually get via Lob's theorem is ◻((◻C)->C) ->◻C, then the deduction theorem does not give the claimed ((◻C)->C)->C, but instead gives ◻((◻C)->C)->C, from which the next inference does not follow.
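Written out in standard provability-logic notation (added here only to make the distinction explicit), the three statements being juggled are:

```latex
% Rule form of Löb's theorem: if the theory proves (Box C -> C), it proves C.
\text{If } \vdash \Box C \rightarrow C \text{, then } \vdash C.
% Internalized form, provable as a single theorem of the theory:
\vdash \Box(\Box C \rightarrow C) \rightarrow \Box C
% Not valid in general, and not what Löb's theorem licenses as a schema:
(\Box C \rightarrow C) \rightarrow \Box C
```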

Comment author: rkyeun 08 December 2015 11:41:08PM *  1 point [-]

"You've only moved the problem down one step."

Moving the problem down one step puts it at the bottom.

"The problem is that this still doesn't allow me to postdict which of the two halves the part of me that is typing this should have in his memory right now."

One half of you should have one, and the other half should have the other. You should be aware intellectually that it is only the disconnect between your two halves' brains not superimposing which prevents you from having both experiences in a singular person, and know that it is your physical entanglement with the fired particle, which went both ways, that is the cause.

There's nothing to postdict. The phenomenon is not merely explained, but explained away. The particle split; on one side there is a you that saw it split right, on one side there is a you that saw it split left, and both of you are aware of this fact, and aware that the other you exists on the other side seeing the other result, because the particle always goes both ways and always makes each of you.

There is no more to explain. You are in all branches, and it is not mysterious that each of you in each branch sees its branch and not the others. And unless some particularly striking consequence happened, all of them are writing messages similar to this, and getting replies similar to this.

Comment author: Zaq 13 April 2016 04:57:35AM *  1 point [-]

The issue is not want of an explanation for the phenomenon, away or otherwise. We have an explanation of the phenomenon, in fact we have several. That's not the issue. What I'm talking about here is the inherent, not-a-result-of-my-limited-knowledge probabilities that are a part of all explanations of the phenomenon.

Past me apparently insisted on trying to explain this in terminology that works well in collapse or pilot-wave models, but not in many-worlds models. Sorry about that. To try and clear this up, let me go through a "guess the beam-splitter result" game in many-worlds terminology and compare that to a "guess the trillionth digit of pi" game in the same terminology.

Aside: Technically it's the amplitudes that split in many-worlds models, and somehow these amplitudes are multiplied by their complex conjugates to get you answers to questions about guessing games (no model has an explanation for that part). As is common around these parts, I'm going to ignore this and talk as if it's the probabilities themselves that split. I guess nobody likes writing "square root" all the time.

Set up a 50/50 beam-splitter. Put a detector in one path and block the other. Write your choice of "Detected" or "Not Detected" on a piece of paper. Now fire a single photon. In Everett-speak, half of the yous end up in branches where the photon's path matches your guess and half of the yous don't. The 50/50 nature of this split remains even if you know the exact quantum state of the photon beforehand. Furthermore, the yous that use all their physics knowledge to predict their observations end up in correct-guess branches in no larger a proportion than the yous that make their predictions by flipping a coin, always guessing "Detected", or employing literally any other strategy that generates valid guesses. The 50/50 value of this branching process is completely decoupled from your predictions, no matter what information you use to make those predictions.

Compare this to the process of guessing the trillionth digit of pi. If you make your guess by rolling a quantum die, then 1 out of 10 yous will end up in a branch where your guess matches the actual trillionth digit of pi. If you instead use those algorithms you know to calculate a guess, and you code/run them correctly, then basically all of the yous end up in a branch where your guess is correct.

We now see the fundamental difference. Changing your guessing strategy results in different correct/incorrect branching ratios for the "guess the trillionth digit of pi" game but not for the "guess the beam-splitter result" game. This is the Everett-speak version of saying that the beam-splitter's 50/50 odds are a property of the universe while the trillionth digit of pi's 1/10 odds are a function of our (current) ignorance. You can opt to replace "odds" with "branching ratios" and declare that there is no probability of any kind, but that just seems like semantics to me. In particular, the trillionth digit of pi should not be the example that prompts this decision. Even in the many-worlds model there's still a fundamental difference between it and the quantum processes that physicists cite as intrinsically random.
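The contrast between the two games can be sketched numerically. This is a hedged illustration, not physics: the 50/50 branching is stood in for by a classical random number generator, the digit of pi is computed via Machin's formula in integer arithmetic, and all function names are mine.

```python
import random

# --- Game 1: guess the beam-splitter result ---
# The outcome is independent of anything the guesser knows, so every
# strategy lands in a correct-guess branch about half the time.
def beam_splitter_game(strategy, trials=100_000):
    hits = 0
    for _ in range(trials):
        detected = random.random() < 0.5    # classical stand-in for the split
        hits += (strategy() == detected)
    return hits / trials

always_detected = lambda: True
coin_flip = lambda: random.random() < 0.5

# --- Game 2: guess the k-th decimal digit of pi ---
# Computable with certainty, here via Machin's formula:
# pi = 16*arctan(1/5) - 4*arctan(1/239).
def arctan_inv(x, scale):
    """Floor of arctan(1/x) * scale, via the alternating Taylor series."""
    total = term = scale // x
    n, sign = 3, -1
    while term:
        term //= x * x
        total += sign * (term // n)
        n += 2
        sign = -sign
    return total

def pi_digit(k):
    """The k-th decimal digit of pi (k=1 gives the 1 in 3.14159...)."""
    scale = 10 ** (k + 10)                  # ten guard digits
    pi_scaled = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return int(str(pi_scaled)[k])           # string starts "31415926535..."

# Rolling a die to guess a pi digit succeeds in ~1/10 of branches, while
# running pi_digit succeeds in essentially all of them; no strategy beats
# ~1/2 on the beam-splitter.
```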

Comment author: Zaq 11 April 2016 09:25:11PM 0 points [-]

I two-box.

Three days later, "Omega" appears in the sky and makes an announcement. "Greetings, earthlings. I am sorry to say that I have lied to you. I am actually Alpha, a galactic superintelligence who hates that Omega asshole. I came to predict your species' reaction to my arch-nemesis Omega, and I must say that I am disappointed. So many of you chose the obviously-irrational single-box strategy that I must decree your species unworthy of this universe. Goodbye."

Giant laser beam then obliterates earth. I die wishing I'd done more to warn the world of this highly-improbable threat.

TLDR: I don't buy this post's argument that I should become the type of agent that sees one-boxing on Newcomb-like problems as rational. It is trivial to construct any number of no-less plausible scenarios where a superintelligence descends from the heavens and puts a few thousand people through Newcomb's problem before suddenly annihilating those who one-box. The presented argument for becoming the type of agent that Omega predicts will one-box can be equally used to argue for becoming the type of agent that Alpha predicts will two-box. Why then should it sway me in either direction?

Comment author: Zaq 05 February 2016 12:14:59AM 0 points [-]

"Why did the universe seem to start from a condition of low entropy?"

I'm confused here. If we don't go with a big universe and instead just say that our observable universe is the whole thing, then tracing back time we find that it began with a very small volume. While it's true that such a system would necessarily have low entropy, that's largely because small volume = not many different places to put things.

Alternative hypothesis: The universe began in a state of maximal entropy. This maximum value was "low" compared to present day because the early universe was small. As the universe expands, its maximum entropy grows. Its realized entropy also grows, just not as fast as its maximal entropy.

Comment author: Zaq 22 October 2015 11:13:25PM *  1 point [-]

"Specifically, going between two universal machines cannot increase the hypothesis length any more than the length of the compiler from one machine to the other. This length is fixed, independent of the hypothesis, so the more data you use, the less this difference matters."

This doesn't completely resolve my concern here, as there are infinitely many possible Turing machines. If you pick one and I'm free to pick any other, is there a bound on the length of the compiler? If not, then I don't see how the compiler length placing a bound on any specific change in Turing machine makes the problem of which machine to use irrelevant.

To be clear: I am aware that, starting with different machines, the process of updating on shared observations will eventually lead us to similar distributions even if we started with wildly different priors. My concern is that if "wildly different" is unbounded then "eventually" might also be unbounded, even for a fixed value of "similar." If this does indeed happen, then it's not clear to me how Solomonoff induction does anything more useful than "pick your favorite normalized distribution without any 0s or 1s and then update via Bayes."
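For reference, the bound under discussion is the invariance theorem; in standard notation (the constant's name is mine):

```latex
% For universal machines U and V there is a constant c_{UV} -- the length
% of a program for U that emulates V -- such that, for every string x:
K_U(x) \;\le\; K_V(x) + c_{UV}
% c_{UV} is independent of x, but nothing bounds c_{UV} over all pairs
% (U, V), which is exactly the freedom the comment is worried about.
```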

Edit: Also thanks for the intro. It's a lot more accessible than anything else I've encountered on the topic.

Comment author: rkyeun 29 December 2014 10:52:35PM *  2 points [-]

"Set up a two-slit configuration and put a detector at one slit, and you see it firing half the time."

No, I see it firing both ways every time. In one world, I see it going left, and in another I see it going right. But because these very different states of my brain involve a great many particles in different places, the interactions between them are vanishingly small, and my two otherworld brains don't share the same thought. I am not aware of my other self who has seen the particle go the other way.

"You may say that the electron goes both ways every time, but we still only have the detector firing half the time."

Both outcomes occur every time: in one world the detector fires and in the other it doesn't, each corresponding to the particle's path in that world. And since that creates a macroscopic divergence, the one detector doesn't send an interference signal to the other world.

"We also cannot predict which half of the trials will have the detector firing and which won't."

We can predict that it will go both ways each time, dividing the world in twain along its amplitude thickness, and that in each world we will observe the way it went in that world. If we are clever about it, we can arrange to have all particles end in the same place when we are done, and merge those worlds back together, creating an interference pattern which we can detect to demonstrate that the particle went both ways.

This is problematic because entanglement is contagious, and as soon as something macroscopic becomes affected, putting Humpty Dumpty back together again becomes prohibitive. Then the interference pattern vanishes and we're left with divergent worlds, each seeing only the way it went on their side, and an other side which always saw it go the other way, with neither of them communicating to each other.

"And everything we understand about particle physics indicates that both the 1/2 and the trial-by-trial unpredictability is NOT coming from ignorance of hidden properties or variables but from the fundamental way the universe works."

Correct. There are no hidden variables. It goes both ways every time. The dice are not invisible as they roll. There are instead no dice.

Comment author: Zaq 01 October 2015 08:27:32PM 0 points [-]

You've only moved the problem down one step.

Five years ago I sat in a lab with a beam-splitter and a single-photon multiplier tube. I watched as the SPMT clicked half the time and didn't click half the time, with no way to predict which I would observe. You're claiming that the tube clicked every time, and that the part of me that noticed one half is very disconnected from the part of me that noticed the other half. The problem is that this still doesn't allow me to postdict which of the two halves the part of me that is typing this should have in his memory right now.

Take the me sitting here right now, with the memory of the specific half of the clicks he has right now. As far as we understand physics, he can't postdict which memory that should have been. Even in your model, he can postdict that there will be many branches of him with each possible memory, but he can't postdict which of those branches he'll be - only the probability of him being any one of the branches.

Comment author: Zaq 28 October 2014 01:07:01AM 26 points [-]

Did the survey, except digit ratio due to lack of precision measuring devices.

As for feedback, I had some trouble interpreting a few of the questions. There were some times when you defined terms like human biodiversity, and I agreed with some of the claims in the definition but not others, but since I had no real way to weight the claims by importance it was difficult for me to turn my conclusions into a single confidence measurement. I also had no idea whether the best-selling computer game question was supposed to account for inflation or general growth of the videogame market, nor whether we were measuring in terms of copies sold or revenue earned or something else entirely, nor whether console games or games that "sell" for $0 counted. I ended up copping out by listing a game that is technically included in a bit of software I knew sold very well for its time (and not for free), but the software was not sold as a computer game.

Also, a weird thing happened with the calibration questions. When I was very unsure which of a large number of possible answers was correct, and especially if I wasn't even sure how many possible answers there were, I found myself wanting to write an answer that was obviously impossible (like writing "Mars" for Obama's birth state) and putting a 0 for the calibration. I didn't actually do this, but it sure was tempting.

Comment author: rkyeun 14 August 2012 06:37:22PM 1 point [-]

Dr. Many the Physicist would be wrong about the electron too. The electron goes both ways, every time. There's no chance involved there either.

But you're right, it is not the ten trillionth digit of pi that proves it.

Comment author: Zaq 27 October 2014 11:08:57PM 0 points [-]

The Many Physicists description never talked about the electron only going one way. It talked about detecting the electron. There's no metaphysics there, only experiment. Set up a two-slit configuration and put a detector at one slit, and you see it firing half the time. You may say that the electron goes both ways every time, but we still only have the detector firing half the time. We also cannot predict which half of the trials will have the detector firing and which won't. And everything we understand about particle physics indicates that both the 1/2 and the trial-by-trial unpredictability is NOT coming from ignorance of hidden properties or variables but from the fundamental way the universe works.

In response to Occam's Razor
Comment author: Nick_Tarleton 27 September 2007 12:16:02AM 6 points [-]

"In science you do not get two theories"

You're right - there are an infinite number of theories consistent with any set of observations. Any set. All observed facts are technically consistent with the prediction that gravity will reverse in one hour, but nobody believes that because of... Occam's Razor!

Comment author: Zaq 15 April 2014 06:22:37PM *  0 points [-]

I don't think this is what's actually going on in the brains of most humans.

Suppose there were ten random people who each told you that gravity would be suddenly reversing soon, but each one predicted a different month. For simplicity, person 1 predicts the gravity reversal will come in 1 month, person 2 predicts it will come in 2 months, etc.

Now you wait a month, and there's no gravity reversal, so clearly person 1 is wrong. You wait another month, and clearly person 2 is wrong. Then person 3 is proved wrong, as is person 4 and then 5 and then 6 and 7 and 8 and 9. And so when you approach the 10-month mark, you probably aren't going to be expecting a gravity-reversal.

Now, do you not suspect a gravity reversal at month ten simply because it's not as simple as saying "there will never be a gravity reversal," or is your dismissal substantially motivated by the fact that the claim type-matches nine other claims that have already been disproven? I think that in practice most people end up adopting the latter approach.
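The two approaches come apart in a toy Bayesian treatment of the scenario. This is a hedged sketch: the hypothesis names, the uniform prior, and the update rule are assumptions made for illustration, not anything from the comment.

```python
# Eleven hypotheses: "reversal at month i" for i = 1..10, plus "never".
priors = {f"month_{i}": 1.0 / 11 for i in range(1, 11)}
priors["never"] = 1.0 / 11

def update_no_reversal(dist, month):
    """Condition on observing no reversal during the given month."""
    post = dict(dist)
    post[f"month_{month}"] = 0.0      # that specific prediction is falsified
    total = sum(post.values())
    return {h: p / total for h, p in post.items()}

dist = priors
for month in range(1, 10):            # months 1 through 9 pass uneventfully
    dist = update_no_reversal(dist, month)

# Plain conditioning leaves month_10 and "never" equally likely (1/2 each).
# Discounting month_10 because it type-matches nine failed claims needs
# extra structure, e.g. a shared prior on the process generating claims.
```

So simplicity-based conditioning alone reproduces neither intuition exactly; the type-matching heuristic corresponds to correlating the ten claims under a common prior, which this flat model deliberately omits.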
