Zaq · 7y · 0

I see three distinct issues with the argument you present.

First is line 1 of your reasoning. A finite universe does not entail a finite configuration space. I think the cleanest way to see this is through superposition. If |A> and |B> are two orthogonal states in the configuration space, then the configuration space also contains every state of the form a|A> + b|B>, where a and b are complex numbers with |a|^2 + |b|^2 = 1. There are infinitely many such pairs of numbers, so even from just two orthogonal states we can build an infinite configuration space. That said, there's something called Poincaré recurrence, which is sort of what you want here, except...
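To make the continuum concrete, here's a minimal numpy sketch (my own illustration; |A> and |B> are just encoded as basis vectors, and the particular theta and phi are arbitrary):

```python
import numpy as np

# |A> and |B> encoded as orthogonal basis vectors.
A = np.array([1, 0], dtype=complex)
B = np.array([0, 1], dtype=complex)

# Every (theta, phi) gives a distinct normalized state a|A> + b|B>,
# so two orthogonal states already span a continuum of configurations.
theta, phi = 0.7, 1.3  # arbitrary example values
a = np.cos(theta / 2)
b = np.sin(theta / 2) * np.exp(1j * phi)
psi = a * A + b * B
print(abs(a)**2 + abs(b)**2)  # 1.0, so psi is a valid state
```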

Line 4 is in error. Even if you did have a finite configuration space, a non-static point could just evolve in a loop, which need not cover every element of the configuration space. Two distinct points could evolve in loops that never go anywhere near each other.

Finally, even if you could guarantee that two distinct points would each eventually evolve through some common point A, line 6 does not necessarily follow because it is technically possible to have a situation where both evolutions do in fact reach A infinitely many times, but never simultaneously. Admittedly though, it would require fine-tuning to ensure that two initially-distinct states never hit "nearly A" at the same time, which might be enough.
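For instance (a toy case I'm constructing, not anything from the original argument), two period-2 loops over a three-element configuration space can both pass through A forever without ever being at A together:

```python
# Two deterministic loops over the finite configuration space {A, B, C}.
# Each visits A infinitely often, but they are out of phase, so they
# never occupy A on the same time step.
loop1 = ["A", "B"]  # at A on even steps
loop2 = ["C", "A"]  # at A on odd steps

for t in range(8):
    s1, s2 = loop1[t % 2], loop2[t % 2]
    print(t, s1, s2, s1 == s2 == "A")  # last column: always False
```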

Zaq · 7y · 0

Wow. I've never run into a text that uses "we have" to mean assuming something's provability, rather than assuming its truth.

So the application of the deduction theorem is just plain wrong, then? If what you actually get via Löb's theorem is ◻((◻C)->C) -> ◻C, then the deduction theorem does not give the claimed ((◻C)->C)->C, but instead gives ◻((◻C)->C)->C, from which the next inference does not follow.
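Writing out what I mean (my own reconstruction, with \Box as the provability box):

```latex
% What Löb's theorem actually licenses (internalized form):
\vdash \Box(\Box C \to C) \to \Box C

% The claimed step would need the antecedent un-boxed:
\vdash (\Box C \to C) \to C \qquad \text{(not available)}

% Discharging the hypothesis that was actually assumed -- the boxed
% sentence -- gives at most:
\vdash \Box(\Box C \to C) \to C
```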

Zaq · 8y · 1

The issue is not want of an explanation for the phenomenon, "away" or otherwise. We have an explanation of the phenomenon; in fact, we have several. That's not the issue. What I'm talking about here is the inherent, not-a-result-of-my-limited-knowledge probabilities that are a part of every explanation of the phenomenon.

Past me apparently insisted on trying to explain this in terminology that works well in collapse or pilot-wave models, but not in many-worlds models. Sorry about that. To try and clear this up, let me go through a "guess the beam-splitter result" game in many-worlds terminology and compare that to a "guess the trillionth digit of pi" game in the same terminology.

Aside: Technically it's the amplitudes that split in many-worlds models, and somehow these amplitudes are multiplied by their complex conjugates to get you answers to questions about guessing games (no model has an explanation for that part). As is common around these parts, I'm going to ignore this and talk as if it's the probabilities themselves that split. I guess nobody likes writing "square root" all the time.
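For concreteness, the step I'm waving at (a two-line Born-rule calculation; the particular relative phase is an arbitrary choice):

```python
import numpy as np

# Amplitudes for the two branches of a 50/50 beam-splitter; the relative
# phase doesn't affect the odds.
amp_detected = 1 / np.sqrt(2)
amp_not_detected = 1j / np.sqrt(2)

# Born rule: probability = amplitude times its complex conjugate.
p_detected = (amp_detected * np.conj(amp_detected)).real
p_not_detected = (amp_not_detected * np.conj(amp_not_detected)).real
print(p_detected, p_not_detected)  # 0.5 0.5
```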

Set up a 50/50 beam-splitter. Put a detector in one path and block the other. Write your choice of "Detected" or "Not Detected" on a piece of paper. Now fire a single photon. In Everett-speak, half of the yous end up in branches where the photon's path matches your guess, and half of the yous don't. The 50/50 nature of this split remains even if you know the exact quantum state of the photon beforehand. Furthermore, the yous that use all their physics knowledge to predict their observations end up correct in no larger a proportion of branches than the yous that guess by flipping a coin, always guessing "Detected," or employing literally any other strategy that generates valid guesses. The 50/50 value of this branching process is completely decoupled from your predictions, no matter what information you use to make those predictions.

Compare this to the process of guessing the trillionth digit of pi. If you make your guess by rolling a quantum die, then 1 out of 10 yous will end up in a branch where your guess matches the actual trillionth digit of pi. If you instead use those algorithms you know to calculate a guess, and you code/run them correctly, then basically all of the yous end up in a branch where your guess is correct.

We now see the fundamental difference. Changing your guessing strategy results in different correct/incorrect branching ratios for the "guess the trillionth digit of pi" game but not for the "guess the beam-splitter result" game. This is the Everett-speak version of saying that the beam-splitter's 50/50 odds are a property of the universe while the trillionth digit of pi's 1/10 odds are a function of our (current) ignorance. You can opt to replace "odds" with "branching ratios" and declare that there is no probability of any kind, but that just seems like semantics to me. In particular, the example of the trillionth digit of pi should not be what prompts this decision. Even in the many-worlds model there's still a fundamental difference between that and the quantum processes that physicists cite as intrinsically random.
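A toy simulation makes the asymmetry visible (the strategies and the stand-in digit are my own placeholders; I have not computed the trillionth digit of pi):

```python
import random

N = 100_000

# Game 1: guess the beam-splitter result. Each trial is a fresh 50/50
# event, so every guessing strategy matches it half the time.
strategies = {
    "always Detected": lambda: "Detected",
    "coin flip": lambda: random.choice(["Detected", "Not Detected"]),
}
for name, guess in strategies.items():
    wins = sum(guess() == random.choice(["Detected", "Not Detected"])
               for _ in range(N))
    print(f"beam-splitter, {name}: {wins / N:.3f}")  # ~0.5 for both

# Game 2: guess the trillionth digit of pi. The answer is a fixed digit;
# 7 below is a stand-in, NOT the real value.
TRILLIONTH_DIGIT = 7
die_wins = sum(random.randrange(10) == TRILLIONTH_DIGIT for _ in range(N))
print(f"pi, quantum die: {die_wins / N:.3f}")  # ~0.1
print("pi, computed correctly: 1.000")  # a correct algorithm always matches
```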

Zaq · 8y · -1

I two-box.

Three days later, "Omega" appears in the sky and makes an announcement. "Greetings, earthlings. I am sorry to say that I have lied to you. I am actually Alpha, a galactic superintelligence who hates that Omega asshole. I came to predict your species' reaction to my arch-nemesis Omega, and I must say that I am disappointed. So many of you chose the obviously-irrational single-box strategy that I must decree your species unworthy of this universe. Goodbye."

A giant laser beam then obliterates Earth. I die wishing I'd done more to warn the world of this highly improbable threat.


TLDR: I don't buy this post's argument that I should become the type of agent that sees one-boxing on Newcomb-like problems as rational. It is trivial to construct any number of no less plausible scenarios in which a superintelligence descends from the heavens, puts a few thousand people through Newcomb's problem, and then suddenly annihilates those who one-box. The presented argument for becoming the type of agent that Omega predicts will one-box can equally be used to argue for becoming the type of agent that Alpha predicts will two-box. Why then should it sway me in either direction?

Zaq · 8y · 0

"Why did the universe seem to start from a condition of low entropy?"

I'm confused here. If we don't go with a big universe and instead just say that our observable universe is the whole thing, then tracing back time we find that it began with a very small volume. While it's true that such a system would necessarily have low entropy, that's largely because small volume = not many different places to put things.

Alternative hypothesis: The universe began in a state of maximal entropy. This maximum value was "low" compared to present day because the early universe was small. As the universe expands, its maximum entropy grows. Its realized entropy also grows, just not as fast as its maximal entropy.

Zaq · 8y · 1

"Specifically, going between two universal machines cannot increase the hypothesis length any more than the length of the compiler from one machine to the other. This length is fixed, independent of the hypothesis, so the more data you use, the less this difference matters."

This doesn't completely resolve my concern here, as there are infinitely many possible Turing machines. If you pick one and I'm free to pick any other, is there a bound on the length of the compiler? If not, then I don't see how the fact that the compiler length bounds any specific change of Turing machine makes the problem of which machine to use irrelevant.

To be clear: I am aware that if we start with different machines, the process of updating on shared observations will eventually lead us to similar distributions even if we started with wildly different priors. My concern is that if "wildly different" is unbounded, then "eventually" might also be unbounded, even for a fixed value of "similar." If this does indeed happen, then it's not clear to me how SI does anything more useful than "pick your favorite normalized distribution without any 0s or 1s and then update via Bayes."
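In the usual notation, my worry is that the invariance theorem's constant is uniform in the hypothesis but not in the machine pair:

```latex
% Invariance theorem: for universal machines U, V there is a compiler
% constant c_{UV}, independent of the hypothesis x, with
K_U(x) \;\le\; K_V(x) + c_{UV} \quad \text{for all } x.

% But nothing bounds the constant over the choice of machine:
\sup_{V}\, c_{UV} \;=\; \infty,
% so if the machine is unconstrained, the initial disagreement between
% two priors -- and the data needed to wash it out -- is also unbounded.
```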

Edit: Also thanks for the intro. It's a lot more accessible than anything else I've encountered on the topic.

Zaq · 9y · 0

You've only moved the problem down one step.

Five years ago I sat in a lab with a beam-splitter and a single-photon multiplier tube. I watched as the SPMT clicked half the time and didn't click half the time, with no way to predict which I would observe. You're claiming that the tube clicked every time, and that the part of me that noticed one half is very disconnected from the part of me that noticed the other half. The problem is that this still doesn't allow me to postdict which of the two halves the part of me that is typing this should have in his memory right now.

Take the me sitting here right now, with the memory of the specific half of the clicks he has right now. As far as we understand physics, he can't postdict which memory that should have been. Even in your model, he can postdict that there will be many branches of him with each possible memory, but he can't postdict which of those branches he'll be - only the probability of him being any one of the branches.

Zaq · 9y · 33

Did the survey, except digit ratio due to lack of precision measuring devices.

As for feedback, I had some trouble interpreting a few of the questions. There were some times when you defined terms like human biodiversity, and I agreed with some of the claims in the definition but not others, but since I had no real way to weight the claims by importance, it was difficult for me to turn my conclusions into a single confidence measurement. I also had no idea whether the best-selling computer game question was supposed to account for inflation or general growth of the videogame market, nor whether we were measuring in terms of copies sold or revenue earned or something else entirely, nor whether console games or games that "sell" for $0 counted. I ended up copping out by listing a game that is technically included in a bit of software I knew sold very well for its time (and not for free), but the software was not sold as a computer game.

Also, a weird thing happened with the calibration questions. When I was very unsure which of a large number of possible answers was correct, and especially if I wasn't even sure how many possible answers there were, I found myself wanting to write an answer that was obviously impossible (like writing "Mars" for Obama's birth state) and putting a 0 for the calibration. I didn't actually do this, but it sure was tempting.

Zaq · 9y · 0

The Many Physicists description never talked about the electron only going one way. It talked about detecting the electron. There's no metaphysics there, only experiment. Set up a two-slit configuration and put a detector at one slit, and you see it firing half the time. You may say that the electron goes both ways every time, but we still only have the detector firing half the time. We also cannot predict which half of the trials will have the detector firing and which won't. And everything we understand about particle physics indicates that both the 1/2 and the trial-by-trial unpredictability are NOT coming from ignorance of hidden properties or variables but from the fundamental way the universe works.

Zaq · 10y · -1

I don't think this is what's actually going on in the brains of most humans.

Suppose there were ten random people who each told you that gravity would be suddenly reversing soon, but each one predicted a different month. For simplicity, person 1 predicts the gravity reversal will come in 1 month, person 2 predicts it will come in 2 months, etc.

Now you wait a month, and there's no gravity reversal, so clearly person 1 is wrong. You wait another month, and clearly person 2 is wrong. Then person 3 is proved wrong, as is person 4 and then 5 and then 6 and 7 and 8 and 9. And so when you approach the 10-month mark, you probably aren't going to be expecting a gravity-reversal.

Now, do you dismiss the gravity reversal at month ten simply because that claim is not as simple as saying "there will never be a gravity reversal," or is your dismissal substantially motivated by the fact that the claim type-matches nine other claims that have already been disproven? I think that in practice most people end up adopting the latter approach.
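One way to make the two readings concrete (a toy Bayes sketch; the flat prior is a made-up choice): plain conditioning over the eleven hypotheses actually makes person 10's claim look *better* after nine failures, so the dismissal has to come from somewhere else, such as a learned type-match penalty.

```python
# Eleven hypotheses: reversal in month k (k = 1..10), or never.
# Start with a flat prior over all of them (a made-up choice).
priors = {f"month {k}": 1 / 11 for k in range(1, 11)}
priors["never"] = 1 / 11

# Condition on nine months passing with no reversal: hypotheses for
# months 1-9 are falsified; the survivors just renormalize.
falsified = {f"month {k}" for k in range(1, 10)}
posterior = {h: p for h, p in priors.items() if h not in falsified}
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

print(posterior)  # {'month 10': 0.5, 'never': 0.5}
# Conditioning alone has RAISED month 10 from 1/11 to 1/2. The intuition
# that month 10 is now *less* credible must come from elsewhere -- e.g.
# demoting it because it type-matches nine already-failed claims.
```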
