Wow. I've never run into a text that uses "we have" to mean assuming something's provability, rather than assuming its truth.
So the application of the deduction theorem is just plain wrong, then? If what you actually get via Löb's theorem is ◻((◻C)->C) -> ◻C, then the deduction theorem does not give the claimed ((◻C)->C) -> C, but instead gives ◻((◻C)->C) -> C, from which the next inference does not follow.
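For anyone else trying to track this, here is my rendering of the three statements in play (a sketch in my own notation, not something from the original post):

```latex
% Löb's rule, a metatheoretic inference rule:
%   from  \vdash \Box C \to C,  infer  \vdash C
% Löb's theorem formalized inside the system (what you actually get):
\Box(\Box C \to C) \to \Box C
% The stronger, invalid reading that the flawed proof needs:
(\Box C \to C) \to C
```

The deduction theorem only discharges hypotheses of a derivation; nothing about it lets you strip the box off the antecedent.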
The issue is not want of an explanation for the phenomenon, away or otherwise. We have an explanation of the phenomenon, in fact we have several. That's not the issue. What I'm talking about here is the inherent, not-a-result-of-my-limited-knowledge probabilities that are a part of all explanations of the phenomenon.
Past me apparently insisted on trying to explain this in terminology that works well in collapse or pilot-wave models, but not in many-worlds models. Sorry about that. To try and clear this up, let me go through a "guess the beam-spli...
I two-box.
Three days later, "Omega" appears in the sky and makes an announcement. "Greetings, earthlings. I am sorry to say that I have lied to you. I am actually Alpha, a galactic superintelligence who hates that Omega asshole. I came to predict your species' reaction to my arch-nemesis Omega, and I must say that I am disappointed. So many of you chose the obviously-irrational single-box strategy that I must decree your species unworthy of this universe. Goodbye."
Giant laser beam then obliterates earth. I die wishing I'd done more ...
"Why did the universe seem to start from a condition of low entropy?"
I'm confused here. If we don't go with a big universe and instead just say that our observable universe is the whole thing, then tracing back time we find that it began with a very small volume. While it's true that such a system would necessarily have low entropy, that's largely because small volume = not many different places to put things.
Alternative hypothesis: The universe began in a state of maximal entropy. This maximum value was "low" compared to present day...
"Specifically, going between two universal machines cannot increase the hypothesis length any more than the length of the compiler from one machine to the other. This length is fixed, independent of the hypothesis, so the more data you use, the less this difference matters."
This doesn't completely resolve my concern here, as there are infinitely many possible Turing machines. If you pick one and I'm free to pick any other, is there a bound on the length of the compiler? If not, then I don't see how the compiler length placing a bound on any spe...
You've only moved the problem down one step.
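For concreteness, the quoted claim is (as I read it) the invariance theorem; writing K_U for description length relative to machine U:

```latex
% For universal machines U, V there is a constant c_{UV} -- the length of a
% U-interpreter written for V -- such that for every hypothesis x:
K_V(x) \le K_U(x) + c_{UV}
% c_{UV} is independent of x, but it does depend on the pair (U, V);
% nothing bounds it over all choices of V, which is exactly the worry above.
```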
Five years ago I sat in a lab with a beam-splitter and a single-photon multiplier tube. I watched as the SPMT clicked half the time and didn't click half the time, with no way to predict which I would observe. You're claiming that the tube clicked every time, and that the part of me that noticed one half is very disconnected from the part of me that noticed the other half. The problem is that this still doesn't allow me to postdict which of the two halves the part of me that is typing this should have in his mem...
Did the survey, except the digit ratio question, for lack of sufficiently precise measuring devices.
As for feedback, I had some trouble interpreting a few of the questions. There were times when you defined terms like human biodiversity, and I agreed with some of the claims in the definition but not others, but since I had no real way to weight the claims by importance it was difficult for me to turn my conclusions into a single confidence measurement. I also had no idea whether the best-selling computer game question was supposed to account for inflation or general grow...
The Many Physicists description never talked about the electron only going one way. It talked about detecting the electron. There's no metaphysics there, only experiment. Set up a two-slit configuration and put a detector at one slit, and you see it firing half the time. You may say that the electron goes both ways every time, but we still only have the detector firing half the time. We also cannot predict which half of the trials will have the detector firing and which won't. And everything we understand about particle physics indicates that both the 1/2 and the trial-by-trial unpredictability are NOT coming from ignorance of hidden properties or variables, but from the fundamental way the universe works.
I don't think this is what's actually going on in the brains of most humans.
Suppose there were ten random people who each told you that gravity would be suddenly reversing soon, but each one predicted a different month. For simplicity, person 1 predicts the gravity reversal will come in 1 month, person 2 predicts it will come in 2 months, etc.
Now you wait a month, and there's no gravity reversal, so clearly person 1 is wrong. You wait another month, and clearly person 2 is wrong. Then person 3 is proved wrong, as is person 4 and then 5 and then 6 and 7 ...
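A minimal numeric sketch of how the updates go, with made-up numbers (say a 1% prior that any of the ten is right, split evenly among them):

```python
# After each falsified prediction, credence shifts between the surviving
# predictors and the "no reversal" hypothesis. Prior is invented for illustration.
from fractions import Fraction

p_someone_right = Fraction(1, 100)
prior = {k: p_someone_right / 10 for k in range(1, 11)}  # person k is right
prior["none"] = 1 - p_someone_right

for month in range(1, 10):
    del prior[month]  # person `month` is now falsified
    total = sum(prior.values())
    posterior = {k: v / total for k, v in prior.items()}
    remaining = 1 - posterior["none"]  # credence that some survivor is right
    print(month, float(remaining))  # shrinks every month
```

Each elimination makes the surviving predictors relatively more credible among themselves, but the total credence in any reversal ever happening keeps dropping.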
Suppose my decision algorithm for the "both boxes are transparent" case is to take only box B if and only if it is empty, and to take both boxes if and only if box B has a million dollars in it. How does Omega respond? No matter how it handles box B, its implied prediction will be wrong.
Perhaps just as slippery, what if my algorithm is to take only box B if and only if it contains a million dollars, and to take both boxes if and only if box B is empty? In this case, anything Omega predicts will be accurate, so what prediction does it make?
Com...
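To make the fixed-point structure explicit, a minimal sketch (my framing and stakes, not anything from the original problem statement):

```python
# Omega fills box B with $1M iff it predicts one-boxing. A prediction is
# consistent iff the agent's actual choice, given the resulting contents,
# matches what was predicted.

def consistent_predictions(decision):
    """Return each box-B content level for which Omega's prediction
    agrees with the agent's actual choice."""
    fixed_points = []
    for contents in (0, 10**6):
        predicted_one_box = (contents == 10**6)  # full iff one-boxing predicted
        actual_one_box = decision(contents)
        if predicted_one_box == actual_one_box:
            fixed_points.append(contents)
    return fixed_points

perverse = lambda contents: contents == 0        # one-box iff B is empty
compliant = lambda contents: contents == 10**6   # one-box iff B is full

print(consistent_predictions(perverse))   # [] -- no prediction can be right
print(consistent_predictions(compliant))  # [0, 1000000] -- two self-fulfilling answers
```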
The problem isn't objectification of women, it's a lack of non-objectified female characters.
Men are objectified a lot in media. As a simple example, the overwhelming majority of mooks are male, and these characters exist solely to be mowed down so the audience can see how awesome the hero(ine) is (or sometimes how dangerous the villain is). They are hapless, often unthinking and with basically no backstory to speak of. Most of the time they aren't even given names. So why doesn't this common male objectification bring outrage?
I think the reason is tha...
Took the survey. I definitely did have an IQ test when I was a kid, but I don't think anyone ever told me the results and if they did I sure don't remember it.
Also, as a scientist I counted my various research techniques as new methods that help make my beliefs more accurate, which means I put something like 2/day for trying them and 1/week for them working. In hindsight I'm guessing this interpretation is not what you meant, and that science in general might count as ONE method altogether.
But there's also the observed matter-antimatter asymmetry. Observations strongly indicate that right now we have a lot more electrons than positrons. If it was just one electron going back and forth in time (and occasionally being a photon), we'd expect at most one extra electron.
Not to mention the fact that positrons = electrons going backwards in time only works if you ignore gravity.
There's also the observed matter-antimatter asymmetry. Even if you want to argue that virtual electrons aren't real and thus don't count, it still seems to be the case that there are a lot more electrons than positrons. If it was just one electron going back and forth in time, we'd expect at most one extra electron.
Not to mention the fact that positrons = electrons going backwards in time only works if you ignore gravity.
Eliezer, why no mention of the no-cloning theorem?
Also, some thoughts this has triggered:
Distinguishability can be shown to exist for some types of objects in just the same way that it can be shown to not exist for electrons. Flip two coins. If the coins are indistinguishable, then the HT state is the same as the TH state, and you only have three possible states. But if the coins are distinguishable, then HT is not TH, and there are four possible states. You can experimentally verify that the probabilities obey the latter situation, not the former. ...
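A toy version of the counting argument, just the combinatorics (nothing quantum in it):

```python
# Count two-coin states with and without the HT/TH distinction.
from itertools import product
from collections import Counter

ordered = list(product("HT", repeat=2))                 # HH, HT, TH, TT
unordered = Counter(tuple(sorted(s)) for s in ordered)  # HT and TH merge

print(len(ordered))    # 4 -> P(one head, one tail) = 2/4 = 1/2 (what real coins obey)
print(len(unordered))  # 3 -> equal weights would give 1/3 (what real coins don't obey)
```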
Okay, we need to be really careful about this.
If you sign up for cryonics at time T1, then the not-signed-up branch has lower amplitude after T1 than it had before T1. But this is very different from saying that the not-signed-up branch has lower amplitude after T1 than it would have had after T1 if you had not signed up for cryonics at T1. In fact, the latter statement is necessarily false if physics really is timeless.
I think this latter point is what the other posters are driving at. It is true that if there is a branch at T1 where some yous go down ...
Edit: Looks like I was assuming probability distributions for which Lim (Y -> infinity) of Y*P(Y) is well defined. This turns out to hold for monotonic distributions and some similar classes (thanks shinoteki).
I think it's still the case that a probability distribution that would lead to TraderJoe's claim of P(Y)*Y tending to infinity as Y grows would be un-normalizable. You can of course have a distribution for which this limit is undefined, but that's a different story.
Counterexample: P(3^^^...3) (with n "^"s) = 1/2^n, and P(anything else) = 0. This is normalized because the sum of a geometric series with decreasing terms is finite. You might have been thinking of the fact that if a probability distribution on the integers is monotone decreasing (i.e. if P(n) > P(m) then n < m), then P(n) must decrease faster than 1/n. However, a complexity-based distribution will not be monotone, because some big numbers are simple while most of them are complex.
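A quick numeric check of that counterexample (my verification, not part of the parent comment):

```python
# P(3 with n "^"s) = 1/2^n for n >= 1, P(anything else) = 0.
from fractions import Fraction

total = sum(Fraction(1, 2**n) for n in range(1, 60))
print(float(total))  # ~1.0: the geometric series sums to exactly 1 in the limit

# And Y * P(Y) is unbounded along the support: already at n = 2 we have
# Y = 3^^3 = 3**27 = 7625597484987 with P(Y) = 1/4.
print(3**27 / 2**2)  # ~1.9e12, and the ratio explodes as n grows
```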
You can have a credence of 1/2 for heads in the absence of which-day knowledge, but for consistency you will also need P(Heads | Monday) = 2/3 and P(Monday) = 3/4. Neither of these matches frequentist notions unless you count each awakening after a Tails result as half a result (in which case they both match).
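A consistency check of those numbers, treating each awakening as an outcome (the joint assignment below is what P(Heads) = 1/2 forces, with the Tails half split across its two awakenings):

```python
# Halfer joint distribution over (coin, day) awakenings.
from fractions import Fraction

P = {
    ("Heads", "Monday"): Fraction(1, 2),
    ("Tails", "Monday"): Fraction(1, 4),
    ("Tails", "Tuesday"): Fraction(1, 4),
}

p_monday = sum(p for (_, day), p in P.items() if day == "Monday")
p_heads_given_monday = P[("Heads", "Monday")] / p_monday

print(p_monday)              # 3/4
print(p_heads_given_monday)  # 2/3
```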
I'm not familiar with Kolmogorov complexity, but isn't the apparent simplicity of 3^^^3 just an artifact of what notation we happen to have invented? I mean, "^^^" is not really a basic operation in arithmetic. We have a nice compact way of describing what steps are needed to get from a number we intuitively grok, 3, to 3^^^3, but I'm not sure it's safe to say that makes it simple in any significant way. For one thing, what would make 3 a simple number in the first place?
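For what it's worth, the standard answer is that simplicity here means shortest-program length rather than friendliness of our notation, and changing the base language shifts every program length by at most a fixed compiler constant (the same invariance point discussed a few comments up). A sketch (the function is a throwaway illustration, not a formal definition of K):

```python
# 3^^^3 is "simple" in the Kolmogorov sense: a very short program specifies
# it exactly, so its description length is tiny compared to its magnitude.
def up(a, n, b):
    """Knuth's up-arrow: a followed by n '^'s followed by b."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up(a, n - 1, up(a, n, b - 1))

# up(3, 3, 3) is 3^^^3 -- don't actually evaluate it. The point is that these
# few lines pin the number down, while almost every number of that size has
# no description much shorter than its own digit string.
```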
Just thought of something:
How sure are we that P(there are N people) is not at least as small as 1/N for sufficiently large N, even without a leverage penalty? The OP seems to be arguing that the complexity penalty on the prior is insufficient to generate this low probability, since it doesn't take much additional complexity to generate scenarios with arbitrarily more people. Yet it seems to me that after some sufficiently large number, P(there are N people) must drop faster than 1/N. This is because our prior must be normalized. That is:
Sum(all non-ne...
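The precise constraint I'm leaning on, for the record (note it only forces the bound along a subsequence, which is consistent with the non-monotone counterexample upthread):

```latex
\sum_{N=0}^{\infty} P(\text{there are } N \text{ people}) = 1
\quad\Longrightarrow\quad
\liminf_{N \to \infty} \, N \cdot P(N) = 0
```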
Just gonna jot down some thoughts here. First, a layout of the problem.
Uh... what?
Sqrt(a few billion + n) is approximately Sqrt(a few billion). Increasing functions with diminishing returns don't approach linearity at large values; their growth becomes really small (way sub-linear, nearly constant) at high values.
This may be an accurate description of what's going on (if, say, our value for re-watching movies falls off slower than our value for saving multiple lives), but it does not at all strike me as an argument for treating lives as linear. In fact, it strikes me as an argument for treating life-saving as more sub-linear than movie-watching.
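Quick numbers for the first point, with a made-up base of three billion:

```python
import math

# Marginal value of n extra units on top of "a few billion", under sqrt.
N = 3_000_000_000
for n in (1, 1_000, 1_000_000):
    print(n, math.sqrt(N + n) - math.sqrt(N))
# Each extra unit adds about 1/(2*sqrt(N)) ~= 9e-6: nearly flat, nowhere near linear.
```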
Food for thought:
This whole post seems to assign moral values to actions, rather than states. If it is morally negative to end a simulated person's existence, does this mean something different than saying that the universe without that simulated person has a lower moral value than the universe with that person's existence? If not, doesn't that give us a moral obligation to create and maintain all the simulations we can, rather than avoiding their creation? The more I think about this post, the more it seems that the optimum response is to simulate as
This is silly. To say that there is some probability in the universe is not to say that everything has randomness to it. People arguing that there is intrinsic probability in physics don't argue that this intrinsic probability finds its way into the trillionth digit of pi.
Many Physicists: If I fire a single electron at two slits, with a detector placed immediately after one of the slits, then I detect the electron half the time. Furthermore, leading physics indicates that no amount of information will ever allow me to accurately predict which trials wi...
Constraint: Within the next two seconds, you must perform only the tasks listed, which you must perform in the specified order.
Task 1. Exchange your definition of decrease with your definition of increase.
Task 2. --insert wish here--
Task 3. Self-terminate.
This is of course assuming that I don't particularly care for the genie's life.
Uh... what?
c is the speed of light. It's an observable. If I change c, I've made an observable change in the universe --> universe no longer looks the same?
Or are you saying that we'll change t and c both, but the measured speed of light will become some function of c and t that works out to remain the same? As in, c is no longer the measured speed of light (in a vacuum)? Then can't I just identify the difference between this universe and the t -> 2t universe by seeing whether or not c is the speed of light?
I also think you're stuck on restrictin...
A couple of things:
"Does it make sense to say that the global rate of motion could slow down, or speed up, over the whole universe at once—so that all the particles arrive at the same final configuration, in twice as much time, or half as much time? You couldn't measure it with any clock, because the ticking of the clock would slow down too."
This one doesn't make as much sense to me. T...
The even/odd attribute of a collection of marbles is not an emergent phenomenon. This is because as I gradually (one by one) remove marbles from the collection, the collection has a meaningful even/odd attribute all the way down, no matter how few marbles remain. If an attribute remains meaningful at all scales, then that attribute is not emergent.
If the accuracy of fluid mechanics was nearly 100% for 500+ water molecules and then suddenly dropped to something like 10% at 499 water molecules, then I would not count fluid mechanics as an emergent phenomenon. I guess I would word this as "no jump discontinuities in the accuracy vs scale graph."
I see three distinct issues with the argument you present.
First is line 1 of your reasoning. A finite universe does not entail a finite configuration space. I think the cleanest way to see this is through superposition. If |A> and |B> are two orthogonal states in the configuration space, then so are all states of the form a|A> + b|B>, where a and b are complex numbers with |a|^2 + |b|^2 = 1. There are infinitely many such numbers we can use, so even from just two orthogonal states we can build an infinite configuration space. That said, there's...
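A toy numeric version of the superposition point, sampling a few states from that continuum:

```python
# Normalized superpositions a|A> + b|B>, parameterized by an angle theta.
import numpy as np

A = np.array([1.0, 0.0])
B = np.array([0.0, 1.0])

for theta in np.linspace(0, np.pi / 2, 5):
    a, b = np.cos(theta), np.sin(theta)
    state = a * A + b * B
    print(theta, np.linalg.norm(state))  # norm 1 at every theta: a continuum of states
```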