Sometimes the electron meets itself coming the other way, and together they turn into a photon. And sometimes that photon can't decide whether to go forwards in time or backwards in time, so it does both -- but it doesn't always do so as an electron/positron. So really there's only one particle, period. ;-)
There is definitely more than one electron, if only because when you create an electron-positron pair, you can then annihilate those two with each other, and that doesn't form a loop with the others.
If the only thing keeping them the same were identity, then these virtual electrons could be different. And don't bring up 'borrowed' energy or mass -- going off-shell is just a dynamical feature like position or velocity.
There's also the observed matter-antimatter asymmetry. Even if you want to argue that virtual electrons aren't real and thus don't count, it still seems to be the case that there are a lot more electrons than positrons. If it was just one electron going back and forth in time, we'd expect at most one extra electron.
Not to mention the fact that positrons = electrons going backwards in time only works if you ignore gravity.
Eliezer, why no mention of the no-cloning theorem?
Also, some thoughts this has triggered:
Distinguishability can be shown to exist for some types of objects in just the same way that it can be shown not to exist for electrons. Flip two coins. If the coins are indistinguishable, then the HT state is the same as the TH state, and there are only three possible states. But if the coins are distinguishable, then HT is not TH, and there are four possible states. You can experimentally verify that the probabilities obey the latter situation, not the former. And of course, you can experimentally verify that electron pairs obey the former situation, not the latter. This is probably just because the coins are qualitatively distinct, while the electrons are not.
But it seems that if you did make a quantum copy (no-cloning theorem be damned!) then after a bit of interaction with the different environments, the two would become distinguishable (on the basis of developing different qualitative identities) and start behaving more like the coins than the electrons. In fact, if you're actually using the lightspeed limit then the reconstructed you would be several years younger, and immediately distinguishable from what the scanned you has since evolved into. At the time of reconstruction, the two are already acting like coins and not electrons. Does this break the argument? I'm not really sure, because the reconstructed you at the time of reconstruction would still be indistinguishable from the you at the time of scanning, if you could somehow get them both around at the same time.
Bonus! The reconstructed you could be seen to have a very qualitatively different time-evolution. The scanned you evolves throughout its entire history via a Hamiltonian which itself changes continuously as scanned-you moves continuously through your environment. Reconstructed you, however, has a clear discontinuity in its Hamiltonian at the time of reconstruction (the state is effectively moved instantly from one environment into a completely different one). The state of the reconstructed you would still evolve continuously; it would just have a discontinuous derivative. So I'm not really sure whether reconstructed you would fail to pass the bar of "continuity of identity" that a lot of people talk about when dealing with the concept of self. My gut says no, but I'm not sure why.
I am just choosing a branch; I am not destroying the branch in which I choose not to sign up.
Actually... you are. The physical implementation of making the choice involves shifting weight from not-signed-up branches to signed-up branches (note, the 'not-signed-up-yet' branch is defined in a way that lets it leak amplitude). That implementation is contained within you, and it involves processes we describe as applying operators on that branch which reduce its amplitude. This totally counts as destroying the branch.
Okay, we need to be really careful about this.
If you sign up for cryonics at time T1, then the not-signed-up branch has lower amplitude after T1 than it had before T1. But this is very different from saying that the not-signed-up branch has lower amplitude after T1 than it would have had after T1 if you had not signed up for cryonics at T1. In fact, the latter statement is necessarily false if physics really is timeless.
I think this latter point is what the other posters are driving at. It is true that if there is a branch at T1 where some yous go down a path where they sign up and others don't, then the amplitude for not-signed-up is lower after T1. But this happens even if this particular you doesn't go down the signed-up branch. What matters is that the branch point occurs, not which one any specific you takes.
In other words, amplitude is always leaking out of the not-signed-up branch, even if some particular you never leaves that branch.
I've been arguing about this with a friend recently [well, a version of this - I don't have any problem with arbitrarily large numbers of people being created and killed, unless the manner of their death is unpleasant enough that the negative value I assign to it exceeds the positive value of their lives].
He says that he can believe the person we are talking to has Agent Smith powers, but thinks that the more Agent Smith promises, the less likely the claim is to be true, and that this probability falls off faster than the promise grows -- so that the probability that Agent Smith has the power to create and kill [in an unpleasant manner] Y people, multiplied by Y, tends to zero as Y tends to infinity, and hence the net expectation tends to zero. I disagree: I believe that if you assign probability X to the claim that the person you are talking to is genuinely from outside the Matrix [and that you're in the Matrix], then the probability that Agent Smith has the power to create and kill [in an unpleasant manner] Y people, multiplied by Y, tends to infinity as Y tends to infinity.
Now, I think we can break this down further to find the root cause of our disagreement [this doesn't feel like a fundamental belief]: does anyone have any suggestions for how to go about doing this? We began to argue about entropy and the chance for Agent Smith to have found a way [from outside the Matrix = all our physics doesn't apply to him] to reverse it, but I think we went downhill from there.
Edit: Looks like I was assuming probability distributions for which Lim (Y -> infinity) of Y*P(Y) is well defined. This turns out to hold only for monotonic tails or some similarly well-behaved class (thanks shinoteki).
I think it's still the case that a probability distribution that would lead to TraderJoe's claim of P(Y)*Y tending to infinity as Y grows would be un-normalizable. You can of course have a distribution for which this limit is undefined, but that's a different story.
Sleeping Beauty does not sleep well. She has three dreams before awakening. The Ghost of Mathematicians Past warns her that there are two models of probability, and that adherents to each have little that is good to say about adherents to the other. The Ghost of Mathematicians Present shows her volumes of papers and articles where both 1/2 and 1/3 are "proven" to be the correct answer based on intuitive arguments. The Ghost of Mathematicians Future doesn't speak, but shows her how reliance on intuition alone leads to misery. Only strict adherence to theory can provide an answer.
Illuminated by these spirits, once she is fully awake she reasons: "I have no idea whether today is Monday or Tuesday; but it seems that if I did know, I would have no problem answering the question. For example, if I knew it was Monday, my credence that the coin landed heads could only be 1/2. On the other hand, if I knew it was Tuesday, my credence would have to be 0. But on the gripping hand, these two incontrovertible truths can help me answer as my night visitors suggested. There is a theorem in probability, called the Theorem of Total Probability, which says that the probability of an event A equals the sum of the probabilities of the events (A intersect B(i)), where the B(i) partition the entire event space.
"Today has to be either Monday or Tuesday, and it can't be both, so these two days represent such a partition. Since I want to avoid making any assumptions as long as I can, let me say that the probability that today is Monday is X, and the probability that it is Tuesday is (1-X). Now I can use this Theorem to state, unequivocally, that my credence that the coin landed heads is P(heads)=(1/2)X+0(1-X)=X/2.
"But I know that it is possible that today is Tuesday; even a Bayesian has to admit that X<1. So I know that 1/2 cannot be correct; the answer has to be less than that. A Frequentist would say that X=2/3 because, if this experiment were repeated many times, two out of every three interviews would take place on Monday. And while a Bayesian could, in theory, choose any value that is less than 1, it is a violation of Occam's Razor to assume there is a factor present that would make X different than 2/3. So, it seems my answer must be 1/3.
You can have a credence of 1/2 for heads in the absence of which-day knowledge, but for consistency you will also need P(Heads | Monday) = 2/3 and P(Monday) = 3/4. Neither of these matches frequentist notions unless you count each awakening after a Tails result as half a result (in which case they both match frequentist notions).
With individual differences, people are being judged as individuals, and on the basis of their individual capabilities.
With racial differences, people are being judged as members of a race, and not on the basis of their individual capabilities.
At least, that's the fear.
As a rough estimate of the complexity of a number, you can take the number of lines of the shortest program that computes it from basic operations. More formally, replace lines of a program with states of a Turing machine.
But what numbers are you allowed to start with in the computation? Why can't I say that, for example, 12,345,346,437,682,315,436 is one of the numbers I can compute from (as a starting point), and thus that it has extremely small complexity?
The problem is that the Solomonoff prior picks out 3^^^3 as much more likely than most of the numbers of the same magnitude because it has much lower Kolmogorov complexity.
I'm not familiar with Kolmogorov complexity, but isn't the apparent simplicity of 3^^^3 just an artifact of the notation we happen to have invented? I mean, "^^^" is not really a basic operation in arithmetic. We have a nice compact way of describing the steps needed to get from a number we intuitively grok, 3, to 3^^^3, but I'm not sure it's safe to say that makes it simple in any significant way. For one thing, what would make 3 a simple number in the first place?
Took the survey. I definitely did have an IQ test when I was a kid, but I don't think anyone ever told me the results and if they did I sure don't remember it.
Also, as a scientist I counted my various research techniques as new methods that help make my beliefs more accurate, which means I put something like 2/day for trying them and 1/week for them working. In hindsight I'm guessing this interpretation is not what you meant, and that science in general might count as ONE method altogether.