Open Thread, Aug. 15 - Aug. 21, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Can someone sketch me the Many-Worlds version of what happens in the delayed choice quantum eraser experiment? Does a last-minute choice to preserve or erase the which-path information affect which "worlds" decohere "away from" the experimenter? If so, how does that go, in broad outline? If not, what?
The key to understanding this kind of experiment under the MWI is to remember that if you erase (or never register) information specific to one of the branches of the superposition, then you can effectively merge two branches just as easily as you split them.
The delayed-choice quantum eraser is too complex for me to analyze now (it has a lot of branches), but the point is that when you make two branches interact, if those branches have not acquired information about their path, then they can be constructively re-merged.
Even though it's been quite a few years since I attended any quantum mechanics courses, I did give a talk as an undergraduate on this very experiment, so I'm hoping that what I write below will not be complete rubbish. I'll quickly go through the double slit experiment, and then try to explain what's happening in the delayed choice quantum eraser and why it happens. Disclaimer: I know (or knew) the maths, but our professors did not go to great lengths explaining what 'really' happens, let alone what happens according to the MWI, so my explanation comes from my understanding of the maths and my admittedly shakier understanding of the MWI. So take the following with a grain of salt, and I would welcome comments and corrections from better informed people! (Also, the names for the different detectors in the delayed choice explanation are taken from the Wikipedia article.)
In the normal double slit experiment, letting through one photon at a time, the slit through which the photon went cannot be determined, as the world-state when the photon has landed could have come from either trajectory (so it's still within the same Everett branch), and so both paths of the photon were able to interfere, affecting where it landed. As more photons are sent through, we see evidence of this through the interference pattern created. However, if we measure which slit the photon goes through, the world states when the photon lands are different for each slit the photon went through (in one branch, a measurement exists which says it went through slit A, and in the other, through slit B). Because the end world states are different, the two branch-versions of the photon did not interfere with each other. I think of it like this: starting at a world state at point A, and ending at a world state at point B, if multiple paths of a photon could have led from A to B, then the different paths could interfere with each other. In the case where the slit the photon went through is known, the different paths could not both lead to the same world state (B), and so existed in separate Everett branches, unable to interfere with each other.
Now, with the delayed choice: the key is to resist the temptation to take the state "signal photon has landed, but idler photon has yet to land" as point B in my above analogy. If you did, you'd see that the world state can be reached by the photon going through either slit, and so interference inside this single branch must have occurred. But time doesn't work that way, it turns out: the true final world states are those that take into account where the idler photon went. And so we see that a world state where the idler photon landed in D1 or D2 could have been reached with the photon going through either slit, and so both on D0 (for those photons) and on D1/D2, we end up seeing interference patterns, as we're still within a single branch, so to speak (when it comes to this limited interaction, that is). Whereas in the case where the idler photon reaches D3, that world state could only have been reached by the photon going through one particular slit, and so the trajectory of the photon did not interfere with any other trajectory (since the other trajectory led to a world state where the idler photon was detected at D4, hence a separate branch).
So going back to my point A/B analogy: imagine three world states A, B and C as points on a page, where STRAIGHT lines represent different hypothetical paths a photon could take. You can see that if two paths lead from point A to point B, the lines would be on top of each other, meaning a single branch, and the paths would interfere. But if one of the paths led to point B and the other to point C, they would not be on top of each other; they go into different branches, and so the paths would not interfere.
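To make this concrete, here is a toy numerical sketch (my own illustration, not from any course; the equal 50/50 path weights and the phase sweep are assumptions). When the two paths end in the same world state, their complex amplitudes add before squaring and the cross term survives; when a which-path record makes the end states orthogonal, the probabilities add instead and the phase dependence vanishes:

```python
# Toy model: two paths contribute equal complex amplitudes with a relative
# phase phi. Whether they interfere depends on whether the final world
# states are identical (amplitudes add) or orthogonal (probabilities add).
import numpy as np

amp = 1 / np.sqrt(2)  # each path taken with equal weight

for phi in np.linspace(0, 2 * np.pi, 9):
    a1 = amp                      # amplitude via slit A
    a2 = amp * np.exp(1j * phi)   # amplitude via slit B, with relative phase

    # Which-path info erased / never recorded: same final world state, so
    # amplitudes add coherently -> relative intensity oscillates with phi.
    i_erased = abs(a1 + a2) ** 2

    # Which-path info recorded: orthogonal final world states (separate
    # branches), so probabilities add -> no dependence on phi, no fringes.
    i_recorded = abs(a1) ** 2 + abs(a2) ** 2

    print(f"phi={phi:4.2f}  erased={i_erased:.3f}  recorded={i_recorded:.3f}")
```

Averaged over phi both cases give the same total, which is why you only see the fringes in the erased case after sorting the D0 hits by where the idler went.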
Belated thanks to you and MrMind, these answers were very helpful.
Just a note, posting here because some people might have participated in something similar (if so, what were your impressions?):
...Unfortunately, at the all-Soviet [mathematics] Olympiads they failed to implement another idea, which A. N. Kolmogorov put forward more than once: to begin one of the days of the contest with a lecture on a previously unfamiliar, to the participants, topic and then offer several problems which would build on the ones considered during the lecture. (Such an experiment was successfully carried out only once […] in 1985 […] the lecture was on geometric probabilities.)
N. Vasilyev, A. A. Yegorov. Problems of All-Soviet Mathematical Olympiads. Moscow, 1988. p. 14 of 286.
Applying probabilistic thinking to fears about terrorism in this piece for the 16th largest newspaper in the US, reaching over 320K with its printed version and over 5 million hits on its website per month. The title was chosen by the newspaper, and somewhat occludes the points. The article is written from a liberal perspective to play into the newspaper's general bent, and its main point was to convey the benefits of applying probabilistic thinking to evaluating political reality.
Edit: Updated somewhat based on conversation with James Miller here.
Is anyone in a position to offer some criticism (or endorsement) of the work produced at Gerwin Schalk's lab?
http://www.schalklab.org/
I attended a talk given by Dr. Schalk in April 2015, where he described a new method of imaging the brain, which appeared to be a better-resolution fMRI (the image in the talk was a more precise image of motor control of the arm, showing the path of neural activity over time). I was reminded of it because Dr. Schalk spent quite a bit of time emphasizing getting the probability calculations right and optimizing the code, which seemed relevant when the recent criticism of fMRI software was published.
If nomads united into large hordes to go to war, shouldn't the change in the number of men living together have had some noticeable psychological effect on the warriors? I mean, Wikipedia says that "Dunbar's number is a suggested cognitive limit to the number of people with whom one can maintain stable social relationships", and surely they had to co-work with lots more people than during peacetime?
You don't "maintain stable social relationships" with the whole horde, you maintain them with your small unit. And the military hierarchy exists so that people have to coordinate only a limited number of entities: if you're a grunt, you need to coordinate only with people immediately around you; if you're a mid-level officer you coordinate a limited number of platoons, if you're a general you coordinate a limited number of regiments.
A soldier does not meaningfully "co-work" with the whole army.
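To put toy numbers on that (the span of control and depth are my assumptions, not historical data about actual hordes): with a modest span of control, total force size grows geometrically with hierarchy depth while any individual's stable contacts stay far below Dunbar's ~150.

```python
# Toy sketch: a hierarchy with branching factor `span` and `depth` levels.
# Everyone coordinates only with their own superior and direct subordinates,
# so personal contact counts stay roughly constant as the force grows.

span = 10   # assumed number of direct subordinates per leader
depth = 4   # assumed levels: e.g. soldier -> squad -> platoon -> regiment

total_force = span ** depth   # 10,000 men at the bottom level
contacts = span + 1           # direct subordinates plus one superior

print(f"force size ~{total_force}, personal contacts ~{contacts}")
```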
Not necessarily; after all, military hierarchies are a way to cope with the complexity of managing thousands of people at a time.
Yes, but how? Are there different DN for peace and war?
No, I don't think so, but note that first, Dunbar's number is not an exact quantity; and second, if you only need to relate to a handful of comrades in your platoon and one higher-ranking officer, then you can effectively restrict the number of people with whom you have to interact.
I am asking mostly because I have trouble imagining strict segregation in, say, Mongolian hordes; and intuitively, an advance (where you have an army of able-bodied men) should be different from a retreat (where you also have women, children and infirm men).
Elon Musk almost terminated our simulation.
A simulation works as a simulation only if everybody inside is convinced that they are living real life. Bostrom proved that we most likely live in a simulation, but not many people knew about it. Elon Musk tweeted that the odds are 1,000,000 to 1 that we live in a simulation. Now everybody knows. I think there was a 1 per cent chance that our simulation would be terminated after that. It has not happened this time, but there may be some other threshold after which it will be terminated, like finding more proofs that we are in a simulation, or the creation of an AI.
Is this a widely held belief?
On the other hand, the more "actions that would get the simulation terminated" we take and survive, the higher the chance that we are actually not living in a simulation.
Unfortunately anthropic bias prevents us from meaningful probability calculations in this case.
It seems possible to me that after passing some threshold of metaphysics insight, beings in basement reality would come to believe that basement reality simulations have high measure.
Past a certain point, maybe original basement reality beings actually believe they are simulated. Then accurately simulated basement reality beings would mean simulating beings who think (correctly) that they are in a simulation.
I don't know how to balance such possibilities to figure out what's likely.
One idea I had for how to measure the measure of simulations is that it is proportional to the energy of the computation. That is because a large computer could be "sliced" into two parallel ones if we could make slices in four dimensions. We could do such slices until we reach the Planck level. So any simulation is equivalent to a finite number of Planck-level simulations.
Base-reality computers will use more energy for their calculations, and any sub-level will use only part of this energy, so lower-level simulations have smaller measure.
But this is just a preliminary idea, as we need to reconcile it with the probability of branches in MWI and also find ways to test it.
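A minimal numerical sketch of that idea (the fraction f is an arbitrary assumption, and this ignores the MWI-branch issue just mentioned): if each simulation level can spend only a fraction of its host's computing energy, measure falls off geometrically with simulation depth.

```python
# Toy model: the measure of a simulation is taken to be proportional to the
# energy spent computing it; each level down gets a fraction f of its
# host's energy, so measure decays geometrically with depth.

f = 0.1                                 # assumed energy fraction per level
energies = [f ** n for n in range(5)]   # level 0 = base reality
total = sum(energies)

for level, e in enumerate(energies):
    print(f"level {level}: relative energy {e:.5f}, measure {e / total:.5f}")
```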
Interesting idea. So I guess that approach is focused on measure across universes with physics similar to ours? I wonder what fraction of simulations have physics similar to one level up. Presumably ancestor simulations would.
Actually, Bostrom did not argue that we are most likely living in a simulation. Instead, he argued that at least one of three propositions must be true, and one of those propositions is that "almost all people with our sorts of experiences live in computer simulations". But, since it is possible that one of the other two propositions could be true, it does not necessarily follow that we most likely live in a simulation.
In fact, Bostrom has stated that he believes that we are probably not simulated (see the second paragraph of this paper).
Of course, per your comments, it is possible that Bostrom only said that we are probably not simulated so as not to terminate the simulation :).
In fact, the first two propositions in Bostrom's article are very improbable, especially if we include all other civilizations in the consideration.
1) "All possible civilizations will never create simulations" - this seems very implausible; we are already good at creating movies, dreams and games.
2) "All possible civilizations will go extinct before they create simulations" - this also seems implausible.
We only need to be concerned about actual civilizations, not all possible civilizations. We don't know how many actual civilizations there are, so there could be very few (we know of only one). We also don't know how difficult creating a sufficiently realistic simulation will be - obviously the more difficult it is to achieve such a simulation the more likely it is that civilizations will tend to go extinct before they create simulations. Finally, Bostrom's propositions use "almost all" rather than "all", e.g. "almost all civilizations at our current level of development go extinct before reaching technological maturity".
In light of these considerations, Bostrom's first two propositions do not seem implausible to me.
It all depends on our understanding of actuality. If modal realism is true, there is no difference between actual and possible. If our universe is really very large because of MWI, inflation and other universes, there should be many civilizations. But this raises some difficult philosophical questions, so it is better to use a simple model with a cut-off. (I think that modal realism is true and everything possible actually exists somewhere in the universe, provided actuality does not depend on human consciousness, but that is a long story for another post, so I will not try to prove it here.)
Imagine that in our galaxy (or any other sufficiently large part of the universe) there exist 1000 civilizations at our tech level. 990 of them go extinct from x-risks, 9 decide not to create simulations, and 1 decides to model the civilizations of the galaxy, running 100,000,000 simulated civilizations in total, in order to solve the Fermi paradox numerically.
That is why I didn't use the word "almost": because in this example almost all go extinct, and almost all survivors will not make simulations, but that doesn't prevent one civilization from creating so many simulations that it outweighs everything else.
The only condition under which we are not in a simulation is that ALL possible civilizations refrain from making them.
In this case we have 100,001,000 civilizations in total.
In this example we see that even if most civilizations go extinct, and most surviving civilizations decide not to run simulations, one will do it based on its practical needs, and the proportion of real to simulated civilizations will be 1 to 100,000.
It means that the odds are 100,000 to 1 that we are in a simulated civilization.
This example also predicts the future of our simulation: it will simulate an extinction event with 99 per cent probability, it will simulate a simulation-less civilization with 0.9 per cent probability, and it will result in a two-level "matryoshka simulation" with 0.1 per cent probability.
It also demonstrates that Bostrom's propositions are not exclusive alternatives: all 3 conditions are true in this case (and Bostrom said only that "at least one of the three conditions is true"). Almost all civilizations go extinct, almost all survivors do not run simulations, and we are almost certainly in a simulation.
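For clarity, the arithmetic of the example above as a small script (numbers exactly as given):

```python
# 1000 real civilizations: 990 go extinct, 9 abstain, 1 runs simulations.
real = 1000
extinct, abstain, simulators = 990, 9, 1
simulated = 100_000_000   # simulated civilizations run by the one simulator

print(real + simulated)   # 100001000 civilizations in total
print(simulated // real)  # 100000 -> odds of 100,000 to 1 we are simulated

# The simulation's predicted future mirrors the base-level statistics:
print(extinct / real)     # 0.99  -> 99% chance our simulation ends in extinction
print(abstain / real)     # 0.009 -> 0.9% chance of a simulation-less civilization
print(simulators / real)  # 0.001 -> 0.1% chance of a two-level "matryoshka"
```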
With those assumptions (especially modal realism), your original statement that our simulation was not terminated this time doesn't quite make sense; there could be a bajillion simulations identical to this one, and even if most of them were shut down, we wouldn't notice anything.
In fact, I'm not sure what saying "we are in a simulation" or "we are not in a simulation" exactly means.
What you say is like quantum immortality for a many-simulations world. Let's name it "simulation immortality", as there is nothing quantum about it. I think it may be true. But it requires two conditions: many simulations, and a solution to the identity problem (is a copy of me in a remote part of the universe me?). I wrote about it here: http://lesswrong.com/lw/n7u/the_map_of_quantum_big_world_immortality/
Simulation immortality precisely neutralises the risk of the simulation being shut down. But if we accept the logic of quantum immortality, it works even more broadly, preventing any other x-risk, because in any case one person (the observer in his timeline) will survive.
In the case of a simulation shutdown, it works nicely if the shutdown is instantaneous and uniform. But if the servers are shut down one by one, we will see stars disappear, and for some period of time we will find ourselves in a strange and unpleasant world. A shutdown may take only a millisecond in the base universe, but it could take a long time in the simulated one, maybe days.
A slow shutdown is an especially unpleasant scenario, for two reasons connected with "simulation immortality":
First, because of simulation immortality its chances rise dramatically: if, for example, 1000 simulations are shutting down and one of them is shutting down slowly, I (my copy) will find myself only in the one with the slow shutdown.
Second, if I find myself in a slow shutdown, there are infinitely many identical simulations which are also experiencing a slow shutdown. It means that my slow shutdown will never end from my observer point of view. Most probably, after all this adventure, I will find myself in a simulation whose shutdown was stopped or reversed, or which got stuck somewhere in the middle.
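A toy observer-moment count for the first point (the moment counts are arbitrary assumptions of mine): an instant shutdown is never experienced, so every observer-moment that occurs during a shutdown belongs to the slow one.

```python
# 1000 copies being terminated; 999 instantly, 1 slowly. Instant shutdowns
# produce no experienced moments, so conditional on experiencing a shutdown
# at all, the observer is certain to be in the slow one.

copies = 1000
slow_copies = 1
moments_slow = 10_000   # assumed subjective moments during a slow shutdown
moments_fast = 0        # an instantaneous shutdown is never experienced

experienced_slow = slow_copies * moments_slow
experienced_fast = (copies - slow_copies) * moments_fast

print(experienced_slow / (experienced_slow + experienced_fast))  # -> 1.0
```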
TL;DR: A shutdown of the simulation may be observable and may be unpleasant, and this is especially likely if there are infinitely many simulations. It will look like a very strange global catastrophe from the observer's point of view. I also wrote a lot of such things in my simulation map, but the idea of a slow shutdown only occurred to me now:
http://lesswrong.com/lw/mv0/simulations_map_what_is_the_most_probable_type_of/
I was asked about something like career advice in the field of x-risk prevention deep in an EA Forum thread, so I will repost my answer here and would welcome any comments or further suggestions for the list.
Q: "... it seems like it would be helpful to accompany some maps with a scheme for prioritizing the important areas. e.g. if people could know that safe ai engineering is a useful area for reducing gcrs.." http://effective-altruism.com/ea/10h/the_map_of_global_warming_prevention/#comments
A: So, here are some ideas for further research, that is, fields which a person could pursue if they want to make an impact on x-risks. In other words, career advice. For many of them I don't have the special background or the personal qualities needed.
1. Legal research in international law, including work with the UN and governments. Goal: prepare international law and a panel for x-risk prevention. (A legal education is needed.)
2. Convert all information about x-risks (including my maps) into a large Wikipedia-style database. A master of communication is needed to attract many contributors and balance their actions.
3. Create a computer model of all global risks, which will be able to calculate their probabilities depending on different assumptions. Evolve this model into a world model with elements of AI and connect it to monitoring and control systems.
4. Large-scale research into the safety of bio-risks, which will attract professional biologists.
5. A promoter who could attract funding for various research without oversimplifying the risks or overhyping solutions. He may also be a political activist.
I think that in AI safety we already have many people, so some work to integrate their results is needed.
6. A teacher: a professor who will be able to teach a course on x-risk research for students and prepare many new researchers. Maybe YouTube lectures.
7. An artist, who will be able to attract attention to the topic without sensationalism or bad memes.