JohnWittle comments on The Level Above Mine - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't know why Laughlin wrote what he did; you didn't link to the paper. However, he comes from a world where solid-state physics is obvious and "everyone knows" various things (the emergent properties of superfluid helium, for instance). Remember, a solid-state physicist's point of reference is quite different from a non-specialist's, so there is a huge inferential distance. Also remember that in physics "emergent" is a technical, defined concept.
Your explanation of superfluid helium isn't coherent, and I had a book-length post typed up when a simpler argument presented itself. Water with bacteria and liquid helium have the same Hamiltonian AND the same constituent particles. If I give you a box and say "in this box there are 10^30 protons, 10^30 neutrons, and 10^30 electrons," you do not have enough information to tell me how the system behaves; yet from a purely reductionist standpoint, you should. If this doesn't sway you, let's agree to disagree, because I think spontaneous symmetry breaking should be enough to make my point, and it's easier to explain.
I don't think you understand what spontaneous symmetry breaking is; I have very little idea what you are talking about. Let's ignore quantum mechanics for the time being, because we can describe what's happening on an entirely classical level. Spontaneous symmetry breaking arises when the Hamiltonian has a symmetry that the aggregate ground state does not. That's the whole definition, and BY DEFINITION it depends on the details of the aggregate ground state and the organization of the particles.
And finally, you can rigorously prove via renormalization group methods that in many systems the high-energy degrees of freedom can be averaged out entirely, with no effect on the form of the low-energy theory. In such systems (most theories), the details of the microphysics literally do not matter for describing the low-energy structures. Computational physicists use this to their advantage all the time: if they want to look at meso- or macro-scale physics, they assume very simple micro-models that are easy to simulate, instead of realistic ones, and are fully confident they will get the same meso- and macro-scale results.
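The "micro-details wash out" claim has a familiar toy analogue in the central limit theorem: aggregate enough microscopic degrees of freedom and the macroscopic distribution forgets what the micro-model was. A minimal Python sketch (the two micro-distributions here are my own illustrative choices, not anything from the thread):

```python
import random
import statistics

random.seed(0)

def macro_sample(micro, n_micro=200, n_samples=2000):
    """Aggregate n_micro microscopic variables into one macro variable,
    scaled by sqrt(n_micro) so the variance stays O(1)."""
    return [sum(micro() for _ in range(n_micro)) / n_micro ** 0.5
            for _ in range(n_samples)]

# Microphysics A: uniform noise on [-sqrt(3), sqrt(3)] (mean 0, variance 1).
uniform_micro = lambda: random.uniform(-3 ** 0.5, 3 ** 0.5)
# Microphysics B: fair coin flips of +/-1 (mean 0, variance 1).
coin_micro = lambda: random.choice([-1.0, 1.0])

a = macro_sample(uniform_micro)
b = macro_sample(coin_micro)

# Despite completely different micro-models, both macroscopic
# distributions come out approximately standard normal.
print(round(statistics.mean(a), 2), round(statistics.stdev(a), 2))
print(round(statistics.mean(b), 2), round(statistics.stdev(b), 2))
```

This is only an analogy for renormalization, not the real machinery, but it shows the sense in which two incompatible micro-models can be "fully confident" of agreeing at the aggregate level.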
I'll admit that I am not a PhD particle physicist, but what you describe as reductionism is not what I believe. If we ignore quantum physics and describe what's happening on an entirely classical level, then we can reduce the behavior of a physical system down to its most fundamental particles and the laws which govern the interactions between those particles. You can predict how a system will behave by knowing the position and the velocity of every particle in the system; you do not have to keep track of the system's organization as a separate property, because the organization can be deduced from those other two properties.
If reductionism, to you, means that by simply knowing the number of electrons, protons, and neutrons which exist in the universe, you should be able to know how the entire universe behaves, then I agree: reductionism is false.
With that in mind, can you give an example of top-down causality actually occurring in the universe? A situation where the behavior of low-level particles interacting cannot predict the behavior of systems entirely composed of those low-level particles, but instead where the high-level organization causes the interaction between the low-level particles to be different?
That's what I think reductionism is: you cannot have higher-level laws contradict lower-level laws; that when you run the experiment to see which set of laws wins out, the lower-level laws will be correct every single time. Is this something you disagree with?
I probably don't. I was going based off of an AP Physics course in high school. My understanding is basically this: if you dropped a ball perfectly onto the top of a Mexican hat, symmetry would demand that all of the possible paths the ball could take are equally valid. But in the end the ball only takes one path, and which path it takes could not have been predicted from the base-level laws. A quick look at Wikipedia confirms that this idea at least has something to do with symmetry breaking, since one of the subsections for "Spontaneous symmetry breaking" is called "A pedagogical example: the Mexican hat potential", so I cannot be entirely off.
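For what it's worth, the Mexican-hat picture is easy to play with numerically. A small sketch, assuming a stand-in potential V(x, y) = (x² + y² − 1)² and plain gradient descent for the "ball rolling downhill" (both are my illustrative choices): the potential is rotationally symmetric, but the state the ball settles into is not, and which minimum it picks depends only on the tiny initial nudge.

```python
import math
import random

# V(x, y) = (x^2 + y^2 - 1)^2 is symmetric under rotation; its minima
# form the whole circle r = 1. Any single resting state breaks that symmetry.

def grad_V(x, y):
    common = 4.0 * (x * x + y * y - 1.0)
    return common * x, common * y

def settle(x, y, steps=2000, lr=0.01):
    """Plain gradient descent: roll downhill until the ball stops moving."""
    for _ in range(steps):
        gx, gy = grad_V(x, y)
        x, y = x - lr * gx, y - lr * gy
    return x, y

random.seed(1)
# Start at the symmetric point on top of the hat, with an infinitesimal nudge.
x0, y0 = random.gauss(0, 1e-6), random.gauss(0, 1e-6)
x, y = settle(x0, y0)

r = math.hypot(x, y)
angle = math.atan2(y, x)
print(round(r, 3), round(angle, 3))  # r is ~1; the angle was "chosen" by the nudge
```

Changing the seed changes the final angle but not the final radius: the symmetric law plus an unpredictable perturbation yields an asymmetric outcome, which is the whole point of the pedagogical example.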
In classical physics, the ball actually takes one path, and this path cannot be predicted in advance. But in QM, the ball takes all of the paths, and the different yous (different slices of the wavefunction which evolved from the specific neuron pattern you call you) collectively see every possible path the ball could have taken, and so across the wavefunction the symmetry isn't broken.
Since you're a particle physicist and you disagree with this outlook, I'm sure there's something wrong with it, though.
Is this similar to saying that when you are modeling how an airplane flies, you don't need to model each particular nitrogen atom, oxygen atom, carbon atom, etc., in the air, but can instead use a model which just talks about "air pressure", and your model will still be accurate? I agree with you; modeling every single particle when you're trying to decide how to fly your airplane is unnecessary, and you can get the job done with a less complete model. But that does not mean that a model which did track every single atom in the air would be incorrect; the extra detail just does not have a large enough effect on the airplane to be noticeable. Indeed, I can see why computational physicists would use higher-level models to their advantage, when such high-level models still get the right answer.
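The air-pressure example can be made concrete in a few lines. A toy sketch, using illustrative round numbers for nitrogen at room temperature: a "micro" pressure estimate built from sampled molecular velocities agrees with the one-line ideal-gas "macro" law.

```python
import random

# Micro vs. macro estimates of air pressure. Velocity components are drawn
# from the Maxwell-Boltzmann distribution; the numbers below are approximate
# illustrative values for nitrogen at ~1 atm and room temperature.

k_B = 1.380649e-23      # Boltzmann constant, J/K
m = 4.65e-26            # approximate mass of an N2 molecule, kg
T = 300.0               # temperature, K
n = 2.5e25              # approximate number density, molecules per m^3

random.seed(2)
sigma = (k_B * T / m) ** 0.5   # std. dev. of each velocity component

# Micro: kinetic-theory pressure p = n * m * <v^2> / 3, averaged over a sample.
N = 20000
mean_v2 = sum(sum(random.gauss(0.0, sigma) ** 2 for _ in range(3))
              for _ in range(N)) / N
p_micro = n * m * mean_v2 / 3.0

# Macro: the ideal gas law, no individual molecules in sight.
p_macro = n * k_B * T

print(f"micro: {p_micro:.0f} Pa, macro: {p_macro:.0f} Pa")
```

Both come out near atmospheric pressure, which is the airplane point in miniature: the one-number macro model and the per-molecule micro model agree on everything the wing cares about.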
But reductionism simply says that there is no situation where a high level model could get a more accurate answer than a low level model. The low level model is what is actually happening. Newtonian mechanics is good enough to shoot a piece of artillery at a bunker a mile away, but if you wanted to know with 100% accuracy where the shell was going to land, you would have to go further down than this. The more your model breaks macroscopic behavior down into the interactions between its base components, the closer your model resembles the way reality actually works.
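The artillery point can be sketched numerically: compare the closed-form vacuum range with a crude Euler integration that also tracks air drag (the drag constant and launch parameters here are made-up illustrative values, not real ballistics). The finer model changes the prediction; the coarse one was only ever an approximation of it.

```python
import math

g = 9.81          # gravitational acceleration, m/s^2
v0 = 300.0        # illustrative muzzle speed, m/s
theta = math.radians(45)
k = 0.0005        # assumed quadratic drag constant per unit mass, 1/m

# Coarse model: closed-form range in a vacuum.
range_vacuum = v0 ** 2 * math.sin(2 * theta) / g

# Finer model: Euler-integrate with quadratic drag, a = -k * |v| * v.
x, y = 0.0, 0.0
vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
dt = 0.001
while y >= 0.0:
    speed = math.hypot(vx, vy)
    vx -= k * speed * vx * dt
    vy -= (g + k * speed * vy) * dt
    x += vx * dt
    y += vy * dt

print(round(range_vacuum), round(x))  # drag shortens the predicted range
```

Neither model "wins" by contradicting the other; the drag-aware model simply includes more of what is actually happening, so its answer lands closer to where the shell does.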
Do you disagree?
So I think perhaps we are talking past each other. In particular, my definition of reductionism is that we can understand and model complex behavior by breaking a problem into its constituent components and studying them in isolation; i.e., if you understand the micro-Hamiltonian and the fundamental particles well, you understand everything. The idea of 'emergence' as physicists understand it (and as Laughlin was using it) is that there are aggregate behaviors that cannot be understood by looking at the individual constituents in isolation.
A weaker version of reductionism would say that to make absolutely accurate predictions to some arbitrary scale we MUST know the microphysics. Renormalization arguments ruin this version of reductionism.
In a sense this seems to be espousing that form of reductionism, which I strongly disagree with. There exist physical theories in which knowing the microphysics is irrelevant to making arbitrarily accurate predictions. Perhaps it would be best to agree on definitions before we make points irrelevant to each other.
Can you give me an example of one of these behaviors? Perhaps my google-fu is weak (I have tried terms like "examples of top down causality", "against reductionism", "nonreductionist explanation of"), but I can't find a single clear-cut example of a behavior which cannot be understood by looking at the individual constituents in isolation.
The aforementioned spontaneous symmetry breaking shows up in a wide variety of different systems, but phase changes in general are probably the best examples.
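As a toy version of a phase change, here is a minimal 2D Ising model with Metropolis updates (the lattice size, temperatures, and sweep count are arbitrary illustrative choices): the same microscopic update rule yields a persistently magnetized state below the critical temperature and a disordered one above it.

```python
import math
import random

# 2D Ising model on an L x L periodic lattice with Metropolis dynamics,
# in units where J = k_B = 1 (critical temperature T_c ~ 2.27).

L = 10

def magnetization(T, sweeps=400, seed=3):
    """Start fully ordered; return per-site magnetization after equilibrating."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]            # ordered initial state
    for _ in range(sweeps):
        for _ in range(L * L):                 # one Metropolis sweep
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2.0 * s[i][j] * nb            # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
    return sum(map(sum, s)) / (L * L)

m_cold = magnetization(T=1.0)   # deep in the ordered (broken-symmetry) phase
m_hot = magnetization(T=5.0)    # deep in the disordered phase
print(round(m_cold, 2), round(m_hot, 2))
```

The micro-rule is identical at both temperatures; only the aggregate behavior changes qualitatively across the transition, which is exactly the sense in which phase changes illustrate the point.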