
lessdazed comments on The Level Above Mine - Less Wrong

44 Post author: Eliezer_Yudkowsky 26 September 2008 09:18AM


Comment author: lessdazed 08 August 2011 01:27:07PM 0 points [-]

Non-math geniuses who grok and advocate for unpopular reductionism are in one sense greater than mere superheroes who know the math.

Comment author: [deleted] 08 August 2011 02:33:44PM *  -1 points [-]

In another sense, non-math geniuses advocating for reductionism are no better than the anti-vaccine lobby.

Comment author: Luke_A_Somers 13 November 2011 09:21:16AM 4 points [-]

What sense is that?

Comment author: JohnWittle 17 March 2013 04:01:49PM 1 point [-]

The sense in which they did not arrive at their beliefs by starting with sane priors that did not presuppose reductionism and then updating on evidence until they independently discovered reductionism.

I disagree with the grandparent, however: I believe that (most) non-math-geniuses advocating for reductionism are more akin to Einstein believing in General Relativity before any novel predictions had been verified: recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs.

Comment author: whowhowho 17 March 2013 04:41:29PM 0 points [-]

The "absurdity" of non-reductionism seems to have evaded Robert Laughlin, Jaorn Lanier and a bunch of other smart people.

Comment author: JohnWittle 18 March 2013 09:00:45PM *  3 points [-]

I did not say that non-reductionism is absurd. I said that "recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs".

Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.

Can you explain to me how it might work?

Edit: I googled "Robert Laughlin Reductionism" and actually found a longish paper he wrote about reductionism and his beliefs. I have some criticisms:

Who are to enact that the laws governing the behavior of particles are more ultimate than the transcendent, emergent laws of the collective they generate, such as the principles of organization responsible for emergent behavior? According to the physicist George F. R. Ellis true complexity emerges as higher levels of order from, but to a large degree independent of, the underlying low-level physics. Order implies higher-level systemic organization that has real top-down effects on the behavior of the parts at the lower level. Organized matter has unique properties (Ellis 2004).

Yudkowsky has a great refutation, in The Futility of Emergence, of using the description "emergent" to explain phenomena. From there:

I have lost track of how many times I have heard people say, "Intelligence is an emergent phenomenon!" as if that explained intelligence. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that intelligence is "emergent"? You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts—there's no detailed internal model to manipulate. Those who proffer the hypothesis of "emergence" confess their ignorance of the internals, and take pride in it; they contrast the science of "emergence" to other sciences merely mundane.

And even after the answer of "Why? Emergence!" is given, the phenomenon is still a mystery and possesses the same sacred impenetrability it had at the start.

Further down in the paper, we have this:

They point to higher organizing principles in nature, e.g. the principle of continuous symmetry breaking, localization, protection, and self-organization, that are insensitive to and independent of the underlying microscopic laws and often solely determine the generic low-energy properties of stable states of matter (‘quantum protectorates’) and their associated emergent physical phenomena. “The central task of theoretical physics in our time is no longer to write down the ultimate equations but rather to catalogue and understand emergent behavior in its many guises, including potentially life itself. We call this physics of the next century the study of complex adaptive matter” (Laughlin and Pines 2000).

Every time he makes the specific claim that reductionism makes worse predictions than a belief in "emergent phenomenon" in which "organizational structure" is an additional property that all of reality must have, in addition to "mass" and "velocity", he cites himself for this. He also does not provide any evidence for non-reductionism over reductionism; that is, he cannot name a single prediction where non-reductionism was right, and reductionism was wrong.

He goes on to say that reductionism is popular because you can always examine a system by looking at its internal mechanisms, but you can't always examine a system by looking at it from a "higher" perspective. A good example, he says, is genetic code: to assume that DNA is actually a complete algorithmic description of how to build a human body is an illogical conclusion, according to him.

He would rather suppose that the universe contains rules like "When a wavefunction contains these particular factorizations which happen not to cancel out, in a certain organizational structure, use a different mechanism to decide possible outcomes instead of the normal mechanism" than suppose that the laws of physics are consistent throughout and contain no such special cases. From the standpoint of simplicity, reductionism is simpler than non-reductionism, since non-reductionism is the same thing as reductionism except with the addition of special cases.

He specifically objects that reductionism isn't always the "most complete" description of a given phenomenon; that elements of a given phenomenon "cannot be explained" by looking at the underlying mechanism of that phenomenon.

I think this is nonsense. Even supposing that the laws of physics contain special cases for things like creating a human body out of DNA, or for things like consciousness, then in order for such special case exceptions to actually be implemented by the universe, they must be described in terms of the bottom-most level. Even if a DNA strand is not enough information to create a human being, and the actual program which creates the human being is hard coded into the universe, the object that the program must manipulate is still the most basic element of reality, the wavefunction, and therefore the program must specify how certain amplitude configurations must evolve, and therefore the program must describe reality on the level of quarks.

This is still reductionism, it is just reductionism with the assumed belief that the laws of physics were designed such that certain low-level effects would take place if certain high-level patterns came about in the wavefunction.

This is the only coherent way I could possibly imagine consciousness being an "emergent phenomenon", or the creation of a human body from the blueprints of DNA being impossible without additional information. Do you suppose Laughlin was saying something else?

At first when I read EY's "The Futility of Emergence" article, I didn't understand. It seemed to me that there's no way people actually think of "emergence" as being a scientific explanation for how a phenomenon occurs such that you could not predict that the phenomenon would occur if you know how every piece of the system worked individually. I didn't think it possible that anyone would actually think that knowing how all of the gears in a clock work doesn't mean you'll be able to predict what the clock will say based on the positions of the gears (for sufficiently "complex" clocks). And so I thought that EY was jumping the gun in this fight.

But perhaps he read this very paper, because Laughlin uses the word "emergent phenomenon" to describe behavior he doesn't understand, as if that's an explanation for the phenomenon. Even though you can't use this piece of information to make any predictions as to how reality is. Even though it doesn't constrain your anticipation into fewer possibilities, which is what real knowledge does. He uses this word as a substitute for "magic"; he does not know how an extremely complex phenomenon works, and so he supposes that the actual mechanism for the phenomenon is not enough to fully explain the phenomenon, that additional aspects of the phenomenon are simply uncaused, or that there is a special-case exclusion in the universe's laws for the phenomenon.

He does not explore the logical implications of this belief: that some aspects of a phenomenon have no causal mechanism, and therefore could not have possibly been predicted. He makes the claim that a hypothetical Theory of Everything would not be able to explain some of the things we find interesting about some phenomenon. Does he believe that if we programmed a physics simulator with the Correct Theory of Everything, and fed it the boundary conditions of the universe, then that simulated universe would not look exactly like our universe? That the first time DNA occurred on earth, in that simulated universe, it would not be able to create life (unlike in our universe) because we didn't include in the laws of physics a special clause saying that when you have DNA, interpret it and then tell the quarks to move differently from how they would have?

I believe that DNA contains real instructions for how to construct an entire human from start to finish. I don't think the laws of physics contain such a clause.

I read the whole paper by Laughlin and I was unimpressed. If this is the best argument against reductionism, then reductionism is undoubtedly the winner. You called Laughlin a "smart person", but he isn't smart enough to realize that calling the creation of humans from DNA an "emergent phenomenon" is literally equivalent to calling it a "magic phenomenon", in that it doesn't limit your anticipation of what could happen. If you can equally explain every possible outcome, you have no knowledge...

Comment author: CCC 19 March 2013 09:29:03AM 1 point [-]

It's a bit of an aside to your main point, but there are good arguments to support the assertion that DNA is only a partial recipe for an organism, such as a human. The remaining information is present in the environment of the mother's womb in other forms - for example, where there's an ambiguity in the DNA with regard to the folding of a certain protein, other proteins present in the womb may correct any incorrectly folded samples.

To look at your main point; if I were to present an argument against reductionism, I would point to the personal computer. This is a device constructed in order to run software; that is, to follow a list of instructions that manipulate binary data. Once you have a list of all the instructions that the computer can follow, and what these instructions do, a thorough electrical analysis of the computer's circuitry isn't going to provide much new information; and it will be a lot more complicated, and harder to understand. There's a conceptual point, there, at the level of individual software instructions, where further reductionism doesn't help to understand the phenomenon, and does make the analysis more complicated, and harder to work with.

A thorough electrical analysis is, of course, useful if one wishes to confirm that the stated behaviour of the basic software commands is both correctly stated, and free of unexpected side-effects. However, an attempt to describe (say) the rendering of a JPEG image in terms of which transistors are activated at which point is likely a futile exercise.
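To make that concrete, here's a toy example in Python (the instruction set is invented purely for illustration, not any real machine's): once the effect of each instruction is specified, every program's output is determined, and nothing about the circuitry underneath needs to be consulted.

def run(program, x):
    # Execute a list of (opcode, operand) pairs on a single 8-bit register.
    for op, arg in program:
        if op == "ADD":
            x = (x + arg) & 0xFF   # add with 8-bit wraparound
        elif op == "XOR":
            x ^= arg               # bitwise exclusive-or
        elif op == "SHL":
            x = (x << arg) & 0xFF  # shift left, discarding overflow
        else:
            raise ValueError("unknown opcode: " + op)
    return x

print(run([("ADD", 3), ("SHL", 2), ("XOR", 0b1010)], 5))  # prints 42

Whether the machine underneath is transistors, relays, or a simulation, the instruction-level description above fixes that answer.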

Comment author: Morendil 19 March 2013 09:37:33AM -2 points [-]

an attempt to describe (say) the rendering of a JPEG image in terms of which transistors are activated at which point is likely a futile exercise

Well, yes - but that arises from the fact that such devices are man-made, and (out of respect to our brains' limitations) designed to isolate the layers of explanation from one another - to obviate the need for a fully reductionistic account. The argument will not apply to things not man-made.

Comment author: CCC 19 March 2013 10:26:05AM *  2 points [-]

The entire science of psychology is based on the idea that it is useful to apply high-level rules to the neural functioning of the human brain. If I decide to eat a cookie, then I explain it in high-level terms; I was hungry, the cookie smelt delicious. An analysis in terms of the effect of airborne particles originating from the cookie on my nasal passages, and subsequent alterations in the pattern of neural activations in my brain, can give a far more complicated answer to the question of why I ate the cookie; but, again, I don't see how such a more complicated analysis would be better. If I want to understand my motivations more fully, I can do so in terms of mental biases, subconscious desires, and so forth; rather than a neuron-by-neuron analysis of my own brain.

And while it is technically true that I, as a human, am man-made (specifically, that I was made by my parents), a similar argument could be raised for any animal.

Such situations are rare, but not entirely unknown.

Comment author: JohnWittle 19 March 2013 05:39:40PM 2 points [-]

I disagree with your entire premise. I think we should pin down this concept of "levels of perspective" with some good jargon at some point, but regardless...

You can look at a computer from the level of perspective of "there are windows on the screen and I can move the mouse around. I can manipulate files on the hard drive with the mouse and the keyboard, and those changes will be reflected inside information boxes in the windows." This is the perspective most people see a computer from, but it is not a complete description of a computer (i.e. if someone unfamiliar with the concept of computers heard this description, they could not build a computer from base materials.)

You might also see the perspective, "There are many tiny dots of light on a flat surface, lit up in various patterns. Those patterns are caused by electricity moving in certain ways through silica wires arranged in certain ways." This is, I think, one level lower, but an unfamiliar person could not build a computer from scratch from this description.

Another level down, the description might be: "There is a CPU, which is composed of hundreds of thousands of transistors, arranged into logic gates such that when electricity is sent through them you can perform meaningful calculations. These calculations are written in files using a specific instruction set ("assembly language"). The files are stored on a disk in binary, with the disk containing many cesium atoms arranged in a certain order, which have either an extra electron or do not, representing 1 and 0 respectively. When the CPU needs to temporarily store a value useful in its calculations, it does so in the RAM, which is like the disk except much faster and smaller. Some of the calculations are used to make certain square-shaped lights on a large flat surface blink in certain ways, which provides arbitrary information to the user". We are getting to the point where an unfamiliar human might be able to recreate a computer from scratch, and therefore can be said to actually "understand" the system.

But still yet there are lower levels. Describing the actual logic gate organization in the CPU, the system used by RAM to store variables, how the magnetic needle accesses a specific bit on the hard drive by spinning it... All of these things must be known and understood in order to rebuild a computer from scratch.
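As a sketch of how the gate level already contains the higher-level behavior, here is ordinary integer addition built from nothing but NAND gates, in Python (the function names are mine, purely illustrative):

def NAND(a, b):
    return 1 - (a & b)

def XOR(a, b):
    t = NAND(a, b)
    return NAND(NAND(a, t), NAND(b, t))

def AND(a, b):
    return NAND(NAND(a, b), NAND(a, b))

def OR(a, b):
    return NAND(NAND(a, a), NAND(b, b))

def full_adder(a, b, carry):
    s = XOR(a, b)
    return XOR(s, carry), OR(AND(a, b), AND(s, carry))  # (sum bit, carry out)

def nand_add(x, y, bits=8):
    # Add two integers using only the gate-level description above.
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert all(nand_add(x, y) == (x + y) % 256 for x in range(40) for y in range(40))

Nothing at the "addition" level needs to be specified separately; it falls out of the gate wiring.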

Humans designed the computer at the level of "logic gates", "bits on a hard drive", "registers", etc., and so it is not necessary to go deeper than this to understand the entire system (just as you don't have to go deeper than "gears and cogs" to understand how a clock works, or deeper than classical physics - billiard balls bouncing into each other - to understand how a brain works).

But I hope that it's clear that the mechanisms at the lower levels of a system completely contain within them the behavior of the higher levels of the system. There are no new behaviors which you can only learn about by studying the system from a higher level of perspective; those complicated upper-level behaviors are entirely formed by the simple lower-level mechanisms, all the way down to the wave function describing the entire universe.

That is what reductionism means. If you know the state of the entire wavefunction describing the universe, you know everything there is to know about the universe. You could use it to predict that, in some Everett branches, the assassination of Franz Ferdinand on the third planet from the star Sol in the Milky Way galaxy would cause a large war on that planet. You could use it to predict the exact moment at which any particular "slice" of the wavefunction (representing a particular possible universe) will enter its maximum entropy state. You could use it to predict any possible behavior of anything and you will never be surprised. That is what it means to say that all of reality reduces down to the base-level physics. That is what it means to posit reductionism: that from an information-theoretic standpoint, you can make entirely accurate predictions about a system with only knowledge about its most basic level of perspective.

If you can demonstrate to me that there is some organizational structure of matter which causes that matter to behave differently from what would be predicted by just looking at the matter in question without considering its organization (which would require, by the way, all of reality to keep track not only of mass and of velocity but also of its organizational structure relative to nearby reality), then I will accept such a demonstration as being a complete and utter refutation of reductionism. But there is no such behavior.

Comment author: EHeller 21 March 2013 06:14:01AM 3 points [-]

The argument will not apply to things not man-made.

Not true. There is a reason no one uses quarks to describe chemistry. It's futile to describe what's happening in superfluid helium in terms of individual particle movement. Far better to use a two-fluid model, and vortices.

Comment author: Morendil 21 March 2013 07:19:19AM 4 points [-]

Let me amend that: the argument will not necessarily apply to things not man-made. There is a categorical difference in this respect between man-made things and the rest, and my intent was to say: "if you're going to put up an argument against reductionism, don't use examples of man-made things".

Whereas we have good reasons to bar "leaky abstractions" from our designs, Nature labors under no such constraint. If it turns out that some particular process that happens in superfluid helium can be understood only by referring to the quark level, we are not allowed to frown at Nature and say "oh, poor design; go home, you're drunk".

For instance, it turns out we can almost describe the universe in the Newtonian model with its relatively simple equations - a nice abstraction, if only it were non-leaky - but anomalies like the precession of Mercury turn up that require us to use General Relativity instead, and to take it into account when building our GPS systems.

The word "futile" in this context strikes me as wishful thinking, projecting onto reality our parochial notion of how complicated a reductionistic account of the universe "should" be. Past experience tells us that small anomalies sometimes require the overthrow of entires swathes of science, in the name of reductionism: there keep turning up cases where science considers it necessary, not futile, to work things out in terms of the lower levels of description.

Comment author: TheOtherDave 19 March 2013 02:44:41PM *  3 points [-]

The remaining information is present in the environment of the mother's womb in other forms - for example, where there's an ambiguity in the DNA with regard to the folding of a certain protein, other proteins present in the womb may correct any incorrectly folded samples.

As an aside to an aside, I wonder how much information about the DNA reading frame could in principle be extracted from the DNA of a female organism, given the knowledge (or the assumption) that mature females can gestate a zygote? Almost all possible reading frames would be discardable on the grounds that the resulting organism would not be able to gestate a zygote, of course, but I don't have any intuitive sense of how big the remaining search space would be.

And as a nod towards staying on topic:

a thorough electrical analysis of the computer's circuitry isn't going to provide much new information;

Well, it will, and it won't.

If what I mostly care about is the computer's behavior at the level of instructions, then sure, understanding the instructions gets me most of the information that I care about. Agreed.

OTOH, if what I mostly care about is the computer's behavior at the level of electrical flows through circuits (for example, if I'm trying to figure out how to hack the computer without an input device by means of electrical induction, or confirm that it won't catch fire in ordinary use), then a thorough electrical analysis of the computer's circuitry provides me with tons of indispensable new information.

What counts as "information" in a colloquial sense depends a lot on my goals. It might be useful to taboo the word in this discussion.

Comment author: CCC 20 March 2013 07:26:22AM *  1 point [-]

As an aside to an aside, I wonder how much information about the DNA reading frame could in principle be extracted from the DNA of a female organism, given the knowledge (or the assumption) that mature females can gestate a zygote? Almost all possible reading frames would be discardable on the grounds that the resulting organism would not be able to gestate a zygote, of course, but I don't have any intuitive sense of how big the remaining search space would be.

My intuition says "very, very big". Consider: depending on womb conditions, the percentage of information expressed in the baby which is encoded in the DNA might change. As an extreme example, consider a female creature whose womb completely ignores the DNA of the zygote, creating instead a perfect clone of the mother. Such an example makes it clear that the search space is at least as large as the number of possible female creatures that are able to produce a perfect clone of themselves.

OTOH, if what I mostly care about is the computer's behavior at the level of electrical flows through circuits (for example, if I'm trying to figure out how to hack the computer without an input device by means of electrical induction, or confirm that it won't catch fire in ordinary use), then a thorough electrical analysis of the computer's circuitry provides me with tons of indispensable new information.

I accept your point. Such an analysis does provide a more complete view of the computer, which is useful in some circumstances.

Comment author: TheOtherDave 20 March 2013 03:32:35PM 0 points [-]

the search space is at least as large as the number of possible female creatures that are able to produce a perfect clone of themselves.

Sure, I agree that one permissible solution is a decoder which produces an organism capable of cloning itself. And while I'm willing to discard as violating the spirit of the thought experiment decoder designs which discard the human DNA in its entirety and create a predefined organism (in much the same sense that I would discard any text-translation algorithm that discarded the input text and printed out the Declaration of Independence as a legitimate translator of the input text), there's a large space of possibilities here.

Comment author: EHeller 21 March 2013 07:45:18AM *  -2 points [-]

I don't think you understand Laughlin's point at all. Compare a small volume of superfluid liquid helium and a small volume of water with some bacteria in it. Both systems have the exact same Hamiltonian, and both have roughly the same amount of the same constituents (protons, neutrons, electrons), but the systems behave vastly differently. We can't understand their differences by going to a lower level of description.

Modern material science/solid state physics is the study of the tremendous range of different, complex behaviors that can arise from the same Hamiltonians. Things like spontaneous symmetry breaking are rigorously defined, well-observed phenomena that depend on aggregate, not individual behavior.

Comment author: JohnWittle 21 March 2013 09:15:27AM *  1 point [-]

Why didn't he mention superfluidity, or solid state physics, then? The two examples he listed were consciousness not being explainable from a reductionist standpoint, and DNA not containing enough information to come anywhere near being a complete instruction set for building a human (wrong).

Also, I'm pretty sure that the superfluid tendencies of liquid helium-4 come from the fact that it is composed of six particles (two protons, two neutrons, two electrons), each with half-integer spin. Because you can't make 6 halves add up to anything other than a whole number, quantum effects mean that all of the particles have exactly the same state and are utterly indistinguishable, even positionally, and that's what causes the strange effects. I do not know exactly how this effect reduces down to individual behavior, since I don't know exactly what "individual behavior" could mean when we are talking about particles which cannot be positionally distinguished, but to say that superfluid helium-4 and water have the exact same Hamiltonian is not enough to say that they should have the same properties.
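Spelling out the spin arithmetic (this is just standard angular-momentum addition, nothing specific to Laughlin's paper): combining six spin-1/2 constituents can only give an integer total,

S_{\mathrm{total}} = \underbrace{\tfrac{1}{2} \oplus \tfrac{1}{2} \oplus \tfrac{1}{2} \oplus \tfrac{1}{2} \oplus \tfrac{1}{2} \oplus \tfrac{1}{2}}_{2\mathrm{p}\,+\,2\mathrm{n}\,+\,2\mathrm{e}} \;\in\; \{0, 1, 2, 3\},

so a helium-4 atom always has integer total spin and obeys Bose-Einstein statistics.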

Spontaneous symmetry breaking can be reduced down to quantum mechanics. You might solve a field equation and find that there are two different answers as to the mass of two quarks. In one answer, quark A is heavier than quark B, but in the other answer, quark B is heavier than quark A, and you might call this symmetry breaking, but just because when you take the measurement you get one of the answers and not the other, does not mean that the symmetry was broken. The model correctly tells you to anticipate either answer with 1:1 odds, and you'll find that your measurements agree with this: 50% of the time you'll get the first measurement, and 50% of the time you'll get the second measurement. In the MW interpretation, symmetry is not broken. The measurement doesn't show what really happened, it just shows which branch of the wavefunction you ended up in. Across the entire wavefunction, symmetry is preserved.

Besides, it's not like spontaneous symmetry breaking is a behavior which arises out of the organization of the particles. It occurs at the individual level.

Comment author: EHeller 21 March 2013 06:35:09PM *  1 point [-]

I don't know why Laughlin wrote what he did; you didn't link to the paper. However, he comes from a world where solid state physics is obvious, and "everyone knows" various things (emergent properties of superfluid helium, for instance). Remember, his point of reference as a solid state physicist is quite different from a non-specialist's, so there is a huge inferential distance. Also remember that in physics "emergent" is a technical, defined concept.

Your explanation of superfluid helium isn't coherent, and I had a book-length post typed up when a simpler argument presented itself. Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say "in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons," you do not have enough information to tell me how the system behaves, but from a purely reductionist stand-point, you should. If this doesn't sway you, let's agree to disagree, because I think spontaneous symmetry breaking should be enough to make my point, and it's easier to explain.

I don't think you understand what spontaneous symmetry breaking is; I have very little idea what you are talking about. Let's ignore quantum mechanics for the time being, because we can describe what's happening on an entirely classical level. Spontaneous symmetry breaking arises when the Hamiltonian has a symmetry that the aggregate ground-state does not. That's the whole definition, and BY DEFINITION it depends on details of the aggregate ground state and the organization of the particles.
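The textbook "Mexican hat" potential is the standard way to picture this definition: take a classical complex field \phi with energy

V(\phi) = -\mu^2 \, |\phi|^2 + \lambda \, |\phi|^4 .

V is invariant under the rotation \phi \to e^{i\theta}\phi, but every minimum-energy configuration sits at |\phi| = \sqrt{\mu^2 / (2\lambda)} with one particular phase \theta, so any actual ground state picks out a direction and has less symmetry than the Hamiltonian that produced it.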

And finally, you can rigorously prove via renormalization group methods that in many systems the high-energy degrees of freedom can be averaged out entirely and have no effect on the form of the low-energy theory. In these systems, to describe low energy structures in such theories (most theories) the details of the microphysics literally do not matter. Computational physicists use this to their advantage all the time: if they want to look at meso- or macro-scale physics, they assume very simple micromodels that are easy to simulate, instead of realistic ones, and are fully confident they get the same meso and macro results.
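A crude sketch of that coarse-graining step, in Python (the "block-spin" majority rule is a standard teaching device; the function name and the numbers are just illustrative):

import numpy as np

def block_spin(spins, b=3):
    # One majority-rule block-spin step: replace each b-by-b block of +/-1
    # spins with the sign of its sum, discarding the block's internal detail.
    n = (spins.shape[0] // b) * b
    blocks = spins[:n, :n].reshape(n // b, b, n // b, b)
    coarse = np.sign(blocks.sum(axis=(1, 3)))
    return np.where(coarse == 0, 1, coarse)  # ties cannot occur for odd b*b

rng = np.random.default_rng(0)
micro = rng.choice([-1, 1], size=(81, 81))  # some made-up microscopic configuration
print(block_spin(micro).shape)              # (27, 27): fine-grained detail averaged away

Iterating steps like this is the spirit of the argument: what survives repeated averaging is the low-energy description, and most of the microscopic detail never shows up in it.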

Comment author: EHeller 21 March 2013 07:32:35PM *  2 points [-]

As a general pattern, I find that a lot of my physics-related posts receive downvotes (both my posts in this very thread); I then request an explanation for why, no one responds, and then I receive upvotes. What I really want is just for the people giving the downvotes to give me some feedback.

Physics was my PhD subject, and I believe that what I offer to the community is an above-average knowledge of the subject. If you believe my explanation is poorly thought out, incoherent or just hard to parse, please downvote, but let me know what it is that's bugging you. I want to be communicating effectively, and without feedback from the people who think my above post is not helpful, I'm likely to interpret downvotes in a noisy, haphazard way.

Comment author: JohnWittle 22 March 2013 12:20:24AM *  2 points [-]

Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say "in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons," you do not have enough information to tell me how the system behaves, but from a purely reductionist stand-point, you should.

I'll admit that I am not a PhD particle physicist, but what you describe as reductionism is not what I believe to be true. If we ignore quantum physics, and describe what's happening on an entirely classical level, then we can reduce the behavior of a physical system down to its most fundamental particles and the laws which govern the interactions between those basic particles. You can predict how a system will behave by knowing about the position and the velocity of every particle in the system; you do not have to keep separate track of an organizational system as a separate property, because the organization of a physical system can be deduced from the other two properties.

If reductionism, to you, means that by simply knowing the number of electrons, protons, and neutrons which exist in the universe, you should be able to know how the entire universe behaves, then I agree: reductionism is false.

With that in mind, can you give an example of top-down causality actually occurring in the universe? A situation where the behavior of low-level particles interacting cannot predict the behavior of systems entirely composed of those low-level particles, but instead where the high-level organization causes the interaction between the low-level particles to be different?

That's what I think reductionism is: you cannot have higher-level laws contradict lower-level laws; that when you run the experiment to see which set of laws wins out, the lower-level laws will be correct every single time. Is this something you disagree with?

I don't think you understand what spontaneous symmetry breaking is

I probably don't. I was going based off of an AP Physics course in high school. My understanding is basically this: if you dropped a ball perfectly onto the top of a Mexican hat, symmetry would demand that all of the possible paths the ball could take are equally valid. But in the end, the ball only chooses one path, and which path it chose could not have been predicted from the base-level laws. A quick look at Wikipedia confirms that this idea at least has something to do with symmetry breaking, since one of the subsections for "Spontaneous Symmetry Breaking" is called "A pedagogical example: the Mexican hat potential", and so I cannot be entirely off.

In classical physics, the ball actually takes one path, and this path cannot be predicted in advance. But in QM, the ball takes all of the paths, and different you's (different slices of the wavefunction which evolved from the specific neuron pattern you call you), combined, see every possible path the ball could have taken, and so across the wavefunction symmetry isn't broken.

Since you're a particle physicist and you disagree with this outlook, I'm sure there's something wrong with it, though.

In these systems, to describe low energy structures in such theories (most theories) the details of the microphysics literally do not matter.

Is this similar to saying that when you are modeling how an airplane flies, you don't need to model each particular nitrogen atom, oxygen atom, carbon atom, etc in the air, but can instead use a model which just talks about "air pressure", and your model will still be accurate? I agree with you; modeling every single particle when you're trying to decide how to fly your airplane is unnecessary and you can get the job done with a more incomplete model. But that does not mean that a model which did model every single atom in the air would be incorrect; it just does not have a large enough effect on the airplane to be noticeable. Indeed, I can see why computational physicists would use higher level models to their advantage, when such high level models still get the right answer.
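For instance, a back-of-the-envelope lift estimate (the numbers below are made up, just to show the shape of the calculation) works entirely with bulk quantities and never mentions a single molecule:

rho, v, S, C_L = 1.2, 70.0, 120.0, 0.5   # air density (kg/m^3), airspeed (m/s), wing area (m^2), lift coefficient - all made-up values
lift = 0.5 * rho * v**2 * S * C_L        # the standard lift formula; about 1.8e5 N here
print(lift)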

But reductionism simply says that there is no situation where a high level model could get a more accurate answer than a low level model. The low level model is what is actually happening. Newtonian mechanics is good enough to shoot a piece of artillery at a bunker a mile away, but if you wanted to know with 100% accuracy where the shell was going to land, you would have to go further down than this. The more your model breaks macroscopic behavior down into the interactions between its base components, the closer your model resembles the way reality actually works.

Do you disagree?

Comment author: whowhowho 21 March 2013 09:50:31PM 0 points [-]

I did not say that non-reductionism is absurd. I said that "recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs".

Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.

Can you explain to me how it might work?

One formulation of reductionism is that natural laws can be ordered in a hierarchy, with the higher-level laws being predictable from, or reducible to, the lower ones. So emergentism, in the cognate sense, would be reductionism not working: that stack of laws failing to collapse down to the lowest level.

Who are to enact that the laws governing the behavior of particles are more ultimate than the transcendent, emergent laws of the collective they generate, such as the principles of organization responsible for emergent behavior? According to the physicist George F. R. Ellis true complexity emerges as higher levels of order from, but to a large degree independent of, the underlying low-level physics. Order implies higher-level systemic organization that has real top-down effects on the behavior of the parts at the lower level. Organized matter has unique properties (Ellis 2004).

There are two claims there: one contentious, one not. That there are multiply-realisable, substrate-independent higher-level laws is not contentious. For instance, wave equations have the same form for water waves, sound waves and so on. The contentious claim is that this is ipso facto top-down causation. Substrate-independent laws are still reducible to substrates, because they are predictable from the behaviour of their substrates.

And even after the answer of "Why? Emergence!" is given, the phenomenon is still a mystery and possesses the same sacred impenetrability it had at the start.

I don't see how that refutes the above at all. For one thing, Laughlin and Ellis do have detailed examples of emergent laws (in their rather weak sense of "emergent"). For another, they are not calling on emergence itself as doing any explaining. "Emergence isn't explanatory" doesn't refute "emergence is true". For a third, I don't see any absurdity here. I see a one-word-must-have-one-meaning assumption that is clouding the issue. But where a problem is so fuzzily defined that it is hard even to identify the "sides", then one can't say that one side is "absurd".

Every time he makes the specific claim that reductionism makes worse predictions than a belief in "emergent phenomenon" in which "organizational structure" is an additional property that all of reality must have, in addition to "mass" and "velocity", he cites himself for this.

Neither are supposed to make predictions. Each can be considered a methodology for finding laws, and it is the laws that do the predicting. Each can also be seen as a meta-level summary of the laws so far found.

He also does not provide any evidence for non-reductionism over reductionism; that is, he cannot name a single prediction where non-reductionism was right, and reductionism was wrong.

EY can't do that for MWI either. Maybe it isn't all about prediction.

A good example, he says, is genetic code: to assume that DNA is actually a complete algorithmic description of how to build a human body is an illogical conclusion, according to him.

That's robustly true. Genetic code has to be interpreted by a cellular environment. There are no self-decoding codes.

He would rather suppose that the universe contains rules like "When a wavefunction contains these particular factorizations which happen not to cancel out, in a certain organizational structure, use a different mechanism to decide possible outcomes instead of the normal mechanism" than suppose that the laws of physics are consistent throughout and contain no such special cases. From the standpoint of simplicity, reductionism is simpler than non-reductionism, since non-reductionism is the same thing as reductionism except with the addition of special cases.

Reductionism is an approach that can succeed or fail. It isn't true a priori. If reductionism failed, would you say that we should not even contemplate non-reductionism? Isn't that a bit like Einstein's stubborn opposition to QM?

He specifically objects that reductionism isn't always the "most complete" description

I suppose you mean that the reductionistic explanation isn't always the most complete explanation... well, everything exists in a context.

of a given phenomenon; that elements of a given phenomenon "cannot be explained" by looking at the underlying mechanism of that phenomenon.

There is no a priori guarantee that such an explanation will be complete.

I think this is nonsense. Even supposing that the laws of physics contain special cases for things like creating a human body out of DNA, or for things like consciousness,

That isn't the emergentist claim at all.

then in order for such special case exceptions to actually be implemented by the universe, they must be described in terms of the bottom-most level.

Why? Because you described them as "laws of physics"? An emergentist wouldn't. Your objections seem to assume that some kind of reductionism+determinism combination is true in the first place. That's just gainsaying the emergentist claim.

Even if a DNA strand is not enough information to create a human being, and the actual program which creates the human being is hard coded into the universe, the object that the program must manipulate is still the most basic element of reality, the wavefunction, and therefore the program must specify how certain amplitude configurations must evolve, and therefore the program must describe reality on the level of quarks.

If there is top-down causation, then its laws must be couched in terms of lower-level AND higher-level properties. And are therefore not reductionistic. You seem to be tacitly assuming that there are no higher-level properties.

This is still reductionism, it is just reductionism with the assumed belief that the laws of physics were designed such that certain low-level effects would take place if certain high-level patterns came about in the wavefunction.

Cross-level laws aren't "laws of physics". Emergentists may need to assume that microphysical laws have "elbow room", in order to avoid overdetermination, but that isn't obviously wrong or absurd.

At first when I read EY's "The Futility of Emergence" article, I didn't understand. It seemed to me that there's no way people actually think of "emergence" as being a scientific explanation for how a phenomenon occurs

As it happens, no-one does. That objection was made in the most upvoted response to his article.

such that you could not predict that the phenomenon would occur if you know how every piece of the system worked individually.

Can you predict qualia from brain-states?

I didn't think it possible that anyone would actually think that knowing how all of the gears in a clock work doesn't mean you'll be able to predict what the clock will say based on the positions of the gears (for sufficiently "complex" clocks).

Mechanisms have to break down into their components because they are built up from components. And emergentists would insist that that does not generalise.

But perhaps he read this very paper, because Laughlin uses the word "emergent phenomenon" to describe behavior he doesn't understand, as if that's an explanation for the phenomenon.

Or as a hint about how to go about understanding them.

He does not explore the logical implications of this belief: that some aspects of a phenomenon have no causal mechanism,

That's not what E-ism says at all.

and therefore could not have possibly been predicted. He makes the claim that a hypothetical Theory of Everything would not be able to explain some of the things we find interesting about some phenomenon. Does he believe that if we programmed a physics simulator with the Correct Theory of Everything, and fed it the boundary conditions of the universe, then that simulated universe would not look exactly like our universe?

That's an outcome you would get with common or garden indeterminism. Again: reductionism is NOT determinism.

That the first time DNA occurred on earth, in that simulated universe, it would not be able to create life (unlike in our universe) because we didn't include in the laws of physics a special clause saying that when you have DNA, interpret it and then tell the quarks to move differently from how they would have?

What's supposed to be absurd there? Top-down causation, or top-down causation that only applies to DNA?

I read the whole paper by Laughlin and I was unimpressed.

The arguments for emergence tend not to be good. Neither are the arguments against. A dispute about a poorly-defined distinction with poor arguments on both sides isn't a dispute where one side is "absurd".