Comment author: shminux 21 March 2013 07:49:38PM 1 point [-]

My experience indicates that a vaguely anti-Eliezerish post, like someone questioning his orthodox reductionism, MWI or cryonics, gets an initial knee-jerk downvote, probably (that's only an untested hypothesis) from those who think that the matter is long settled and should not be brought up again. Eventually a less-partial crowd reads it, and it may be upvoted or downvoted on its merits rather than on its degree of conformity. Drawing attention to the current total vote is likely to prompt this moderate crowd to actually vote, one way or another.

Comment author: JohnWittle 22 March 2013 12:46:55AM 4 points [-]

When whowhowho posted a short list of people who don't like reductionism, I said to myself "if reductionism is right, I want to believe reductionism is right. If reductionism is wrong, I want to believe reductionism is wrong" etc. I then googled those names, since those are smart people, and found a paper published by the first name on the list. The main arguments of the paper were: "solid state physicists don't believe in reductionism", "consciousness is too complex to be caused by the interactions between neurons", and "biology is too complex for DNA to contain a complete instruction set for cells to assemble into a human being". Since argument screens off authority and the latter two arguments are wrong, I kept my belief.

EHeller apparently has no quarrel with reductionism, except that it isn't a "good way to solve problems", with which I agree entirely: if you try to build an airplane by modeling air molecules, it will take too long. But that doesn't mean that if you try to build an airplane by modeling air molecules, you will get a wrong answer. You will get the right answer. So why did EHeller state his disagreement?

The paper uses "emergent" in exactly the way that EY described in The Futility of Emergence, and I was surprised by that, since when I first read The Futility of Emergence I thought that EY was being stupid and that there was no way people could actually make such a basic mistake. But they do! I had no idea that people who reject reductionism actually use arguments like "consciousness is an emergent phenomenon which cannot be explained by looking at the interaction between neurons". They don't come out and say "top-down causality", which really is a synonym for magic, like EHeller did, but they do say "emergence".

When I downvoted, it was after I had made sure I understood spontaneous symmetry breaking, and that it was not top-down causality, since that was the one argument EHeller presented that I took seriously. I think fewer people believe in reductionism purely on EY's say-so than you suppose.

Comment author: EHeller 21 March 2013 06:35:09PM *  1 point [-]

I don't know why Laughlin wrote what he did; you didn't link to the paper. However, he comes from a world where solid state physics is obvious, and "everyone knows" various things (the emergent properties of superfluid helium, for instance). Remember, his point of reference as a solid state physicist is quite different from a non-specialist's, so there is a huge inferential distance. Also remember that in physics "emergent" is a technical, defined concept.

Your explanation of superfluid helium isn't coherent, and I had a book-length post typed up when a simpler argument presented itself. Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say "in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons," you do not have enough information to tell me how the system behaves, but from a purely reductionist stand-point, you should. If this doesn't sway you, let's agree to disagree, because I think spontaneous symmetry breaking should be enough to make my point, and it's easier to explain.

I don't think you understand what spontaneous symmetry breaking is; I have very little idea what you are talking about. Let's ignore quantum mechanics for the time being, because we can describe what's happening on an entirely classical level. Spontaneous symmetry breaking arises when the Hamiltonian has a symmetry that the aggregate ground state does not. That's the whole definition, and BY DEFINITION it depends on details of the aggregate ground state and the organization of the particles.

And finally, you can rigorously prove via renormalization group methods that in many systems the high-energy degrees of freedom can be averaged out entirely and have no effect on the form of the low-energy theory. In these systems (most systems), the details of the microphysics literally do not matter for describing low-energy structures. Computational physicists use this to their advantage all the time: if they want to look at meso- or macro-scale physics, they assume very simple micro-models that are easy to simulate, instead of realistic ones, and are fully confident they will get the same meso- and macro-scale results.
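This "micro-details wash out" point can be illustrated with a toy sketch (my own illustration, not from the comment; all names and parameters are made up): two random walks with completely different micro-level step rules, but the same step variance, produce the same macroscopic diffusion.

```python
import random
import statistics

def final_position_variance(step_sampler, n_steps, n_walkers, seed):
    # Simulate many independent 1-D random walks and return the
    # variance of the final positions -- a macro-scale observable.
    rng = random.Random(seed)
    finals = []
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += step_sampler(rng)
        finals.append(x)
    return statistics.pvariance(finals)

# Two different micro-models with the same step variance (1.0):
coin = lambda rng: rng.choice([-1.0, 1.0])   # discrete +/-1 steps
gauss = lambda rng: rng.gauss(0.0, 1.0)      # continuous Gaussian steps

v_coin = final_position_variance(coin, n_steps=400, n_walkers=2000, seed=1)
v_gauss = final_position_variance(gauss, n_steps=400, n_walkers=2000, seed=2)

# Both are close to n_steps * step_variance = 400, regardless of the
# micro-level step distribution: the macro law only sees the variance.
print(v_coin, v_gauss)
```

The low-energy (long-time) behavior depends only on one coarse parameter of the micro-model, which is the same flavor of claim the renormalization group makes rigorously.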

Comment author: JohnWittle 22 March 2013 12:20:24AM *  2 points [-]

Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say "in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons," you do not have enough information to tell me how the system behaves, but from a purely reductionist stand-point, you should.

I'll admit that I am not a PhD particle physicist, but what you describe as reductionism is not what I believe to be true. If we ignore quantum physics and describe what's happening on an entirely classical level, then we can reduce the behavior of a physical system down to its most fundamental particles and the laws which govern the interactions between those particles. You can predict how a system will behave by knowing the position and velocity of every particle in the system; you do not have to keep separate track of an organizational structure as a third property, because the organization of a physical system can be deduced from the other two.

If reductionism, to you, means that by simply knowing the number of electrons, protons, and neutrons which exist in the universe, you should be able to know how the entire universe behaves, then I agree: reductionism is false.

With that in mind, can you give an example of top-down causality actually occurring in the universe? A situation where the behavior of low-level particles interacting cannot predict the behavior of systems entirely composed of those low-level particles, but instead where the high-level organization causes the interaction between the low-level particles to be different?

That's what I think reductionism is: you cannot have higher-level laws contradict lower-level laws; when you run the experiment to see which set of laws wins out, the lower-level laws will be correct every single time. Is this something you disagree with?

I don't think you understand what spontaneous symmetry breaking is

I probably don't. I was going off an AP Physics course in high school. My understanding is basically this: if you dropped a ball perfectly onto the top of a Mexican hat, symmetry would demand that all of the possible paths the ball could take are equally valid. But in the end the ball takes only one path, and which path it takes could not have been predicted from the base-level laws. A quick look at Wikipedia confirms that this idea at least has something to do with symmetry breaking, since one of the subsections for "Spontaneous symmetry breaking" is called "A pedagogical example: the Mexican hat potential", so I cannot be entirely off.
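The classical Mexican-hat picture can be sketched numerically (my own toy illustration, with made-up parameters, not part of the original comment): the potential is perfectly rotationally symmetric, yet each relaxed outcome sits at one particular angle on the circle of minima, fixed by an arbitrarily tiny initial perturbation.

```python
import math

def relax(theta0, r0=1e-6, lr=0.01, steps=5000):
    # Gradient descent on the rotationally symmetric "Mexican hat"
    # potential V(x, y) = (x^2 + y^2 - 1)^2, starting from a tiny
    # perturbation of the unstable maximum at the origin.
    x = r0 * math.cos(theta0)
    y = r0 * math.sin(theta0)
    for _ in range(steps):
        r2 = x * x + y * y
        # dV/dx = 4x(r^2 - 1), dV/dy = 4y(r^2 - 1)
        x, y = x - lr * 4 * x * (r2 - 1), y - lr * 4 * y * (r2 - 1)
    return x, y, math.atan2(y, x)

# The potential (the "law") is symmetric under rotation, but each
# relaxed state lands at one particular angle, determined entirely
# by the tiny initial perturbation theta0.
for theta0 in (0.3, 2.0, 5.0):
    x, y, theta = relax(theta0)
    # r converges to 1.0; theta equals theta0 up to a multiple of 2*pi
    print(round(math.hypot(x, y), 3), round(theta, 3))
```

The symmetric law plus an asymmetric ground state is the whole phenomenon; nothing here requires a separate high-level cause.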

In classical physics, the ball actually takes one path, and this path cannot be predicted in advance. But in QM, the ball takes all of the paths, and the different yous (different slices of the wavefunction which evolved from the specific neuron pattern you call "you"), combined, see every possible path the ball could have taken, so across the wavefunction symmetry isn't broken.

Since you're a particle physicist and you disagree with this outlook, I'm sure there's something wrong with it, though.

In these systems, to describe low energy structures in such theories (most theories) the details of the microphysics literally do not matter.

Is this similar to saying that when you are modeling how an airplane flies, you don't need to model each particular nitrogen, oxygen, or carbon atom in the air, but can instead use a model which just talks about "air pressure", and your model will still be accurate? I agree with you; modeling every single particle when you're trying to decide how to fly your airplane is unnecessary, and you can get the job done with a more incomplete model. But that does not mean that a model which did track every single atom in the air would be incorrect; the difference just does not have a large enough effect on the airplane to be noticeable. Indeed, I can see why computational physicists would use higher-level models to their advantage, when such models still get the right answer.

But reductionism simply says that there is no situation where a high-level model could give a more accurate answer than a low-level model. The low-level model is what is actually happening. Newtonian mechanics is good enough to shoot a piece of artillery at a bunker a mile away, but if you wanted to know with 100% accuracy where the shell was going to land, you would have to go further down than that. The more your model breaks macroscopic behavior down into interactions between its base components, the more closely your model resembles the way reality actually works.

Do you disagree?

Comment author: whowhowho 21 March 2013 09:18:28AM *  -1 points [-]

But they are still laws of physics,

Microphysical laws map microphysical states to other microphysical states. Top-down causation maps macrophysical states to microphysical states.

Such laws are still fundamental laws, on the lowest level of the universe.

In the sense that they are irreducible, yes. In the sense that they are concerned only with microphysics, no.

Ergo, a reductionistic universe is also deterministic from a probabilistic standpoint, i.e. the lowest level properties and laws can tell you exactly what to anticipate, and with how much subjective probability.

"Deterministic" typically means that an unbounded agent will achieve probabilities of 1.0.

Comment author: JohnWittle 21 March 2013 09:45:06AM 0 points [-]

Top-down causation maps macrophysical states to microphysical states

Can you name any examples of such a phenomenon?

"Deterministic" typically means that an unbounded agent will achieve probabilities of 1.0.

Oh, well in that case quantum physics throws determinism out the window for sure. I still think there's something to be said for correctly assigning subjective probabilities to your anticipations, such that of the things you think will happen with 50% probability, half actually happen, i.e. you are correctly calibrated.

An unbounded agent in our universe would be able to achieve such absolutely correct calibration; that's all I meant to imply.
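This notion of calibration can be sketched in a few lines (my own illustration, with simulated data, not anything from the thread): a forecaster is calibrated if, among events assigned probability p, a fraction of roughly p actually occur.

```python
import random

rng = random.Random(42)

# Simulated forecasts: each event is assigned a random predicted
# probability p, and then actually "happens" with that probability
# (i.e. the forecaster is perfectly calibrated by construction).
events = [(p, rng.random() < p) for p in (rng.random() for _ in range(50000))]

# Calibration check: among events assigned roughly 70% probability,
# roughly 70% should actually occur.
bucket = [happened for p, happened in events if 0.65 <= p < 0.75]
print(sum(bucket) / len(bucket))  # close to 0.70
```

An unbounded agent could in principle make the observed frequencies match the assigned probabilities in every bucket, which is the sense of "correct calibration" meant above.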

Comment author: EHeller 21 March 2013 07:45:18AM *  -2 points [-]

I don't think you understand Laughlin's point at all. Compare a small volume of superfluid liquid helium and a small volume of water with some bacteria in it. Both systems have the exact same Hamiltonian, and both have roughly the same amount of the same constituents (protons, neutrons, electrons), but the systems behave vastly differently. We can't understand their differences by going to a lower level of description.

Modern materials science/solid state physics is the study of the tremendous range of different, complex behaviors that can arise from the same Hamiltonians. Things like spontaneous symmetry breaking are rigorously defined, well-observed phenomena that depend on aggregate, not individual, behavior.

Comment author: JohnWittle 21 March 2013 09:15:27AM *  1 point [-]

Why didn't he mention superfluidity, or solid state physics, then? The two examples he listed were consciousness not being explainable from a reductionist standpoint, and DNA not containing enough information to come anywhere near being a complete instruction set for building a human (wrong).

Also, I'm pretty sure that the superfluid tendencies of liquid helium-4 come from the fact that each atom is composed of six particles (two protons, two neutrons, two electrons), each with half-integer spin. Because six halves must add up to a whole number, the atom as a whole is a boson, and quantum effects let all of the atoms occupy exactly the same state, utterly indistinguishable even positionally; that's what causes the strange effects. I do not know exactly how this reduces down to individual behavior, since I don't know what "individual behavior" could mean for particles which cannot be positionally distinguished, but to say that superfluid helium-4 and water have the exact same Hamiltonian is not enough to say that they should have the same properties.

Spontaneous symmetry breaking can be reduced down to quantum mechanics. You might solve a field equation and find two different answers for the masses of two quarks: in one answer quark A is heavier than quark B, and in the other quark B is heavier than quark A. You might call this symmetry breaking, but just because a measurement gives you one of the answers and not the other does not mean the symmetry was broken. The model correctly tells you to anticipate either answer with 1:1 odds, and your measurements will agree: 50% of the time you'll get the first answer, and 50% of the time the second. In the MW interpretation, symmetry is not broken. The measurement doesn't show what really happened; it just shows which branch of the wavefunction you ended up in. Across the entire wavefunction, symmetry is preserved.

Besides, it's not like spontaneous symmetry breaking is a behavior which arises out of the organization of the particles. It occurs at the individual level.

Comment author: sparkles 21 March 2013 04:06:19AM *  3 points [-]

Comment author: JohnWittle 21 March 2013 06:25:38AM 4 points [-]

It sounds like you have some extremely strong Ugh Fields. It works like this:

A long, long time ago, you had an essay due on Monday and it was Friday. You had the thought, "Man, I gotta get that essay done", and it caused you a small amount of discomfort when you had the thought. That discomfort counted as negative feedback, as a punishment, to your brain, and so the neural circuitry which led to having the thought got a little weaker, and the next time you started to have the thought, your brain remembered the discomfort and flinched away from thinking about the essay instead.

As this condition reinforced itself, you thought less and less about the paper, and then eventually the deadline came and you didn't have it done. After it was already a day late, thinking about it really caused you discomfort, and the flinch got even stronger; without knowing it, you started psychologically conditioning yourself to avoid thinking about it.

This effect has probably been building in you for years. Luckily, there are some immediately useful things you can do to fight back.

Do you like a certain kind of candy? Do you enjoy tobacco snuff? You can use positive conditioning on your brain the same way you did before, except in the opposite direction. Put a bag of candy on your desk, or in your backpack. Every time you think about an assignment you need to do, or how you have some job applications to fill out, eat a piece of candy. As long as you get as much pleasure out of the candy as you get pain out of the thought of having to do work, the neural circuitry leading to the thought of doing work will get stronger, as your brain begins to think it is being rewarded for having the thought.

It doesn't take long at all before the nausea of actually doing work is entirely gone, and you're back to being just "lazy". But at this point, the thought of doing work will be much less painful, and the candy (or whatever) reward will be much stronger.

All you have to do is trick your brain into thinking it will get candy every time it thinks about doing work. Even if you know that it's just you rewarding yourself, it still works. Yeah, it's practically cheating, but your goal should be to do what works. Just trying really, really hard isn't just painful; it also doesn't work. Cheat instead.

Comment author: whowhowho 20 March 2013 01:00:37PM *  -1 points [-]

No it isn't?

Yes it is.

"A property of a system is said to be emergent if it is in some sense more than the "sum" of the properties of the system's parts. An emergent property is said to be dependent on some more basic properties (and their relationships and configuration), so that it can have no separate existence. However, a degree of independence is also asserted of emergent properties, so that they are not identical to, or reducible to, or predictable from, or deducible from their bases. The different ways in which the independence requirement can be satisfied lead to various sub-varieties of emergence." -- WP

I meant that your subjective anticipation of possible outcomes would be equal to the probability of those outcomes, maximizing both precision and accuracy.

Still determinism, not reductionism. In a universe where

*1a there are lower-level properties...

*1b operating according to a set of deterministic laws.

*2a There are also higher-level properties...

*2b irreducible to and unpredictable from the lower level properties and laws...

*2c which follow their own deterministic laws.

You would be able to predict the future with complete accuracy, given both sets of laws and two sets of starting conditions. Yet the universe being described is explicitly non-reductionistic.

Comment author: JohnWittle 21 March 2013 06:04:53AM *  0 points [-]

*2a There are also higher-level properties.. *2b irreducible to and unpredictable from the lower level properties and laws...

All this means is that, in addition to the laws which govern low-level interactions, there are different laws which govern high-level interactions. But they are still laws of physics; they just read like "when these particular particles are arranged in this particular manner, make them behave like this, instead of how the low-level properties say they should behave". Such laws are still fundamental laws, on the lowest level of the universe. They are still part of the code for reality.

But you are right:

unpredictable from lower level properties

Which is what I said:

That is what it means to posit reductionism; that from an information theoretical standpoint, you can make entirely accurate predictions about a system with only knowledge about its most basic [lowest] level of perspective.

Ergo, a reductionistic universe is also deterministic from a probabilistic standpoint, i.e. the lowest level properties and laws can tell you exactly what to anticipate, and with how much subjective probability.

Comment author: whowhowho 19 March 2013 06:52:02PM -1 points [-]

That is what it means to posit reductionism; that from an information theoretical standpoint, you can make entirely accurate predictions about a system with only knowledge about its most basic level of perspective.

That's a fusion of reductionism and determinism. Reductionism isn't necessarily false in an indeterministic universe. What is more pertinent is being able to predict higher-level properties and laws from lower-level properties and laws (synchronously, in the latter case).

Comment author: JohnWittle 20 March 2013 06:40:13AM 1 point [-]

No it isn't? I did not mean you would be able to make predictions which came true 100% of the time. I meant that your subjective anticipation of possible outcomes would be equal to the probability of those outcomes, maximizing both precision and accuracy.

Comment author: CCC 19 March 2013 10:26:05AM *  2 points [-]

The entire science of psychology is based on the idea that it is useful to apply high-level rules to the neural functioning of the human brain. If I decide to eat a cookie, then I explain it in high-level terms; I was hungry, the cookie smelt delicious. An analysis in terms of the effect of airborne particles originating from the cookie on my nasal passages, and subsequent alterations in the pattern of neural activations in my brain, can give a far more complicated answer to the question of why I ate the cookie; but, again, I don't see how such a more complicated analysis would be better. If I want to understand my motivations more fully, I can do so in terms of mental biases, subconscious desires, and so forth; rather than a neuron-by-neuron analysis of my own brain.

And while it is technically true that I, as a human, am man-made (specifically, that I was made by my parents), a similar argument could be raised for any animal.

Such situations are rare, but not entirely unknown.

In response to comment by CCC on The Level Above Mine
Comment author: JohnWittle 19 March 2013 05:39:40PM 2 points [-]

I disagree with your entire premise. I think we should pin down this concept of "levels of perspective" with some good jargon at some point, but regardless...

You can look at a computer from the level of perspective of "there are windows on the screen and I can move the mouse around. I can manipulate files on the hard drive with the mouse and the keyboard, and those changes will be reflected inside information boxes in the windows." This is the perspective most people see a computer from, but it is not a complete description of a computer (i.e. if someone unfamiliar with the concept of computers heard this description, they could not build a computer from base materials.)

You might also see the perspective: "There are many tiny dots of light on a flat surface, lit up in various patterns. Those patterns are caused by electricity moving in certain ways through silicon wires arranged in certain ways." This is, I think, one level lower, but an unfamiliar person could not build a computer from scratch from this description.

Another level down, the description might be: "There is a CPU, composed of hundreds of thousands of transistors arranged into logic gates, such that when electricity is sent through them you can perform meaningful calculations. These calculations are written using a specific instruction set ("assembly language"). The files are stored on a disk in binary, as many tiny magnetic regions each magnetized in one of two directions, representing 1 and 0 respectively. When the CPU needs to temporarily store a value useful in its calculations, it does so in RAM, which is like the disk except much faster and smaller. Some of the calculations are used to make certain square-shaped lights on a large flat surface blink in certain ways, which conveys arbitrary information to the user." We are getting to the point where an unfamiliar human might be able to recreate a computer from scratch, and therefore can be said to actually "understand" the system.

But still yet there are lower levels. Describing the actual logic gate organization in the CPU, the system used by RAM to store variables, how the magnetic needle accesses a specific bit on the hard drive by spinning it... All of these things must be known and understood in order to rebuild a computer from scratch.

Humans designed the computer at the level of "logic gates", "bits on a hard drive", "registers", etc., and so it is not necessary to go deeper than this to understand the entire system (just as you don't have to go deeper than "gears and cogs" to understand how a clock works, or deeper than classical physics, billiard balls bouncing into each other, to understand how a brain works).
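The logic-gate level can be made concrete with a toy sketch (my own illustration, not part of the original comment): a half-adder's high-level behavior, one-bit addition, is completely fixed by a single low-level rule, the NAND gate, plus wiring.

```python
def nand(a, b):
    # The single low-level rule: output 0 only when both inputs are 1.
    return 0 if (a and b) else 1

# Higher-level gates are nothing but arrangements of NANDs.
def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def half_adder(a, b):
    # The "high-level" behavior (one-bit addition) is entirely
    # determined by the low-level NAND rule plus the wiring above.
    return xor(a, b), and_(a, b)  # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

Nothing about addition had to be specified separately at the high level; it falls out of the low-level rule and the organization, which is itself just more low-level structure.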

But I hope that it's clear that the mechanisms at the lower levels of a system completely contain within them the behavior of the higher levels of the system. There are no new behaviors which you can only learn about by studying the system from a higher level of perspective; those complicated upper-level behaviors are entirely formed by the simple lower-level mechanisms, all the way down to the wave function describing the entire universe.

That is what reductionism means. If you know the state of the entire wavefunction describing the universe, you know everything there is to know about the universe. You could use it to predict that, in some Everett branches, the assassination of Franz Ferdinand on the third planet from the star Sol in the Milky Way galaxy would cause a large war on that planet. You could use it to predict the exact moment at which any particular "slice" of the wavefunction (representing a particular possible universe) will enter its maximum-entropy state. You could use it to predict any possible behavior of anything, and you will never be surprised. That is what it means to say that all of reality reduces down to the base-level physics. That is what it means to posit reductionism; that from an information theoretical standpoint, you can make entirely accurate predictions about a system with only knowledge about its most basic level of perspective.

If you can demonstrate to me that there is some organizational structure of matter which causes that matter to behave differently from what would be predicted by just looking at the matter in question without considering its organization (which would require, by the way, all of reality to keep track not only of mass and of velocity but also of its organizational structure relative to nearby reality), then I will accept such a demonstration as being a complete and utter refutation of reductionism. But there is no such behavior.

Comment author: JohnWittle 19 March 2013 03:15:37AM 2 points [-]

An excellent example of a published paper against reductionism, using "emergence" in exactly this way such that it is indiscernible from "magic", is here:

http://philsci-archive.pitt.edu/3866/1/Tilburg_submission_fin.pdf

Comment author: whowhowho 17 March 2013 04:41:29PM 0 points [-]

The "absurdity" of non-reductionism seems to have evaded Robert Laughlin, Jaron Lanier and a bunch of other smart people.

Comment author: JohnWittle 18 March 2013 09:00:45PM *  3 points [-]

I did not say that non-reductionism is absurd. I said that "recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs".

Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.

Can you explain to me how it might work?

Edit: I googled "Robert Laughlin Reductionism" and actually found a longish paper he wrote about reductionism and his beliefs. I have some criticisms:

Who are to enact that the laws governing the behavior of particles are more ultimate than the transcendent, emergent laws of the collective they generate, such as the principles of organization responsible for emergent behavior? According to the physicist George F. R. Ellis true complexity emerges as higher levels of order from, but to a large degree independent of, the underlying low-level physics. Order implies higher-level systemic organization that has real top-down effects on the behavior of the parts at the lower level. Organized matter has unique properties (Ellis 2004).

Yudkowsky has a great refutation of using the description "emergent" to describe phenomena, in The Futility of Emergence. From there:

I have lost track of how many times I have heard people say, "Intelligence is an emergent phenomenon!" as if that explained intelligence. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that intelligence is "emergent"? You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts—there's no detailed internal model to manipulate. Those who proffer the hypothesis of "emergence" confess their ignorance of the internals, and take pride in it; they contrast the science of "emergence" to other sciences merely mundane.

And even after the answer of "Why? Emergence!" is given, the phenomenon is still a mystery and possesses the same sacred impenetrability it had at the start.

Further down in the paper, we have this:

They point to higher organizing principles in nature, e.g. the principle of continuous symmetry breaking, localization, protection, and self-organization, that are insensitive to and independent of the underlying microscopic laws and often solely determine the generic low-energy properties of stable states of matter (‘quantum protectorates’) and their associated emergent physical phenomena. “The central task of theoretical physics in our time is no longer to write down the ultimate equations but rather to catalogue and understand emergent behavior in its many guises, including potentially life itself. We call this physics of the next century the study of complex adaptive matter” (Laughlin and Pines 2000).

Every time he makes the specific claim that reductionism makes worse predictions than a belief in "emergent phenomenon" in which "organizational structure" is an additional property that all of reality must have, in addition to "mass" and "velocity", he cites himself for this. He also does not provide any evidence for non-reductionism over reductionism; that is, he cannot name a single prediction where non-reductionism was right, and reductionism was wrong.

He goes on to say that reductionism is popular because you can always examine a system by looking at its internal mechanisms, but you can't always examine a system by looking at it from a "higher" perspective. A good example, he says, is the genetic code: to assume that DNA is actually a complete algorithmic description of how to build a human body is an illogical conclusion, according to him.

He would rather suppose that the universe contains rules like "When a wavefunction contains these particular factorizations which happen not to cancel out, in a certain organizational structure, use a different mechanism to decide possible outcomes instead of the normal mechanism" than suppose that the laws of physics are consistent throughout and contain no such special cases. From the standpoint of simplicity, reductionism is simpler than non-reductionism, since non-reductionism is the same thing as reductionism except with the addition of special cases.

He specifically objects that reductionism isn't always the "most complete" description of a given phenomenon; that elements of a given phenomenon "cannot be explained" by looking at the underlying mechanism of that phenomenon.

I think this is nonsense. Even supposing that the laws of physics contain special cases for things like creating a human body out of DNA, or for things like consciousness, then in order for such special case exceptions to actually be implemented by the universe, they must be described in terms of the bottom-most level. Even if a DNA strand is not enough information to create a human being, and the actual program which creates the human being is hard coded into the universe, the object that the program must manipulate is still the most basic element of reality, the wavefunction, and therefore the program must specify how certain amplitude configurations must evolve, and therefore the program must describe reality on the level of quarks.

This is still reductionism, it is just reductionism with the assumed belief that the laws of physics were designed such that certain low-level effects would take place if certain high-level patterns came about in the wavefunction.

This is the only coherent way I could possibly imagine consciousness being an "emergent phenomenon", or the creation of a human body from the blueprints of DNA being impossible without additional information. Do you suppose Laughlin was saying something else?

At first, when I read EY's "The Futility of Emergence" article, I didn't understand it. It seemed to me that there was no way people actually think of "emergence" as a scientific explanation for why a phenomenon occurs, such that you could not predict the phenomenon even if you knew how every piece of the system worked individually. I didn't think it possible that anyone would actually claim that knowing how all of the gears in a clock work doesn't let you predict what the clock will say from the positions of the gears (for sufficiently "complex" clocks). And so I thought that EY was jumping the gun in this fight.

But perhaps he read this very paper, because Laughlin uses the phrase "emergent phenomenon" to describe behavior he doesn't understand, as if that were an explanation. Even though you can't use this piece of information to make any predictions about reality. Even though it doesn't constrain your anticipation to fewer possibilities, which is what real knowledge does. He uses the word as a substitute for "magic": he does not know how an extremely complex phenomenon works, and so he supposes that the actual mechanism is not enough to fully explain it, that additional aspects of the phenomenon are simply uncaused, or that there is a special-case exclusion in the universe's laws for the phenomenon.

He does not explore the logical implications of this belief: that some aspects of a phenomenon have no causal mechanism, and therefore could not possibly have been predicted. He claims that a hypothetical Theory of Everything would not be able to explain some of the things we find interesting about certain phenomena. Does he believe that if we programmed a physics simulator with the correct Theory of Everything and fed it the boundary conditions of the universe, that simulated universe would not look exactly like our own? That the first time DNA occurred on Earth in the simulated universe, it would not be able to create life (unlike in our universe), because we didn't include in the laws of physics a special clause saying that when you have DNA, interpret it and then tell the quarks to move differently from how they otherwise would?

I believe that DNA contains real instructions for how to construct an entire human from start to finish. I don't think the laws of physics contain such a clause.

I read the whole paper by Laughlin and I was unimpressed. If this is the best argument against reductionism, then reductionism is undoubtedly the winner. You called Laughlin a "smart person", but he isn't smart enough to realize that calling the creation of humans from DNA an "emergent phenomenon" is literally equivalent to calling it a "magic phenomenon", in that it doesn't limit your anticipation of what could happen. If you can equally explain every possible outcome, you have no knowledge...
