diegocaleiro comments on The Level Above Mine - Less Wrong
You are the brightest person I know. And I know Dan Dennett, Max Tegmark, Robert Trivers, Marcello, Minsky, Pinker and Omohundro.
Unfortunately, those are non-math geniuses, so that speaks only for some sub-areas of cognition which, being less strictly categorizable than the clearly scalable domain of math, are not subject to your proposed rule that "one standard deviation above you they blur".
"Know" in the sense EY used it != have read, watched interviews, etc.
I took it to mean more personal interaction (even if through comments online).
Especially since "know of" exists as a common phrase to cover the meaning "have read, watched interviews, etc."
I have had classes with them, asked questions, and met them personally. I should have anticipated disbelief. And yes, I didn't notice that I categorized Marcello as non-math; sorry, Marcello!
Oh. Cool! Less disbelief, more illusion of transparency.
If a randomly selected person says, "I know X (academically) famous people," I myself usually assume they mean through impersonal means.
Update'd. Carry on :D
Non-math geniuses who grok and advocate for unpopular reductionism are in one sense greater than mere superheroes who know the math.
In another sense, non-math geniuses advocating for reductionism are no better than the anti-vaccine lobby.
What sense is that?
The sense in which they did not come about their beliefs by starting with sane priors which did not presuppose reductionism, and then updating on evidence until they independently discovered reductionism.
I disagree with the grandparent, however. I believe that (most) non-math-geniuses advocating for reductionism are more akin to Einstein believing in General Relativity before any novel predictions had been verified: recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs.
The "absurdity" of non-reductionism seems to have evaded Robert Laughlin, Jaorn Lanier and a bunch of other smart people.
I did not say that non-reductionism is absurd. I said that "recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs".
Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.
Can you explain to me how it might work?
Edit: I googled "Robert Laughlin Reductionism" and actually found a longish paper he wrote about reductionism and his beliefs. I have some criticisms:
Yudkowsky has a great refutation of using the description "emergent" to describe phenomena, at The Futility of Emergence.
Further down in the paper, every time he makes the specific claim that reductionism makes worse predictions than a belief in "emergent phenomena", in which "organizational structure" is an additional property that all of reality must have in addition to "mass" and "velocity", he cites himself. He also does not provide any evidence for non-reductionism over reductionism; that is, he cannot name a single prediction where non-reductionism was right and reductionism was wrong.
He goes on to say that reductionism is popular because you can always examine a system by looking at its internal mechanisms, but you can't always examine a system by looking at it from a "higher" perspective. A good example, he says, is the genetic code: to assume that DNA is actually a complete algorithmic description of how to build a human body is, according to him, an illogical conclusion.
He would rather suppose that the universe contains rules like "When a wavefunction contains these particular factorizations which happen not to cancel out, in a certain organizational structure, use a different mechanism to decide possible outcomes instead of the normal mechanism" than suppose that the laws of physics are consistent throughout and contain no such special cases. From the standpoint of simplicity, reductionism is simpler than non-reductionism, since non-reductionism is the same thing as reductionism except with the addition of special cases.
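To make that simplicity comparison concrete, here is a toy sketch in Python (my illustration, with a made-up trigger pattern and override; nothing here comes from Laughlin's paper or any real physics). The non-reductionist rule necessarily contains the reductionist rule as a component, plus extra machinery:

```python
# Toy "universe": a ring of cells evolved by one uniform local rule.

def step_reductionist(state):
    # One low-level rule applied everywhere, with no exceptions.
    n = len(state)
    return [(state[i - 1] + state[(i + 1) % n]) % 2 for i in range(n)]

def step_nonreductionist(state):
    # The same low-level rule, plus a special case that fires when a
    # particular high-level pattern ("organizational structure") appears.
    if state[:3] == [1, 1, 1]:   # hypothetical high-level trigger
        return [0] * len(state)  # hypothetical top-down override
    return step_reductionist(state)
```

By any description-length measure, the second rule can only be more complex than the first, which is the simplicity argument in miniature.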
He specifically objects that reductionism isn't always the "most complete" description of a given phenomenon; that elements of a given phenomenon "cannot be explained" by looking at the underlying mechanism of that phenomenon.
I think this is nonsense. Even supposing that the laws of physics contain special cases for things like creating a human body out of DNA, or for things like consciousness, in order for such special-case exceptions to actually be implemented by the universe, they must be described in terms of the bottom-most level. Even if a DNA strand is not enough information to create a human being, and the actual program which creates the human being is hard-coded into the universe, the object that program manipulates is still the most basic element of reality, the wavefunction; so the program must specify how certain amplitude configurations evolve, and must therefore describe reality at the level of quarks.
This is still reductionism, it is just reductionism with the assumed belief that the laws of physics were designed such that certain low-level effects would take place if certain high-level patterns came about in the wavefunction.
This is the only coherent way I could possibly imagine consciousness being an "emergent phenomenon", or the creation of a human body from the blueprints of DNA being impossible without additional information. Do you suppose Laughlin was saying something else?
At first, when I read EY's "The Futility of Emergence" article, I didn't understand it. It seemed to me that there was no way people actually think of "emergence" as a scientific explanation for how a phenomenon occurs, such that you could not have predicted the phenomenon even if you knew how every piece of the system worked individually. I didn't think it possible that anyone would actually believe that knowing how all of the gears in a clock work doesn't let you predict what the clock will say from the positions of the gears (for sufficiently "complex" clocks). And so I thought that EY was jumping the gun in this fight.
But perhaps he read this very paper, because Laughlin uses the term "emergent phenomenon" to describe behavior he doesn't understand, as if that were an explanation of the phenomenon. Even though you can't use this piece of information to make any predictions about how reality is. Even though it doesn't constrain your anticipation into fewer possibilities, which is what real knowledge does. He uses this term as a substitute for "magic": he does not know how an extremely complex phenomenon works, and so he supposes that the actual mechanism is not enough to fully explain the phenomenon, that additional aspects of it are simply uncaused, or that there is a special-case exclusion in the universe's laws for it.
He does not explore the logical implications of this belief: that some aspects of a phenomenon have no causal mechanism, and therefore could not possibly have been predicted. He makes the claim that a hypothetical Theory of Everything would not be able to explain some of the things we find interesting about some phenomena. Does he believe that if we programmed a physics simulator with the Correct Theory of Everything, and fed it the boundary conditions of the universe, that simulated universe would not look exactly like our universe? That the first time DNA occurred on earth in that simulated universe, it would not be able to create life (unlike in our universe), because we didn't include in the laws of physics a special clause saying that when you have DNA, interpret it and then tell the quarks to move differently from how they otherwise would?
I believe that DNA contains real instructions for how to construct an entire human from start to finish. I don't think the laws of physics contain such a clause.
I read the whole paper by Laughlin and I was unimpressed. If this is the best argument against reductionism, then reductionism is undoubtedly the winner. You called Laughlin a "smart person", but he isn't smart enough to realize that calling the creation of humans from DNA an "emergent phenomenon" is literally equivalent to calling it a "magic phenomenon", in that it doesn't limit your anticipation of what could happen. If you can equally explain every possible outcome, you have no knowledge...
It's a bit of an aside to your main point, but there are good arguments to support the assertion that DNA is only a partial recipe for an organism, such as a human. The remaining information is present in the environment of the mother's womb in other forms; for example, where there's an ambiguity in the DNA with regard to the folding of a certain protein, other proteins present in the womb may correct any incorrectly folded samples.
To look at your main point; if I were to present an argument against reductionism, I would point to the personal computer. This is a device constructed in order to run software; that is, to follow a list of instructions that manipulate binary data. Once you have a list of all the instructions that the computer can follow, and what these instructions do, a thorough electrical analysis of the computer's circuitry isn't going to provide much new information; and it will be a lot more complicated, and harder to understand. There's a conceptual point, there, at the level of individual software instructions, where further reductionism doesn't help to understand the phenomenon, and does make the analysis more complicated, and harder to work with.
A thorough electrical analysis is, of course, useful if one wishes to confirm that the stated behaviour of the basic software commands is both correctly stated, and free of unexpected side-effects. However, an attempt to describe (say) the rendering of a JPEG image in terms of which transistors are activated at which point is likely a futile exercise.
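A minimal sketch of that abstraction point, assuming a made-up three-instruction machine (not any real instruction set): once the instruction semantics are specified, the machine's behavior is fully predictable with no circuit-level analysis at all.

```python
# Hypothetical toy machine: three instructions acting on one register.
# Knowing this table alone (nothing about transistors) suffices to
# predict everything the machine will do.

def run(program, x=0):
    for op, arg in program:
        if op == "SET":
            x = arg
        elif op == "ADD":
            x += arg
        elif op == "MUL":
            x *= arg
    return x

# Prediction from instruction semantics only: 3 * 4 + 2 = 14.
print(run([("SET", 3), ("MUL", 4), ("ADD", 2)]))  # -> 14
```

An electrical analysis could confirm that ADD behaves as stated, but at this level of description it adds complexity rather than predictive power.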
Well, yes - but that arises from the fact that such devices are man-made, and (out of respect to our brains' limitations) designed to isolate the layers of explanation from one another - to obviate the need for a fully reductionistic account. The argument will not apply to things not man-made.
The entire science of psychology is based on the idea that it is useful to apply high-level rules to the neural functioning of the human brain. If I decide to eat a cookie, then I explain it in high-level terms; I was hungry, the cookie smelt delicious. An analysis in terms of the effect of airborne particles originating from the cookie on my nasal passages, and subsequent alterations in the pattern of neural activations in my brain, can give a far more complicated answer to the question of why I ate the cookie; but, again, I don't see how such a more complicated analysis would be better. If I want to understand my motivations more fully, I can do so in terms of mental biases, subconscious desires, and so forth; rather than a neuron-by-neuron analysis of my own brain.
And while it is technically true that I, as a human, am man-made (specifically, that I was made by my parents), a similar argument could be raised for any animal.
Such situations are rare, but not entirely unknown.
Not true. There is a reason no one uses quarks to describe chemistry. It's futile to describe what's happening in superfluid helium in terms of individual particle movement. Far better to use a two-fluid model, and vortices.
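For reference, the two-fluid model alluded to here is the standard Tisza-Landau phenomenology (the equation is my summary, not part of the comment), which treats the liquid as two interpenetrating components:

```latex
% He II as two interpenetrating fluids: a viscous normal component and
% an inviscid, zero-entropy superfluid component.
\rho = \rho_n(T) + \rho_s(T), \qquad \frac{\rho_s}{\rho} \to 1 \quad \text{as } T \to 0
```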
As an aside to an aside, I wonder how much information about the DNA reading frame could in principle be extracted from the DNA of a female organism, given the knowledge (or the assumption) that mature females can gestate a zygote? Almost all possible reading frames would be discardable on the grounds that the resulting organism would not be able to gestate a zygote, of course, but I don't have any intuitive sense of how big the remaining search space would be.
And as a nod towards staying on topic:
Well, it will, and it won't.
If what I mostly care about is the computer's behavior at the level of instructions, then sure, understanding the instructions gets me most of the information that I care about. Agreed.
OTOH, if what I mostly care about is the computer's behavior at the level of electrical flows through circuits (for example, if I'm trying to figure out how to hack the computer without an input device by means of electrical induction, or confirm that it won't catch fire in ordinary use), then a thorough electrical analysis of the computer's circuitry provides me with tons of indispensable new information.
What counts as "information" in a colloquial sense depends a lot on my goals. It might be useful to taboo the word in this discussion.
My intuition says "very, very big". Consider: depending on womb conditions, the fraction of the baby's information that is encoded in the DNA might change. As an extreme example, consider a female creature whose womb completely ignores the DNA of the zygote, creating instead a perfect clone of the mother. Such an example makes it clear that the search space is at least as large as the number of possible female creatures that are able to produce a perfect clone of themselves.
I accept your point. Such an analysis does provide a more complete view of the computer, which is useful in some circumstances.
I don't think you understand Laughlin's point at all. Compare a small volume of superfluid liquid helium and a small volume of water with some bacteria in it. Both systems have the exact same Hamiltonian, and both systems have roughly the same amount of the same constituents (protons, neutrons, electrons), but the systems behave vastly differently. We can't understand their differences by going to a lower level of description.
Modern materials science/solid-state physics is the study of the tremendous range of different, complex behaviors that can arise from the same Hamiltonians. Things like spontaneous symmetry breaking are rigorously defined, well-observed phenomena that depend on aggregate, not individual, behavior.
Why didn't he mention superfluidity, or solid state physics, then? The two examples he listed were consciousness not being explainable from a reductionist standpoint, and DNA not containing enough information to come anywhere near being a complete instruction set for building a human (wrong).
Also, I'm pretty sure that the superfluid tendencies of liquid helium-4 come from the fact that it is composed of six particles (two protons, two neutrons, two electrons), each with half-integer spin. Because you can't make six halves add up to anything other than a whole number, quantum effects mean that the atoms can all occupy exactly the same state and are utterly indistinguishable, even positionally, and that's what causes the strange effects. I do not know exactly how this effect reduces down to individual behavior, since I don't know what "individual behavior" could mean for particles which cannot be positionally distinguished, but saying that superfluid helium-4 and water have the exact same Hamiltonian is not enough to say that they should have the same properties.
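Spelling out the spin arithmetic behind that claim (standard textbook quantum mechanics, my addition): coupling an even number of spin-1/2 constituents always yields an integer total spin, so the composite atom obeys Bose statistics.

```latex
% Helium-4: 2 protons + 2 neutrons + 2 electrons, each spin 1/2.
% Six half-integer spins can only couple to integer totals:
S_{\mathrm{tot}} \in \{0, 1, 2, 3\} \subset \mathbb{Z},
% so the atom is a boson, and many atoms can occupy a single quantum
% state (Bose-Einstein condensation), which underlies superfluidity.
```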
Spontaneous symmetry breaking can be reduced down to quantum mechanics. You might solve a field equation and find that there are two different answers as to the mass of two quarks. In one answer, quark A is heavier than quark B; in the other answer, quark B is heavier than quark A. You might call this symmetry breaking, but the fact that you get one of the answers and not the other when you take the measurement does not mean that the symmetry was broken. The model correctly tells you to anticipate either answer with 1:1 odds, and you'll find that your measurements agree with this: 50% of the time you'll get the first measurement, and 50% of the time you'll get the second. In the MW interpretation, symmetry is not broken. The measurement doesn't show what really happened, it just shows which branch of the wavefunction you ended up in. Across the entire wavefunction, symmetry is preserved.
Besides, it's not like spontaneous symmetry breaking is a behavior which arises out of the organization of the particles. It occurs at the individual level.
I don't know why Laughlin wrote what he did; you didn't link to the paper. However, he comes from a world where solid state physics is obvious, and "everyone knows" various things (emergent properties of superfluid helium, for instance). Remember, a solid state physicist's point of reference is quite different from a non-specialist's, so there is a huge inferential distance. Also remember that in physics, "emergent" is a technical, defined concept.
Your explanation of superfluid helium isn't coherent, and I had a book-length post typed up when a simpler argument presented itself. Water with bacteria and liquid helium have the same Hamiltonian AND the same constituent particles. If I give you a box and say "in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons," you do not have enough information to tell me how the system behaves, but from a purely reductionist standpoint, you should. If this doesn't sway you, let's agree to disagree, because I think spontaneous symmetry breaking should be enough to make my point, and it's easier to explain.
I don't think you understand what spontaneous symmetry breaking is; I have very little idea what you are talking about. Let's ignore quantum mechanics for the time being, because we can describe what's happening on an entirely classical level. Spontaneous symmetry breaking arises when the Hamiltonian has a symmetry that the aggregate ground state does not. That's the whole definition, and BY DEFINITION it depends on details of the aggregate ground state and the organization of the particles.
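The textbook illustration of that definition (my example, not part of the comment) is the classical double-well potential, whose energy is symmetric under a sign flip of the field while its ground states are not:

```latex
% Energy invariant under \phi -> -\phi:
V(\phi) = -\mu^2 \phi^2 + \lambda \phi^4, \qquad \mu^2, \lambda > 0,
% but the minima lie away from the symmetric point \phi = 0:
\phi_{\pm} = \pm\sqrt{\mu^2 / (2\lambda)},
% so any actual ground state picks one minimum, breaking the symmetry
% that the Hamiltonian itself possesses.
```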
And finally, you can rigorously prove via renormalization group methods that in many systems the high-energy degrees of freedom can be averaged out entirely and have no effect on the form of the low-energy theory. In such systems (most theories), the details of the microphysics literally do not matter for describing low-energy structures. Computational physicists use this to their advantage all the time: if they want to look at meso- or macro-scale physics, they assume very simple micromodels that are easy to simulate, instead of realistic ones, and are fully confident they will get the same meso- and macro-scale results.
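A baby version of that point as a runnable sketch (using the central limit theorem as a stand-in for an actual renormalization-group calculation, which this is not): two quite different micro-step distributions with matched variance yield statistically identical large-scale random walks.

```python
# Two micro-models with the same per-step variance (= 1): coin-flip
# steps vs. uniform steps. At the macro scale both walks show the same
# diffusive behavior (variance grows like the number of steps), so the
# microscopic details wash out, in the spirit of coarse-graining.
import random

def walk(n_steps, micro_step):
    return sum(micro_step() for _ in range(n_steps))

micro_models = {
    "coin":    lambda: random.choice([-1.0, 1.0]),
    "uniform": lambda: random.uniform(-3 ** 0.5, 3 ** 0.5),  # variance 1
}

for name, micro in micro_models.items():
    xs = [walk(1000, micro) for _ in range(2000)]
    var = sum(x * x for x in xs) / len(xs)
    print(name, round(var))  # both approximately 1000
```

Any micro-model with the same step variance would do; only that one aggregate number survives the coarse-graining.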
I did not say that non-reductionism is absurd. I said that "recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs".
Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.
One formulation of reductionism is that natural laws can be ordered in a hierarchy, with the higher-level laws being predictable from, or reducible to, the lower ones. Emergentism, in the cognate sense, is reductionism not working: that stack of laws failing to collapse down to the lowest level.
There are two claims there: one contentious, one not. That there are multiply-realisable, substrate-independent higher-level laws is not contentious. For instance, wave equations have the same form for water waves, sound waves and so on. The contentious claim is that this is ipso facto top-down causation. Substrate-independent laws are still reducible to substrates, because they are predictable from the behaviour of their substrates.
I don't see how that refutes the above at all. For one thing, Laughlin and Ellis do have detailed examples of emergent laws (in their rather weak sense of "emergent"). For another, they are not calling on emergence itself as doing any explaining. "Emergence isn't explanatory" doesn't refute "emergence is true". For a third, I don't see any absurdity here. I see a one-word-must-have-one-meaning assumption that is clouding the issue. But where a problem is so fuzzily defined that it is hard even to identify the "sides", one can't say that one side is "absurd".
Neither are supposed to make predictions. Each can be considered a methodology for finding laws, and it is the laws that do the predicting. Each can also be seen as a meta-level summary of the laws so far found.
EY can't do that for MWI either. Maybe it isn't all about prediction.
That's robustly true. Genetic code has to be interpreted by a cellular environment. There are no self-decoding codes.
Reductionism is an approach that can succeed or fail. It isn't true a priori. If reductionism failed, would you say that we should not even contemplate non-reductionism? Isn't that a bit like Einstein's stubborn opposition to QM?
I suppose you mean that the reductionistic explanation isn't always the most complete explanation... well, everything exists in a context.
There is no a priori guarantee that such an explanation will be complete.
That isn't the emergentist claim at all.
Why? Because you described them as "laws of physics"? An emergentist wouldn't. Your objections seem to assume that some kind of reductionism+determinism combination is true in the first place. That's just gainsaying the emergentist claim.
If there is top-down causation, then its laws must be couched in terms of lower-level AND higher-level properties. And are therefore not reductionistic. You seem to be tacitly assuming that there are no higher-level properties.
Cross-level laws aren't "laws of physics". Emergentists may need to assume that microphysical laws have "elbow room", in order to avoid overdetermination, but that isn't obviously wrong or absurd.
As it happens, no-one does. That objection was made in the most upvoted response to his article.
Can you predict qualia from brain-states?
Mechanisms have to break down into their components because they are built up from components. And emergentists would insist that that does not generalise.
Or as a hint about how to go about understanding them.
That's not what emergentism says at all.
That's an outcome you would get with common or garden indeterminism. Again: reductionism is NOT determinism.
What's supposed to be absurd there? Top-down causation, or top-down causation that only applies to DNA?
The arguments for emergence tend not to be good. Neither are the arguments against. A dispute about a poorly-defined distinction with poor arguments on both sides isn't a dispute where one side is "absurd".
Marcello is non-math?