What kind of proof would you accept?
One that addresses the conceptual issues. Consider epistemology. In order to resolve a philosophical issue about epistemology, you need to range over the various theories under consideration. You will also need to consider them from some viewpoint that allows you to judge their truth. That is, your epistemology needs to be both fixed and variable. Paradoxes and circularities of this kind make it seem inevitable that philosophy will be highly problematic.
How will Pearl and Kahneman help? I can't see it. It seems to me that proposals to simplify philosophy always involve sweeping these Hard Problems conveniently away; they consist, in a sense, of not doing philosophy, or at least not doing such difficult philosophy.
Please provide proof. Please don't point, yet again, to the highly debatable "solution" to FW.
Another example of schools proliferating without evidence: philosophy. Consider all the different schools of ethics that have sprung up: utilitarian ethics, deontological ethics, and virtue ethics, with vast numbers of subcategorizations under each school.
Philosophers are more susceptible to this failure mode because, on many important philosophical questions, a standard if not unanimous approach is to argue that the question cannot be answered by evidence. Modal logicians trying to do metaphysics are one example.
Is any of that avoidable?
All possible. However, if you can explain anything, the explanation counts for nothing. The question is which explanation is the most likely, and "there is evidence for fair-mindedness (but it is mostly fake!)" is more contrived than "there is evidence for fair-mindedness", as an explanation for the upvotes of OP.
Yeah. But there's also evidence of unfair-mindedness.
If you being downvoted is the result of LW ruthlessly suppressing dissent of all kind, how do you explain this post by Holden Karnofsky getting massively upvoted?
E.g.:
It's not being upvoted by regulars/believers. It's a magnet for dissidents, and transient visitors with negative perceptions of SI.
It's high-profile, so it needs to be upvoted to put on a show of fair-mindedness.
I did not say that non-reductionism is absurd. I said that "recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs".
Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.
Can you explain to me how it might work?
Edit: I googled "Robert Laughlin Reductionism" and actually found a longish paper he wrote about reductionism and his beliefs. I have some criticisms:
Who are to enact that the laws governing the behavior of particles are more ultimate than the transcendent, emergent laws of the collective they generate, such as the principles of organization responsible for emergent behavior? According to the physicist George F. R. Ellis true complexity emerges as higher levels of order from, but to a large degree independent of, the underlying low-level physics. Order implies higher-level systemic organization that has real top-down effects on the behavior of the parts at the lower level. Organized matter has unique properties (Ellis 2004).
Yudkowsky has a great refutation of using the description "emergent" to describe phenomena, in The Futility of Emergence. From there:
I have lost track of how many times I have heard people say, "Intelligence is an emergent phenomenon!" as if that explained intelligence. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that intelligence is "emergent"? You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts—there's no detailed internal model to manipulate. Those who proffer the hypothesis of "emergence" confess their ignorance of the internals, and take pride in it; they contrast the science of "emergence" to other sciences merely mundane.
And even after the answer of "Why? Emergence!" is given, the phenomenon is still a mystery and possesses the same sacred impenetrability it had at the start.
Further down in the paper, we have this:
They point to higher organizing principles in nature, e.g. the principle of continuous symmetry breaking, localization, protection, and self-organization, that are insensitive to and independent of the underlying microscopic laws and often solely determine the generic low-energy properties of stable states of matter (‘quantum protectorates’) and their associated emergent physical phenomena. “The central task of theoretical physics in our time is no longer to write down the ultimate equations but rather to catalogue and understand emergent behavior in its many guises, including potentially life itself. We call this physics of the next century the study of complex adaptive matter” (Laughlin and Pines 2000).
Every time he makes the specific claim that reductionism makes worse predictions than a belief in "emergent phenomenon" in which "organizational structure" is an additional property that all of reality must have, in addition to "mass" and "velocity", he cites himself for this. He also does not provide any evidence for non-reductionism over reductionism; that is, he cannot name a single prediction where non-reductionism was right, and reductionism was wrong.
He goes on to say that reductionism is popular because you can always examine a system by looking at its internal mechanisms, but you can't always examine a system by looking at it from a "higher" perspective. A good example, he says, is the genetic code: to assume that DNA is actually a complete algorithmic description of how to build a human body is an illogical conclusion, according to him.
He would rather suppose that the universe contains rules like "When a wavefunction contains these particular factorizations which happen not to cancel out, in a certain organizational structure, use a different mechanism to decide possible outcomes instead of the normal mechanism" than suppose that the laws of physics are consistent throughout and contain no such special cases. From the standpoint of simplicity, reductionism is simpler than non-reductionism, since non-reductionism is the same thing as reductionism except with the addition of special cases.
He specifically objects that reductionism isn't always the "most complete" description of a given phenomenon; that elements of a given phenomenon "cannot be explained" by looking at the underlying mechanism of that phenomenon.
I think this is nonsense. Even supposing that the laws of physics contain special cases for things like creating a human body out of DNA, or for things like consciousness, then in order for such special-case exceptions to actually be implemented by the universe, they must be described in terms of the bottom-most level. Even if a DNA strand is not enough information to create a human being, and the actual program which creates the human being is hard-coded into the universe, the object that the program must manipulate is still the most basic element of reality, the wavefunction. The program must therefore specify how certain amplitude configurations evolve, and so it must describe reality at the level of quarks.
This is still reductionism, it is just reductionism with the assumed belief that the laws of physics were designed such that certain low-level effects would take place if certain high-level patterns came about in the wavefunction.
This is the only coherent way I could possibly imagine consciousness being an "emergent phenomenon", or the creation of a human body from the blueprints of DNA being impossible without additional information. Do you suppose Laughlin was saying something else?
At first when I read EY's "The Futility of Emergence" article, I didn't understand. It seemed to me that there's no way people actually think of "emergence" as being a scientific explanation for how a phenomenon occurs such that you could not predict that the phenomenon would occur if you know how every piece of the system worked individually. I didn't think it possible that anyone would actually think that knowing how all of the gears in a clock work doesn't mean you'll be able to predict what the clock will say based on the positions of the gears (for sufficiently "complex" clocks). And so I thought that EY was jumping the gun in this fight.
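The clock intuition can be put in code. A minimal sketch (my own toy model, with hypothetical gear ratios): if you know the low-level gear state, the high-level readout follows with nothing left over.

```python
# Toy model (my construction, not from the thread): a clock whose displayed
# time is a pure function of its low-level gear state. If reductionism holds
# for the clock, the gear positions suffice to predict the readout.

def read_clock(minute_gear_pos: int, hour_gear_pos: int) -> str:
    """Derive the display purely from gear positions.

    Assumes a hypothetical 60-position minute gear and 12-position hour gear.
    """
    return f"{hour_gear_pos % 12:02d}:{minute_gear_pos % 60:02d}"

print(read_clock(34, 7))  # prints "07:34" -- fully determined by the gears
```

The point of the sketch is that no "sufficiently complex" version of this function could fail to be predictable from its parts: the readout is defined in terms of them.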
But perhaps he read this very paper, because Laughlin uses the word "emergent phenomenon" to describe behavior he doesn't understand, as if that's an explanation for the phenomenon. Even though you can't use this piece of information to make any predictions as to how reality is. Even though it doesn't constrain your anticipation into fewer possibilities, which is what real knowledge does. He uses this word as a substitute for "magic"; he does not know how an extremely complex phenomenon works, and so he supposes that the actual mechanism for the phenomenon is not enough to fully explain the phenomenon, that additional aspects of the phenomenon are simply uncaused, or that there is a special-case exclusion in the universe's laws for the phenomenon.
He does not explore the logical implications of this belief: that some aspects of a phenomenon have no causal mechanism, and therefore could not possibly have been predicted. He makes the claim that a hypothetical Theory of Everything would not be able to explain some of the things we find interesting about some phenomena. Does he believe that if we programmed a physics simulator with the Correct Theory of Everything, and fed it the boundary conditions of the universe, then that simulated universe would not look exactly like our universe? That the first time DNA occurred on Earth, in that simulated universe, it would not be able to create life (unlike in our universe) because we didn't include in the laws of physics a special clause saying that when you have DNA, interpret it and then tell the quarks to move differently from how they would have?
I believe that DNA contains real instructions for how to construct an entire human from start to finish. I don't think the laws of physics contain such a clause.
I read the whole paper by Laughlin and I was unimpressed. If this is the best argument against reductionism, then reductionism is undoubtedly the winner. You called Laughlin a "smart person", but he isn't smart enough to realize that calling the creation of humans from DNA an "emergent phenomenon" is literally equivalent to calling it a "magic phenomenon", in that it doesn't limit your anticipation of what could happen. If you can equally explain every possible outcome, you have no knowledge...
I did not say that non-reductionism is absurd. I said that "recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs".
Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.
Can you explain to me how it might work?
One formulation of reductionism is that natural laws can be ordered in a hierarchy, with the higher-level laws being predictable from, or reducible to, the lower ones. So emergentism, in the cognate sense, not working would be that stack of laws failing to collapse down to the lowest level.
Who are to enact that the laws governing the behavior of particles are more ultimate than the transcendent, emergent laws of the collective they generate, such as the principles of organization responsible for emergent behavior? According to the physicist George F. R. Ellis true complexity emerges as higher levels of order from, but to a large degree independent of, the underlying low-level physics. Order implies higher-level systemic organization that has real top-down effects on the behavior of the parts at the lower level. Organized matter has unique properties (Ellis 2004).
There are two claims there: one contentious, one not. That there are multiply-realisable, substrate-independent higher-level laws is not contentious. For instance, wave equations have the same form for water waves, sound waves and so on. The contentious claim is that this is ipso facto top-down causation. Substrate-independent laws are still reducible to substrates, because they are predictable from the behaviour of their substrates.
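The wave-equation point can be made explicit: the same equation governs disparate media, with only the propagation speed $c$ (and the interpretation of $u$) depending on the substrate.

```latex
\[
  \frac{\partial^2 u}{\partial t^2} = c^2 \, \nabla^2 u
\]
% The same form holds whether u is air pressure (sound), surface height
% (water waves), or the displacement of a string; only c and the meaning
% of u change with the medium -- multiple realisability without any
% appeal to top-down causation.
```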
And even after the answer of "Why? Emergence!" is given, the phenomenon is still a mystery and possesses the same sacred impenetrability it had at the start.
I don't see how that refutes the above at all. For one thing, Laughlin and Ellis do have detailed examples of emergent laws (in their rather weak sense of "emergent"). For another, they are not calling on emergence itself as doing any explaining. "Emergence isn't explanatory" doesn't refute "emergence is true". For a third, I don't see any absurdity here. I see a one-word-must-have-one-meaning assumption that is clouding the issue. But where a problem is so fuzzily defined that it is hard even to identify the "sides", one can't say that one side is "absurd".
Every time he makes the specific claim that reductionism makes worse predictions than a belief in "emergent phenomenon" in which "organizational structure" is an additional property that all of reality must have, in addition to "mass" and "velocity", he cites himself for this.
Neither is supposed to make predictions. Each can be considered a methodology for finding laws, and it is the laws that do the predicting. Each can also be seen as a meta-level summary of the laws so far found.
He also does not provide any evidence for non-reductionism over reductionism; that is, he cannot name a single prediction where non-reductionism was right, and reductionism was wrong.
EY can't do that for MWI either. Maybe it isn't all about prediction.
A good example, he says, is genetic code: to assume that dna is actually a complete algorithmic description of how to build a human body is an illogical conclusion, according to him.
That's robustly true. Genetic code has to be interpreted by a cellular environment. There are no self-decoding codes.
He would rather suppose that the universe contains rules like "When a wavefunction contains these particular factorizations which happen not to cancel out, in a certain organizational structure, use a different mechanism to decide possible outcomes instead of the normal mechanism" than suppose that the laws of physics are consistent throughout and contain no such special cases. From the standpoint of simplicity, reductionism is simpler than non-reductionism, since non-reductionism is the same thing as reductionism except with the addition of special cases.
Reductionism is an approach that can succeed or fail. It isn't true a priori. If reductionism failed, would you say that we should not even contemplate non-reductionism? Isn't that a bit like Einstein's stubborn opposition to QM?
He specifically objects that reductionism isn't always the "most complete" description
I suppose you mean that the reductionistic explanation isn't always the most complete explanation... well, everything exists in a context.
of a given phenomenon; that elements of a given phenomenon "cannot be explained" by looking at the underlying mechanism of that phenomenon.
There is no a priori guarantee that such an explanation will be complete.
I think this is nonsense. Even supposing that the laws of physics contain special cases for things like creating a human body out of DNA, or for things like consciousness,
That isn't the emergentist claim at all.
then in order for such special case exceptions to actually be implemented by the universe, they must be described in terms of the bottom-most level.
Why? Because you described them as "laws of physics"? An emergentist wouldn't. Your objections seem to assume that some kind of reductionism-plus-determinism combination is true in the first place. That's just gainsaying the emergentist claim.
Even if a DNA strand is not enough information to create a human being, and the actual program which creates the human being is hard coded into the universe, the object that the program must manipulate is still the most basic element of reality, the wavefunction, and therefore the program must specify how certain amplitude configurations must evolve, and therefore the program must describe reality on the level of quarks.
If there is top-down causation, then its laws must be couched in terms of lower-level AND higher-level properties. And are therefore not reductionistic. You seem to be tacitly assuming that there are no higher-level properties.
This is still reductionism, it is just reductionism with the assumed belief that the laws of physics were designed such that certain low-level effects would take place if certain high-level patterns came about in the wavefunction.
Cross-level laws aren't "laws of physics". Emergentists may need to assume that microphysical laws have "elbow room", in order to avoid overdetermination, but that isn't obviously wrong or absurd.
At first when I read EY's "The Futility of Emergence" article, I didn't understand. It seemed to me that there's no way people actually think of "emergence" as being a scientific explanation for how a phenomenon occurs
As it happens, no one does. That objection was made in the most upvoted response to his article.
such that you could not predict that the phenomenon would occur if you know how every piece of the system worked individually.
Can you predict qualia from brain-states?
I didn't think it possible that anyone would actually think that knowing how all of the gears in a clock work doesn't mean you'll be able to predict what the clock will say based on the positions of the gears (for sufficiently "complex" clocks).
Mechanisms have to break down into their components because they are built up from components. And emergentists would insist that that does not generalise.
But perhaps he read this very paper, because Laughlin uses the word "emergent phenomenon" to describe behavior he doesn't understand, as if that's an explanation for the phenomenon.
Or as a hint about how to go about understanding them.
He does not explore the logical implications of this belief: that holding the belief that some aspects of a phenomenon have no causal mechanism,
That's not what E-ism says at all.
and therefore could not have possibly been predicted. He makes the claim that a hypothetical Theory of Everything would not be able to explain some of the things we find interesting about some phenomenon. Does he believe that if we programmed a physics simulator with the Correct Theory of Everything, and fed it the boundary conditions of the universe, then that simulated universe would not look exactly like our universe?
That's an outcome you would get with common or garden indeterminism. Again: reductionism is NOT determinism.
That the first time DNA occurred on earth, in that simulated universe, it would not be able to create life (unlike in our universe) because we didn't include in the laws of physics a special clause saying that when you have DNA, interpret it and then tell the quarks to move differently from how they would have?
What's supposed to be absurd there? Top-down causation, or top-down causation that only applies to DNA?
I read the whole paper by Laughlin and I was unimpressed.
The arguments for emergence tend not to be good. Neither are the arguments against. A dispute about a poorly-defined distinction with poor arguments on both sides isn't a dispute where one side is "absurd".
You would need to believe that there are [statistically] significant between-group differences AND that they are [actually] significant AND that they should be relevant to policy or decision making in some way.
I'm with you on the first two, but if the trait is interesting enough to talk about (intelligence, competence, or whatever), isn't that enough for consideration in policy making? If it isn't worth considering in making policy, why are we talking about the trait?
Politics isn't a value-free reflection of nature. The disvalue of reflecting a fact politically might outweigh the value. For instance, people aren't the same in their political judgement, but everyone gets one vote.
2a. There are also higher-level properties... 2b. ...irreducible to and unpredictable from the lower-level properties and laws...
All this means is that, in addition to the laws which govern low-level interactions, there are different laws which govern high-level interactions. But they are still laws of physics; they just sound like "when these certain particles are arranged in this particular manner, make them behave like this, instead of how the low-level properties say they should behave". Such laws are still fundamental laws, on the lowest level of the universe. They are still part of the code for reality.
But you are right:
unpredictable from lower level properties
Which is what I said:
That is what it means to posit reductionism; that from an information theoretical standpoint, you can make entirely accurate predictions about a system with only knowledge about its most basic [lowest] level of perspective.
Ergo, a reductionistic universe is also deterministic from a probabilistic standpoint, i.e. the lowest level properties and laws can tell you exactly what to anticipate, and with how much subjective probability.
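A minimal sketch of that claim (my toy example, not from the thread): in Conway's Game of Life, the behaviour of a high-level pattern like the "glider" is entirely predictable from the low-level cell-update rule, with no separate high-level law consulted.

```python
# Toy sketch: the low-level B3/S23 rule alone predicts the high-level fact
# that a glider drifts one cell diagonally every 4 generations.
from collections import Counter

def step(live):
    """One update of the standard B3/S23 rule, given the set of live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):  # a glider has period 4
    state = step(state)

# The high-level regularity falls out of the microphysics:
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Whether anything in our universe escapes this kind of bottom-up predictability is, of course, exactly what is in dispute; the sketch only illustrates what the reductionist picture asserts.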
But they are still laws of physics,
Microphysical laws map microphysical states to other microphysical states. Top-down causation maps macrophysical states to microphysical states.
Such laws are still fundamental laws, on the lowest level of the universe.
In the sense that they are irreducible, yes. In the sense that they are concerned only with microphysics, no.
Ergo, a reductionistic universe is also deterministic from a probabilistic standpoint, i.e. the lowest level properties and laws can tell you exactly what to anticipate, and with how much subjective probability.
"Deterministic" typically means that an unbounded agent will achieve probabilities of 1.0.
It seems to me that emergence is the opposite of rigorous structure. Take human brain function (similar to your intelligence comment in the article). Claiming that brain function is emergent rather than rigorously ordered allows you to make a prediction: a child who has a portion of their brain removed will retain all or a large portion of the functionality of the removed portion. If brain function were rigorously ordered instead, a child with half of their brain missing would be expected to be extraordinarily impaired. A simple search of the literature should settle it one way or the other.
Thus, when one says that some property is emergent, it means that it is not limited by the macro form, but by the conditions affecting the micro components from which the property emerges. This should allow for all manner of predictive ability. Of course, there are plenty of people who latch on to the word, just as there are plenty of people who latch on to the word "evolution", and don't think with it or use it to make predictions, and in that, your point is well taken.
Sorry for commenting 5 years after the fact, but this place seems to have at least some ongoing discussion.
"Holistic" seems to label that phenomenon more clearly, for my money.
In my limited experience, the "hard problems" in philosophy are the problems which are either poorly defined, so people keep arguing about definitions without admitting it, or poorly analyzed, so people keep mixing decision theory with cognitive science, for example. While traditional philosophy is good at asking (meta-)questions and noticing broad similarities, it is nearly useless at solving them. When a philosopher tries to honestly analyze a deep question, it usually stops being philosophy and becomes logic, linguistics, decision theory, computer science, physics, or something else that qualifies as science. Hence Pearl and Kahneman and Russell, some Wittgenstein, Popper...
I raised the epistemology example for a reason. Can you give an example of someone solving that problem, or a similar one? Can you argue that it is possible to solve all such foundational problems?