I just wanted to say that this is the best damn blog I've read. The high level of regular, insightful, quality updates is stunning. Reading this blog, I feel like I've not just accumulated knowledge, but processes I can apply to continue to refine my understanding of how I think and how I accumulate further knowledge.
I am honestly surprised, with all the work the contributors do in other realms, that you are able to maintain this high level of quality output on a blog.
Recently I have been continuing my self-education in ontology and epistemology. Some sources are more rigorous than others. Reading Rand, for example, shows an author who seems to use phlogiston-like mechanics to describe her ethical solutions to moral problems: explanations that rely on convincing but unbounded turns of phrase instead of a meaningful process of explanation. It can be very challenging to read and process new data while maintaining a lack of bias (or at least an awareness of bias that can be accounted for and challenged). It requires a very high level of active, conscious information processing: rereading, working exercises, and thinking through what a person is saying and why they are saying it. This blog has provided me lots of new tools to improve my methods of critical thinking.
Rock on.
"I feel like I've not just accumulated knowledge, but processes I can apply to continue to refine my understanding of how I think and how I accumulate further knowledge."
You've warmed my heart for the day.
Great post, and I agree with Brandon. Eliezer, I recommend you admin a message board (I've been recommending an Overcoming Bias message board for a while); I think you in particular would thrive in that environment, given your high posting volume and multiple threads of daily interest. I think you're a bit constrained intellectually, pedagogically, and speculatively by this format.
I think I've said this before, but there is some defense that can be made for the phlogiston theorists. Phlogiston is like an absence of oxygen in modern combustion theory. The falsifiable prediction that caused phlogiston to be abandoned was that phlogiston would have mass, whereas an absence of oxygen (what it was in reality) does not.
Could evolution be a fake explanation, in that it doesn't predict anything? I'm no creationist, but what you're explaining in regards to phlogiston seems to have a lot of similarity to evolution. Seems to me like no matter what the data is, you can put the tag of evolution on it. Now, I'm no expert on evolution, so don't flame me. Just a question on how evolution is different.
Since the Theory of Evolution is in the business of explaining the past and present rather than predicting the future, it certainly runs the risk of deluding itself. But running a risk is not the same thing as failing. And whenever individual biologists succumb to hindsight bias, there are other biologists ready to point out their mistakes.
Evolutionary biology is a remarkably introspective discipline with plenty of remora-like philosopher-commensals waiting to devour any sloppy thinking that gets generated. See, for example, the Wikipedia article on "The Spandrels of San Marco" or on G. C. Williams's book "Adaptation and Natural Selection". Or, check the high level of scorn in which practitioners of unfalsifiable "Evolutionary Psychology" are held by other evolutionary scientists.
Evolutionary biology is (in part) a historical science in which hypotheses about the past are used to explain features of the present. So too are geology and much of astrophysics. This class of scientific disciplines certainly seems to run afoul of the deprecation which Eliezer dispenses in the final paragraph of his posting. But, no problem. Other philosophers are...
I think the key is that theories don't predict the future at all.
They predict observations.
Because of my model, I expect to see X under the given conditions. If I test for X, and I do not find it, this is evidence that my model is wrong. If I test for X and find it, this is evidence that my model is correct.
This says nothing about the future or past, only what you have and have not observed yet, and what you expect to observe next (which can be about the future or past, it doesn't matter).
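To make this concrete, here is a minimal sketch of that update in Python (all the probabilities are made-up numbers, purely for illustration):

```python
# Toy Bayesian update: observing the predicted X shifts belief toward
# the model; failing to observe X shifts belief away from it.
# All probabilities here are illustrative assumptions.

prior = 0.5            # P(model) before testing
p_x_if_model = 0.9     # the model says: expect X under these conditions
p_x_if_not = 0.3       # X can still occur even if the model is wrong

def update(prior, like_true, like_false):
    """Posterior P(model | evidence) by Bayes' rule."""
    joint_true = prior * like_true
    joint_false = (1 - prior) * like_false
    return joint_true / (joint_true + joint_false)

print(update(prior, p_x_if_model, p_x_if_not))          # 0.75: found X
print(update(prior, 1 - p_x_if_model, 1 - p_x_if_not))  # 0.125: did not find X
```

Whether the observation lies in the future or the past never enters the calculation; only the likelihoods do.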
What TGGP said. Also, would an AI really be better at determining the falsifiability of a theory? It seems to me that, given a particular theory, an algorithm for determining the set of testable predictions thereof isn't going to be easy to optimize. How does the AI prove that one algorithm is better than another? Test it against a set of random theories?
Phlogiston is not necessarily a bad thing. Concepts are utilized in reasoning to reduce and structure search space. A concept can be placed in correspondence with a multitude of contexts, selecting a branch with the required properties, which correlate with its usage. In this case, the active 'phlogiston' concept correlates with the presence of fire. Unifying all processes that exhibit fire under this tag can help in the development of induction contexts. The process of this refinement includes examination of protocols that include the 'phlogiston' concept. It's just not a causal model that can rigorously predict nontrivial results through deduction.
Eliezer, we need more posts from you elucidating the importance of optimizing science, etc., as opposed to the current, functional elements of it. In my opinion people are wasting significant comment time responding to each of your posts by saying "hey, such-and-such that you criticized actually has some functionality".
An analogous principle operates in rigorous probabilistic reasoning about causality. ... We count each piece of evidence exactly once; no update message ever "bounces" back and forth. The exact algorithm may be found in Judea Pearl's classic "Probabilistic Reasoning ...
Actually, Pearl's algorithm only works for a tree of cause/effects. For non-trees it is provably hard, and it remains an open question how best to update. I actually need a good non-tree method without predictable errors for combinatorial market scoring rules.
In response to Hopefully Anonymous: I think there is a real difference between unfalsifiable pseudosciences and genuine scientific theories (both correct and incorrect). Coming up with methods to distinguish the two will help us in doing science. It is easy in hindsight to say how obviously wrong something is; it is another thing to understand why it is wrong, and whether its wrongness could have been detected at the time with the information then available. That understanding could assist us later, when we do not have all the information we would wish for.
Robin: Yes indeed. If you can find a cutset that reduces the graph to a tree, or cluster a manageable set of variables, all is well and good. I suspect this is what happens with most real-life causal models.
But in general, finding a good non-tree method is not just NP-hard but AI-complete. It is the problem of modeling reality itself.
Robin Hanson said: "Actually, Pearl's algorithm only works for a tree of cause/effects. For non-trees it is provably hard, and it remains an open question how best to update. I actually need a good non-tree method without predictable errors for combinatorial market scoring rules."
To be even more precise, Pearl's belief propagation algorithm works for so-called 'polytree' graphs, which are directed acyclic graphs without undirected cycles (i.e., cycles that show up if you drop directionality). The state of the art for exact inference in bay...
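For anyone who wants to check the polytree condition on a concrete graph, here is a small sketch (it assumes the networkx library; the example graphs are mine, not from the comment above):

```python
# A polytree: a DAG whose underlying undirected graph has no cycles.
import networkx as nx

def is_polytree(g: nx.DiGraph) -> bool:
    return nx.is_directed_acyclic_graph(g) and nx.is_forest(g.to_undirected())

# The rain -> wet -> slippery chain is a polytree: belief propagation is exact.
chain = nx.DiGraph([("rain", "wet"), ("wet", "slippery")])
print(is_polytree(chain))  # True

# A "diamond" is still a DAG, but dropping directionality exposes a cycle,
# so Pearl's original algorithm no longer applies directly.
diamond = nx.DiGraph([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")])
print(is_polytree(diamond))  # False
```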
Very interesting. In computer networking, we deal with this same information problem, and the solution (not sending the information from the forward node back to the forward node) is referred to as Split Horizon.
Suppose that Node A can reach Network 1 directly - in one hop. So he tells his neighbor, Node B, "I can get to Network 1 in one hop!". Node B records "okay, I can get there in two hops then." The worry is that when Node A loses his connection to Network 1, he asks Node B how to get there, and Node B says "don't worry, I...
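A toy sketch of the split-horizon rule itself (the route-table format and node names here are hypothetical, just to illustrate the idea, not any real router's implementation):

```python
# Split horizon: never advertise a route back to the neighbor
# you learned it from. Routes map destination -> (cost, next_hop).

def advertise(routes, to_neighbor):
    """The routes to share with to_neighbor, omitting any learned from it."""
    return {dest: cost for dest, (cost, via) in routes.items()
            if via != to_neighbor}

# Node B learned its route to Network 1 via Node A, at a cost of two hops.
routes_at_b = {"network1": (2, "A")}

print(advertise(routes_at_b, to_neighbor="A"))  # {}: nothing echoed back to A
print(advertise(routes_at_b, to_neighbor="C"))  # {'network1': 2}
```

Just as in Pearl's algorithm, the fix is to never send information back toward the node it came from.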
Thanks for the link, Davis, but it does not address the issue that is brought up in the original post. The examples given in your link were "retrodictions." To quote the original post...
“Thanks to hindsight bias, it's also not enough to check how well your theory "predicts" facts you already know. You've got to predict for tomorrow, not yesterday. It's the only way a messy human mind can be guaranteed of sending a pure forward message.”
I'm not arguing that evolution is pseudoscience. I'm just saying that evolution as an explanation could make us think we understand more than we really do. Again, I am no creationist; the data clearly does not fit the creationist explanation.
@C of A:
Prediction doesn't have to mean literally predicting future events; it can mean predicting what more we will discover about the past.
E by NS holds that there is one tree of life (at least for complex organisms), just like a family tree. That is a prediction. It means that we won't find a human in the same fossil stratum and dating to the same time period as a fishlike creature that's supposed to be our great-to-the-nth-power grammy. So that's a prediction about our future discoveries, one that has been borne out. That's one example from a non-expert.
Is phlogiston theory so much worse than dark matter? Both are placeholders for our ignorance, but neither is completely mysterious, nor does either prevent further questions or investigation into its true nature. If people had an excellent phenomenological understanding of oxygen, but called it phlogiston and didn't know about atoms or molecules, I wouldn't discount that. Similarly, it can be very useful to use partial, vague, and not-completely-satisfactory models, like dark matter.
I'm not sure that this is fair to phlogiston or the scientists who worked with it. In fact, phlogiston theory made predictions. For example, substances in general were supposed to lose mass when combusting. The fact that metals gain mass when rusting was a data point that counted actively against what phlogiston theory predicted. The theory also helped scientists see general patterns. So even if the term had been a placeholder, it allowed scientists to see that combustion, rust, and metabolism were all ultimately linked processes. The very facts that these processes shared patterns, and that phlogiston predicted changes in mass in the wrong direction (and that it failed to predict the behavior of air rich in oxygen and carbon dioxide, although those terms were not used until later), helped lead to the rejection of phlogiston theory. There are classical examples of theories with zero or close to zero predictive value, but phlogiston is not one of them.
Edit: said "lose mass" where I meant "gain mass." Metals gain mass when rusting; they don't lose it. Phlogiston theory said they should lose mass, but they actually gain it. Also fixed some grammar issues.
So... how precisely would I go about doing this? I mean, let's say I really thought that phlogiston was the reason fire is hot and bright when it burns, something that today we know to be untrue. But if I really thought it was true, and I decided to test my hypothesis, how would I go about proving it false?
What I think the point is about is this: if I already believe that phlogiston is the reason fire is hot and bright, and I observe fire being both hot and bright, then I think this proves that phlogiston is the reason fire is hot and bright. When actua...
I really enjoyed reading this post, especially its connection with Pearl's belief propagation algorithm in Bayesian networks. Thank you, Eliezer!
This is a great layperson explanation of the belief propagation algorithm.
However, the phlogiston example doesn't show how this algorithm is improperly implemented in humans. To show this, you need an example of incorrect beliefs drawn from a correct model, i.e., good input to the algorithm resulting in bad output. The phlogiston model was clearly incorrect. As other commenters have pointed out, contemporary scientists were painfully aware of this and eventually abandoned the model. Bad output from bad input doesn't demonstrate a bug in implementation...
"Sadly, we humans can't rewrite our own code, the way a properly designed AI could."
Sure we can!
In fact, we can't stop rewriting our own code.
When you use the word "code" to describe humans, you take a certain degree of semantic liberty. So we first need to understand what is meant by "code" in this context.
In artificial computing machines, code is nothing more than a state of a chunk of memory hardware that causes the computation hardware to operate in a certain way (for a certain input). Only a tiny subset of the possible states of a...
Another interesting (and sad) example: during the conversation between Deepak Chopra and Richard Dawkins here, Deepak Chopra used the words "quantum leap" as an "explanation" for the origin of language, the origin of life, jumps in the fossil record, etc.
Edit: Finally he claimed it was a metaphor.
There's a fair amount of hindsight bias going on with this critique of phlogiston. Phlogiston sounds plausible on the surface. It's a reasonable conjecture to make given the knowledge at the time and certainly worth investigation. Is it really any less absurd to postulate some mystery substance in the air that essentially plays the same role? If they'd chosen the latter, we'd be lauding them for their foresight.
It's perfectly feasible to draw up tables of the presumed phlogiston content of various materials and use this to deduce the mass of the residue af...
The phlogiston theory gets a bad rap. I 100% agree with the idea that theories need to make constraints on our anticipations, but I think you're taking for granted all the constraints phlogiston makes.
The phlogiston theory is basically a baby step towards empiricism and materialism. Is it possible that our modern perspective causes us to take these things for granted, to the point that the steps phlogiston adds aren't noticed? In another essay you talk about walking through the history of science, trying to imagine the perspective of someone taken in by a new theory, and I found that practice particularly instructive here. I came up with a number of ways in which this theory DOES constrain anticipation. Seeing these predictions may make it easier to raise new predictions for existing theories, as well as suggest that theories don't need to be rigorous and mathematical in order to constrain the space of anticipations.
The phlogiston theory says "there is no magic here, fire is caused by some physical property of the substances involved in it". By modern standards this does nothing to constrain anticipation further, but from a space of total ig...
Phlogiston was the 18th century's answer to the Elemental Fire of the Greek alchemists. Ignite wood, and let it burn. What is the orangey-bright "fire" stuff? Why does the wood transform into ash? To both questions, the 18th-century chemists answered, "phlogiston".
...and that was it, you see, that was their answer: "Phlogiston."
Phlogiston escaped from burning substances as visible fire. As the phlogiston escaped, the burning substances lost phlogiston and so became ash, the "true material". Flames in enclosed containers went out because the air became saturated with phlogiston, and so could not hold any more. Charcoal left little residue upon burning because it was nearly pure phlogiston.
Of course, one didn't use phlogiston theory to predict the outcome of a chemical transformation. You looked at the result first, then you used phlogiston theory to explain it. It's not that phlogiston theorists predicted a flame would extinguish in a closed container; rather they lit a flame in a container, watched it go out, and then said, "The air must have become saturated with phlogiston." You couldn't even use phlogiston theory to say what you ought not to see; it could explain everything.
This was an earlier age of science. For a long time, no one realized there was a problem. Fake explanations don't feel fake. That's what makes them dangerous.
Modern research suggests that humans think about cause and effect using something like the directed acyclic graphs (DAGs) of Bayes nets. Because it rained, the sidewalk is wet; because the sidewalk is wet, it is slippery:
[Rain] -> [Sidewalk wet] -> [Sidewalk slippery]
From this we can infer—or, in a Bayes net, rigorously calculate in probabilities—that when the sidewalk is slippery, it probably rained; but if we already know that the sidewalk is wet, learning that the sidewalk is slippery tells us nothing more about whether it rained.
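To see both halves of that numerically, here is a minimal sketch with made-up conditional probabilities, using brute-force enumeration of the joint distribution in place of a real Bayes net engine:

```python
# Chain: [rain] -> [sidewalk wet] -> [sidewalk slippery].
# All probabilities are illustrative assumptions.
from itertools import product

P_RAIN = 0.3
P_WET = {True: 0.9, False: 0.2}    # P(wet | rain), P(wet | no rain)
P_SLIP = {True: 0.8, False: 0.1}   # P(slippery | wet), P(slippery | dry)

def joint(rain, wet, slip):
    """Probability of one complete world under the chain model."""
    p = P_RAIN if rain else 1 - P_RAIN
    p *= P_WET[rain] if wet else 1 - P_WET[rain]
    p *= P_SLIP[wet] if slip else 1 - P_SLIP[wet]
    return p

def p_rain_given(**evidence):
    """P(rain | evidence), summing over all worlds consistent with it."""
    total = rain_total = 0.0
    for r, w, s in product([True, False], repeat=3):
        world = {"rain": r, "wet": w, "slippery": s}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        total += joint(r, w, s)
        if r:
            rain_total += joint(r, w, s)
    return rain_total / total

print(p_rain_given())                         # 0.300: the prior
print(p_rain_given(slippery=True))            # ~0.566: slipperiness suggests rain
print(p_rain_given(wet=True))                 # ~0.659
print(p_rain_given(wet=True, slippery=True))  # ~0.659: no further change
```

Once we condition on the sidewalk being wet, learning that it is also slippery leaves the probability of rain exactly where it was.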
Why is fire hot and bright when it burns?
["Phlogiston"] -> [Fire hot and bright]
It feels like an explanation. It's represented using the same cognitive data format. But the human mind does not automatically detect when a cause has an unconstraining arrow to its effect. Worse, thanks to hindsight bias, it may feel like the cause constrains the effect, when it was merely fitted to the effect.
Interestingly, our modern understanding of probabilistic reasoning about causality can describe precisely what the phlogiston theorists were doing wrong. One of the primary inspirations for Bayesian networks was noticing the problem of double-counting evidence if inference resonates between an effect and a cause. For example, let's say that I get a bit of unreliable information that the sidewalk is wet. This should make me think it's more likely to be raining. But, if it's more likely to be raining, doesn't that make it more likely that the sidewalk is wet? And wouldn't that make it more likely that the sidewalk is slippery? But if the sidewalk is slippery, it's probably wet; and then I should again raise my probability that it's raining...
Judea Pearl uses the metaphor of an algorithm for counting soldiers in a line. Suppose you're in the line, and you see two soldiers next to you, one in front and one in back. That's three soldiers. So you ask the soldier next to you, "How many soldiers do you see?" He looks around and says, "Three". So that's a total of six soldiers. This, obviously, is not how to do it.
A smarter way is to ask the soldier in front of you, "How many soldiers forward of you?" and the soldier in back, "How many soldiers backward of you?" The question "How many soldiers forward?" can be passed on as a message without confusion. If I'm at the front of the line, I pass the message "1 soldier forward", for myself. The person directly in back of me gets the message "1 soldier forward", and passes on the message "2 soldiers forward" to the soldier behind him. At the same time, each soldier is also getting the message "N soldiers backward" from the soldier behind them, and passing it on as "N+1 soldiers backward" to the soldier in front of them. How many soldiers in total? Add the two numbers you receive, plus one for yourself: that is the total number of soldiers in line.
The key idea is that every soldier must separately track the two messages, the forward-message and backward-message, and add them together only at the end. You never add any soldiers from the backward-message you receive to the forward-message you pass back. Indeed, the total number of soldiers is never passed as a message—no one ever says it aloud.
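Here is the scheme as a short sketch in code (illustrative only, not Pearl's full algorithm), with each position keeping its two books strictly separate:

```python
def count_in_line(n):
    """Every soldier learns the total by passing only one-way messages."""
    # fwd[i]: message arriving from the front = soldiers strictly ahead of i
    fwd = [0] * n
    for i in range(1, n):
        fwd[i] = fwd[i - 1] + 1
    # bwd[i]: message arriving from the back = soldiers strictly behind i
    bwd = [0] * n
    for i in range(n - 2, -1, -1):
        bwd[i] = bwd[i + 1] + 1
    # Add the two messages plus one for yourself, only at the very end.
    # A backward count is never folded into a forward message, so no
    # soldier is ever counted twice.
    return [fwd[i] + bwd[i] + 1 for i in range(n)]

print(count_in_line(5))  # [5, 5, 5, 5, 5]: everyone agrees on the total
```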
An analogous principle operates in rigorous probabilistic reasoning about causality. If you learn something about whether it's raining, from some source other than observing the sidewalk to be wet, this will send a forward-message from [rain] to [sidewalk wet] and raise our expectation of the sidewalk being wet. If you observe the sidewalk to be wet, this sends a backward-message to our belief that it is raining, and this message propagates from [rain] to all neighboring nodes except the [sidewalk wet] node. We count each piece of evidence exactly once; no update message ever "bounces" back and forth. The exact algorithm may be found in Judea Pearl's classic "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference".
So what went wrong in phlogiston theory? When we observe that fire is hot, the [fire] node can send backward evidence to the ["phlogiston"] node, leading us to update our beliefs about phlogiston. But if so, we can't count this as a successful forward-prediction of phlogiston theory. The message should go in only one direction, and not bounce back.
Alas, human beings do not use a rigorous algorithm for updating belief networks. We learn about parent nodes from observing children, and predict child nodes from beliefs about parents. But we don't keep rigorously separate books for the backward-message and forward-message. We just remember that phlogiston is hot, which causes fire to be hot. So it seems like phlogiston theory predicts the hotness of fire. Or, worse, it just feels like phlogiston makes the fire hot.
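Here is a toy illustration (with made-up numbers) of what that failure of bookkeeping costs: if the likelihood ratio from a single observation is allowed to "bounce" and get applied on every pass, one wet sidewalk ends up all but proving rain.

```python
# Double-counting demo: one piece of evidence, applied repeatedly.
prior_odds = 0.3 / 0.7    # P(rain) = 0.3
likelihood_ratio = 4.0    # P(wet | rain) / P(wet | no rain), illustrative

odds = prior_odds
for bounce in range(1, 6):
    odds *= likelihood_ratio     # correct only on the first pass
    p = odds / (1 + odds)
    print(f"after update {bounce}: P(rain) = {p:.3f}")

# after update 1: P(rain) = 0.632   <- the right answer; stop here
# after update 2: P(rain) = 0.873   <- the same evidence, counted twice
# ...
# after update 5: P(rain) = 0.998   <- one damp sidewalk "proves" rain
```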
Until you notice that no advance predictions are being made, the non-constraining causal node is not labeled "fake". It's represented the same way as any other node in your belief network. It feels like a fact, like all the other facts you know: Phlogiston makes the fire hot.
A properly designed AI would notice the problem instantly. This wouldn't even require special-purpose code, just correct bookkeeping of the belief network. (Sadly, we humans can't rewrite our own code, the way a properly designed AI could.)
Speaking of "hindsight bias" is just the nontechnical way of saying that humans do not rigorously separate forward and backward messages, allowing forward messages to be contaminated by backward ones.
Those who long ago went down the path of phlogiston were not trying to be fools. No scientist deliberately wants to get stuck in a blind alley. Are there any fake explanations in your mind? If there are, I guarantee they're not labeled "fake explanation", so polling your thoughts for the "fake" keyword will not turn them up.
Thanks to hindsight bias, it's also not enough to check how well your theory "predicts" facts you already know. You've got to predict for tomorrow, not yesterday. It's the only way a messy human mind can be guaranteed of sending a pure forward message.