poke alleges:
"Being able to create relevant hypotheses is an important skill and one a scientist spends a great deal of his or her time developing. It may not be part of the traditional description of science but that doesn't mean it's not included in the actual social institution of science that produces actual real science here in the real world; it's your description and not science that is faulty."
I know I've been calling my younger self "stupid" but that is a figure of speech; "unskillfully wielding high intelligence" would be more precise. Eliezer18 was not in the habit of making obvious mistakes—it's just that his "obvious" wasn't my "obvious".
No, I did not go through the traditional apprenticeship. But when I look back, and see what Eliezer18 did wrong, I see plenty of modern scientists making the same mistakes. I cannot detect any sign that they were better warned than myself.
Sir Roger Penrose—a world-class physicist—still thinks that consciousness is caused by quantum gravity. I expect that no one ever warned him against mysterious answers to mysterious questions—only told him his hypotheses needed to be falsifiable and have empirical consequences. Just like Eliezer18.
"Consciousness is caused by quantum gravity" has testable implications: It implies that you should be able to look at neurons and discover a coherent quantum superposition (whose collapse?) contributes to information-processing, and that you won't ever be able to reproduce a neuron's input-output behavior using a computable microanatomical simulation...
...but even after you say "Consciousness is caused by quantum gravity", you don't anticipate anything about how your brain thinks "I think therefore I am!" or the mysterious redness of red, that you did not anticipate before, even though you feel like you know a cause of it. This is a tremendous danger sign, I now realize, but it's not the danger sign that I was warned against, and I doubt that Penrose was ever told of it by his thesis advisor. For that matter, I doubt that Niels Bohr was ever warned against it when it came time to formulate the Copenhagen Interpretation.
As far as I can tell, the reason Eliezer18 and Sir Roger Penrose and Niels Bohr were not warned, is that no standard warning exists.
I did not generalize the concept of "mysterious answers to mysterious questions", in that many words, until I was writing a Bayesian analysis of what distinguishes technical, nontechnical, and semitechnical scientific explanations. Now the final output of that analysis can be phrased nontechnically in terms of four danger signs:
- First, the explanation acts as a curiosity-stopper rather than an anticipation-controller.
- Second, the hypothesis has no moving parts—the secret sauce is not a specific complex mechanism, but a blankly solid substance or force.
- Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena.
- Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of wonderful inexplicability that it had at the start.
In principle, all this could have been said in the immediate aftermath of vitalism, just as elementary probability theory could have been invented by Archimedes, or natural selection theorized by the ancient Greeks. But in fact no one ever warned me against any of these four dangers, in those terms; the closest was the warning that hypotheses should have testable consequences. And I didn't conceptualize the warning signs explicitly until I was trying to think of the whole affair in terms of probability distributions; some degree of overkill was required.
I simply have no reason to believe that these warnings are passed down in scientific apprenticeships—certainly not to a majority of scientists. Among other things, it is advice for handling situations of confusion and despair, scientific chaos. When would the average scientist or average mentor have an opportunity to use that kind of technique?
We just got through discussing the single-world fiasco in physics. Clearly, no one told them about the formal definition of Occam's Razor, in whispered apprenticeship or otherwise.
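For concreteness: the formal Occam's Razor alluded to here is usually rendered as a complexity-weighted prior. A minimal sketch, assuming the Solomonoff-style formulation is the one meant:

```latex
% One standard formalization of Occam's Razor (a Solomonoff-style
% prior): the prior probability of a hypothesis H falls off
% exponentially with L(H), the length in bits of the shortest
% program that specifies H.
\[
  P(H) \;\propto\; 2^{-L(H)}
\]
```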
There is a known effect where great scientists have multiple great students. This may well be due to the mentors passing on skills that they can't describe. But I don't think that counts as part of standard science. And if the great mentors haven't been able to put their guidance into words and publish it generally, that's not a good sign for how well these things are understood.
Reasoning in the absence of definite evidence without going instantaneously completely wrong is really really hard. When you're learning in school, you can miss one point, and then be taught fifty other points that happen to be correct. When you're reasoning out new knowledge in the absence of crushingly overwhelming guidance, you can miss one point and wake up in Outer Mongolia fifty steps later.
I am pretty sure that scientists who switch off their brains and relax with some comfortable nonsense as soon as they leave their own specialties, do not realize that minds are engines and that there is a causal story behind every trustworthy belief. Nor, I suspect, were they ever told that there is an exact rational probability given a state of evidence, which has no room for whims; even if you can't calculate the answer, and even if you don't hear any authoritative command for what to believe.
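That claim has a precise form. A minimal statement of the Bayesian rule that fixes the rational probability given the evidence (standard notation, not specific to this essay):

```latex
% Bayes' theorem: the posterior probability of hypothesis H given
% evidence E is fixed exactly by the prior and the likelihood;
% there is no free parameter left over for whims.
\[
  P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}
\]
```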
I doubt that scientists who are asked to pontificate on the future by the media, who sketch amazingly detailed pictures of Life in 2050, were ever taught about the conjunction fallacy. Or how the representativeness heuristic can make more detailed stories seem more plausible, even as each extra detail drags down the probability. The notion of every added detail needing its own support—of not being able to make up big detailed stories that sound just like the detailed stories you were taught in science or history class—is absolutely vital to precise thinking in the absence of definite evidence. But how would a notion like that get into the standard scientific apprenticeship? The cognitive bias was uncovered only a few decades ago, and not popularized until very recently.
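The underlying arithmetic is worth making explicit. A short worked illustration, with made-up numbers:

```latex
% The conjunction rule: adding a detail B can never raise the
% probability of a story A.
\[
  P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A)
\]
% Illustrative numbers (not from the essay): a story with five
% details, each 80% probable given the previous ones, has overall
% probability at most
\[
  0.8^{5} \approx 0.33
\]
```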
Then there are affective death spirals around notions like "emergence" or "complexity", which are sufficiently vaguely defined that you can say lots of nice things about them. There are whole academic subfields built around the kind of mistakes that Eliezer18 used to make! (Though I never fell for the "emergence" thing.)
I sometimes say that the goal of science is to amass such an enormous mountain of evidence that not even scientists can ignore it, and that this is the distinguishing feature of a scientist: a non-scientist will ignore it anyway.
If there can exist some amount of evidence so crushing that you finally despair, stop making excuses and just give up—drop the old theory and never mention it again—then this is all it takes to let the ratchet of Science turn forward over time, and raise up a technological civilization. Contrast to religion.
Books by Carl Sagan and Martin Gardner and the other veins of Traditional Rationality are meant to accomplish this difference: to transform someone from a non-scientist into a potential scientist, and guard them from experimentally disproven madness.
What further training does a professional scientist get? Some frequentist stats classes on how to calculate statistical significance. Training in standard techniques that will let them churn out papers within a solidly established paradigm.
If Science demanded more than this from the average scientist, I don't think it would be possible for Science to get done. We have problems enough from people who sneak in without the drop-dead-basic qualifications.
Nick Tarleton summarized the resulting problem very well—better than I did, in fact: If you come up with a bizarre-seeming hypothesis not yet ruled out by the evidence, and try to test it experimentally, Science doesn't call you a bad person. Science doesn't trust its elders to decide which hypotheses "aren't worth testing". But this is a carefully lax social standard, and if you try to translate it into a standard of individual epistemic rationality, it lets you believe far too much. Dropping back into the analogy with pragmatic-distrust-based-libertarianism, it's the difference between "Cigarettes shouldn't be illegal" and "Go smoke a Marlboro".
Do you remember ever being warned against that mistake, in so many words? Then why wouldn't people make exactly that error? How many people will spontaneously go an extra mile and be even stricter with themselves? Some, but not many.
Many scientists will believe all manner of ridiculous things outside the laboratory, so long as they can convince themselves the beliefs haven't been definitely disproven, or so long as they manage not to ask. Is there some standard lecture that grad students get, such that people who see this folly can ask, "Were they absent from class that day?" No, as far as I can tell.
Maybe if you're super lucky and get a famous mentor, they'll tell you rare personal secrets like "Ask yourself which are the important problems in your field, and then work on one of those, instead of falling into something easy and trivial" or "Be more careful than the journal editors demand; look for new ways to guard your expectations from influencing the experiment, even if it's not standard."
But I really don't think there's a huge secret standard scientific tradition of precision-grade rational reasoning on sparse evidence. Half of all the scientists out there still believe they believe in God! The more difficult skills are not standard!
Caledonian asks:
"What evidence do you offer us that mathematical descriptions cannot produce the properties of which you speak?"
First of all, let's be clear regarding what we have to work with. Things are complicated a little by the variety of specific theories and formalisms used in physics, but let's take multi-particle quantum mechanics in the configuration basis as illustrative. The configurations are all of the form 'A particle of species a1 at location x1, and a particle of species a2 at location x2, ...', and so forth. The quantum states consist of associations of complex numbers with such configurations. There is the basic dynamical fact that a quantum state ψ evolves into another state ψ + dψ according to the Schrödinger equation, and (if you're not taking the many-worlds path) Born's postulate that the probability of there actually being particles a1, a2, ... at locations x1, x2, ... is |ψ|^2.
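Spelled out in standard notation (a conventional rendering of the two postulates just stated in words):

```latex
% Dynamics: the state \psi over configurations evolves by the
% Schrodinger equation, with Hamiltonian operator \hat{H}.
\[
  i\hbar\,\frac{\partial \psi}{\partial t} \;=\; \hat{H}\,\psi
\]
% Born's postulate: the probability of there actually being the
% particles at locations x_1, x_2, \ldots is the squared amplitude.
\[
  P(x_1, x_2, \ldots) \;=\; \bigl|\psi(x_1, x_2, \ldots)\bigr|^{2}
\]
```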
Then there are various entities and facts that can be obtained from these through abstraction, deduction, and comparison, e.g. 'the number of particles in configuration c' or 'the expected number of particles in quantum state ψ, as calculated via the Born probabilities' or 'the Hilbert-space inner product of states ψ1 and ψ2'. We could, if necessary, construct a formal combinatorial grammar generating all and only those entities and facts implied by the theory-defining postulates in my first paragraph. It would amount to saying: the entities and relationships directly postulated by the theory exist, and so do those which can be logically or mathematically inferred from those postulates. But speaking informally, all we have to work with are featureless spatial configurations of point particles, superpositions thereof, dynamics of superpositions, and empirical probabilities derived from superpositions.
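As concrete instances of such derived entities, in conventional notation and assuming a discrete configuration index c for simplicity:

```latex
% Hilbert-space inner product of states \psi_1 and \psi_2, summing
% over the configuration index c:
\[
  \langle \psi_1, \psi_2 \rangle \;=\; \sum_c \psi_1^{*}(c)\,\psi_2(c)
\]
% Expected number of particles in state \psi via the Born
% probabilities, where N(c) counts the particles in configuration c:
\[
  \langle N \rangle \;=\; \sum_c \bigl|\psi(c)\bigr|^{2}\,N(c)
\]
```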
And what sort of entity or property are we trying to extract from the theory, if we are trying to derive consciousness from physics? It's tiresome to resort repeatedly to the same example, but nonetheless, let's consider color: the variety of hues and shades which we lump together into the natural language categories of red, blue, and so forth. (I put it that way because I do not want to turn this into a discussion of whether those natural language categories are "natural kinds". Focus instead on the numerous instances of color which populate visual experience and which unquestionably exist, regardless of how they get categorized.) On one side we have "quantity and causality", as I put it above (and I'll even throw in spatial geometry and dispositional behavior); on the other side, the colors. How might we go about making the latter out of the former?
There are some things we can do. We can quantify certain things about subjective color; and we can describe certain physical realities which are somehow correlated with color. Thus 450-nm wavelength light "is" a type of blue light. But I submit that it makes no sense to say that when you see a particular shade of blue, you are "seeing a length"; or that blue itself "is a length". That might do as a poetic description of the physics behind the perception, but as an ontological statement, it simply substitutes the correlated geometric property for the sensory property we are trying to explain.
Another approach is the cognitive one: things are blue because your nervous system classified them that way. But although the correlated purely-physical property is a lot more complicated here, it's the same story. Put informally, to use this as an explanation of blueness is to say that our perceptions turn blue because we call them blue or think they are blue.
I think Dennett would understand my point, but as usual he bites the bullet and denies that color is there. He calls it "figment" (figmentary pigment) because, according to physics, there is nothing actually blue, inside or outside one's head. But blueness is there; therefore that ontology is wrong.
"Emergence" is a popular dodge: colors and other subjective properties, though not being identical with any elementary physical property, somehow "emerge" when a brain enters the picture. Apart from being vague, that's just dualism: if the emergent properties are not identical with one of the purely physical properties in that combinatorial grammar I mentioned, then it is different from all of them, no matter how correlated it is.
As I said, my answer is to turn it around, and to say that the existence of blueness (etc.) is axiomatic, and so it must be one of the things that a true and complete theory of reality would be about. It is as if one were to look at electromagnetism and say, "My God, those things we thought were lengths are actually colors!" rather than vice versa. But it's also my thesis that when you look at doing this in detail, some of the obvious candidates for this ontological inversion, such as "computational states of neurons", present too many specific difficulties to work (in that case, because a computational state of a meso-scale system like a neuron is a vague property, microphysically speaking). Thus I find myself pursuing quantum ontological exotica.