A putative new idea for AI control; index here.
- Nasty extrapolation of concepts (through badly implemented value learning or badly coded base concepts).
- AIs making themselves into nasty expected utility maximisers.
- AIs hacking themselves to maximum reward.
- AIs creating successor agents that differ from them in dangerous ways.
- People hacking themselves to maximum apparent happiness.
- Problems with Coherent Extrapolated Volition.
- Problems with unrestricted search.
- Some issues I have with some of Paul Christiano's designs.
- Reflective equilibrium itself.
Speaking very broadly, there are two features they all share:
- The convergence criteria are self-referential.
- Errors in the setup are likely to cause false convergence.
What do I mean by that? Well, imagine you're trying to reach reflective equilibrium in your morality. You do this by using good meta-ethical rules, zooming up and down at various moral levels, making decisions on how to resolve inconsistencies, etc... But how do you know when to stop? Well, you stop when your morality is perfectly self-consistent, when you no longer have any urge to change your moral or meta-moral setup. In other words, the stopping point (and the convergence to the stopping point) is entirely self-referentially defined: the morality judges itself. It does not include any other moral considerations. You input your initial moral intuitions and values, and you hope this will cause the end result to be "nice", but the definition of the end result does not include your initial moral intuitions (note that some moral realists could see this process independence as a positive - except for the fact that these processes have many convergent states, not just one or a small grouping).
So when the process goes nasty, you're pretty sure to have achieved something self-referentially stable, but not nice. Similarly, a nasty CEV will be coherent and have no desire to further extrapolate... but that's all we know about it.
The second feature is that any process has errors - computing errors, conceptual errors, errors due to the weakness of human brains, etc... If you visualise this as noise, you can see that noise in a convergent process is more likely to cause premature convergence, because if the process ever reaches a stable self-referential state, it will stay there (and if the process is a long one, then early noise will cause great divergence at the end). For instance, imagine you have to reconcile your belief in preserving human cultures with your beliefs in human individual freedom. A complex balancing act. But if, at any point along the way, you simply jettison one of the two values completely, things become much easier - and once jettisoned, the missing value is unlikely to ever come back.
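This lock-in dynamic can be sketched in a toy simulation (my own, with entirely hypothetical numbers): two value "weights" are pulled toward a balanced compromise, noise perturbs them, and zero is an absorbing state — once a value is jettisoned, it never comes back.

```python
import random

def reconcile(noise, steps=200, seed=0):
    """Toy model: two value weights are nudged toward a compromise,
    but noise can push one to zero -- an absorbing state."""
    rng = random.Random(seed)
    w = [1.0, 1.0]
    for _ in range(steps):
        for i in (0, 1):
            if w[i] == 0.0:
                continue  # jettisoned values stay jettisoned
            # pull toward the other weight (consistency pressure)...
            w[i] += 0.1 * (w[1 - i] - w[i])
            # ...plus noise from errors in the extrapolation process
            w[i] += rng.gauss(0.0, noise)
            if w[i] <= 0.0:
                w[i] = 0.0
    return w

def jettison_rate(noise, trials=500):
    """Fraction of runs that end with a value lost entirely."""
    lost = sum(0.0 in reconcile(noise, seed=t) for t in range(trials))
    return lost / trials
```

With zero noise the process converges to a genuine compromise every time; with enough noise, a nontrivial fraction of runs converge "prematurely" by dropping one value altogether.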
Or, more simply, the system could get hacked. When exploring a potential future world, you could become so enamoured of it that you overwrite any objections you had. It seems very easy for humans to fall into these traps - and again, once you lose something of value in your system, you don't tend to get it back.
And again, very broadly speaking, there are several classes of solutions to deal with these problems:
- Reduce or prevent errors in the extrapolation (eg solving the agent tiling problem).
- Solve all or most of the problem ahead of time (eg traditional FAI approach by specifying the correct values).
- Make sure you don't get too far from the starting point (eg reduced impact AI, tool AI, models as definitions).
- Figure out the properties of a nasty convergence, and try to avoid them (eg some of the ideas I mentioned in "crude measures", general precautions that are done when defining the convergence process).
Harper’s Fishing Nets: a review of Plato’s Camera by Paul Churchland
Paul Churchland published Plato’s Camera to defend the thesis that abstract objects and properties are both real and natural, consisting in learned mental representations of the timeless, abstract features of the mind’s environment. He holds that the brain learns, without supervision, high-dimensional maps of objective feature domains – which he calls Domain-Portrayal Semantics. He further elaborates that homomorphisms between these high-dimensional maps allow the brain to occasionally repurpose a higher-quality map to understand a completely different domain, reducing the latter to the former. He finally adds a Map-Portrayal Semantics of language to his Domain-Portrayal Semantics of thought by considering the linguistic, cultural, educational dimensions of human learning.
Surely the title of this review already sounds like some terrible joke is about to be perpetrated, but in fact it merely indicates a philosophical difference between myself and Paul Churchland. Churchland wrote Plato’s Camera not merely to explain a view on philosophy of mind to laypeople and other philosophers, but with the specific goal of defending Platonism about abstract, universal properties and objects (such as those used in mathematics) by naturalizing it. The contrast between such naturalist philosophers as Churchland, Dennett, Flanagan, and Railton and non-naturalist or weakly naturalist philosophy lies precisely in this fact: the latter consider many abstract or intuitive concepts to necessarily form their own part of reality, amenable strictly to philosophical investigation, while the former seek and demand a consilience of causal explanation for what’s going on in our lives. The results are a breath of fresh air to read.
A great benefit of reading strongly naturalistic philosophy and philosophers is that, in the course of researching a philosophical position, they tend to absorb so much scientific material that they can’t help but achieve a degree of insight and accuracy in their core thesis – even when getting almost all the details wrong! So it is with Plato’s Camera: reading in 2015 a book published in 2012, that mostly does not cite any scientific research from the past five to ten years, the details can’t help but seem somewhat dated and unrealistic, at least to those of us who’ve been doing our own reading in related scientific literature (or possibly just have partisan opinions). And yet, Plato’s Camera captures and supports a core thesis, this being more or less:
- The brain contains or embodies (high-dimensional) maps of objective domains, and by Hebbian updating over time, the map comes to resemble the territory, be it conceptual (as with mono-directional neural networks) or causal (as with recurrent networks). This is Churchland’s Domain-Portrayal Semantics theory of thought, and Churchland calls the learning process behind it First-Level Learning.
- Homomorphisms between these (high-dimensional) maps, albeit imperfect ones, allow the brain to notice when one objective domain is reducible to another, and thus deploy its existing conceptual knowledge in new ways. Churchland calls this process Second-Level Learning, and it further bolsters the organism’s ability to navigate reality (as well as implementing reductionism at the heart of Churchland’s epistemology). In a rather more insightful point for a reader to take away from Churchland’s book, this reduction does not invalidate the old map, but in fact supports its veracity, the accuracy with which the map portrays its territory, in the subdomain where the old map works at all. Churchland thus argues for an “optimistic meta-induction”, by which he means that in a Pragmatic Empiricist sense, our past, present, and future scientific knowledge is and will be reliable knowledge about the world, to the extent it agrees with data, even in the absence of a Grand Unified Theory of All Reality.
- While the senses allow nonhuman animals to index their maps (a “You Are Here!” marker is how Churchland describes it), language allows humans to deliberately and artificially index each other’s maps, thus allowing us to create long-lived cultural and institutional traditions of knowledge that accumulate over time rather than dying with individuals. Progress thus extends beyond the span of an individual lifetime. This Third-Level Learning allows Churchland to add an implicit Map Portrayal Semantics theory of language to his Domain-Portrayal theory of thought, although I do not recall him naming that implicit theory as such.
It is these core theses which I regard as largely correct, even where their supporting details are based on old research or the wrong research in the view of the present reviewer. I even believe that had Churchland investigated my favorite school of computational cognitive science as deeply, it would have reinforced his thesis and given him enough material for two books instead of just one. In fact, my disagreements with Churchland can be summed up quite succinctly:
- I believe, and will supply citations for the belief, that probabilistic representations play a larger role in human cognition than Plato’s Camera allows, despite making little appearance in the book. In particular, I find Churchland’s defense of Hebbian learning for encoding causal knowledge in recursive deep neural-nets somewhat unconvincing, preferring instead the presentation of .
- I find Churchland’s thesis that recursive, many-layered learning allows animals (not only humans) to map abstract features of their environment incredibly insightful, but disagree that this can correctly be called Platonism. Platonism concerns itself with abstract universals (and Churchland says it does). I feel that recursive, many-layered learning allows organisms to map the abstract features of their local environment, while making no guarantees regarding the universal applicability of maps learned from finite information about local territory.
- Platonism is also often about specific objects (such as those of mathematics or ethics) that are claimed to abstractly exist. This notion brings in the important spectrum in cognitive science between feature-governed concepts and causal role-governed concepts. “Electron”, for instance, is actually a theory-laden concept defined chiefly by the causal role(s) involved – but we usually think of electrons as “not very Platonic” while metric spaces are “more Platonic” and Categorical Imperatives are “very extremely Platonic”. I feel that while the mind may posit objects which model certain feature-spaces and fill certain causal roles very elegantly, if those objects are not available, even counterfactually, to multiple modalities from which to sample feature data, I can’t help but suspect they might not really “exist” in a mind-independent sense. This probably sounds like quite a nitpick, but immense portions of the things dreamt-of in human philosophies depend on one’s position on this question. (In fact, confusing a causal role with an object or substance lies at the heart of many superstitions.) On the other hand, we should consider it an open question whether or not “Platonic” abstractions form a necessary component of resource-rational cognition.
- I feel that imperfect (implied to be linear) homomorphism between maps doesn’t work very well as a theory of Second-Level Learning, as any real computational system capable of representing the entire physical world would have to be Turing-complete. Since the representation language would be Turing-complete, the total extensional equivalence of any two models would necessarily be undecidable. And this undecidability arises long before the creature begins to think in the kinds of self-referential terms for which undecidability theorems have been made famous! Dealing with this issue in a sane way remains a major open research problem for anyone proposing to theorize on the workings of the mind.
And yet, for all that these may sound substantial, they are the sum total of my objections. Churchland has otherwise written an excellent book that gets its point across well, and whose many moments of snark against non-naturalistic philosophies of mind, especially the linguaformal “Hilbert proof system theory of mind”, are actually enjoyable (at least, to one who enjoys snark).
In fact, in addition to just describing Churchland’s work, I will spend some of my review noting where other work bolsters it, particularly from the rational analysis (and resource-rational) school of cognitive science. This school of thought aims to understand the mind by first assuming that the mind is posed particular, constrained problems by its environment, then positing how these problems can be optimally solved, and then comparing the resulting theoretical solutions with experimental data. The mind is thus understood as an approximately boundedly rational engine of inference, forced by its environment to deal with shortages of sample data and computational power in the most efficient way possible, but ultimately trying to perform well-defined tasks such as predicting environmental stimuli or planning rewarding actions for the embodied organism to take.
Why “Harper’s Fishing Nets”, then? Well, because treating abstract universals as computational objects learned by generalizing over many domains seems more along the lines of Robert Harper’s “computational trinitarianism” than true Platonism, and because the noisy, always-incomplete process of recursive learning seems more like a succession of fishing nets, with their ropes spaced differently to catch specific species of fish, than like a camera that takes a single, complete picture. All learning algorithms aim to capture the structural information in their input samples while ignoring the noise, but the difference is, of course, undecidable. Recursive pattern recognition - the unsupervised recognition of patterns in already-transformed feature representations - may thus be applicable for capturing additional levels of structural information, especially where causal learning prevents collapsing all levels of hierarchy into a single function. Or, as Churchland himself puts it:
Since these thousands of spaces or ‘maps’ are all connected to one another by billions of axonal projections and trillions of synaptic junctions, such specific locational information within one map can and does provoke subsequent pointlike activations in a sequence of downstream representational spaces, and ultimately in one or more motor-representation spaces, whose unfolding activations are projected onto the body’s muscle systems, thereby to generate cognitively informed behaviors.
Churchland is especially to be congratulated for approaching cognition as a capability that must have evolved in gradual steps, and coming up with a theory that allows for nonhuman animals to have great cognitive abilities in First-Level Learning, even if not in Second and Third.
Choice quotes from the Introductory section:
- The whole point of the synapse-adjusting learning process discussed above was to make the behavior of neurons that are progressively higher in the information-processing hierarchy profoundly and systematically dependent on the activities of the neurons below them.
- [Trained neural networks represent] a space that has a robust and built-in probability metric against which to measure the likelihood, or unlikelihood, of the objective feature represented by that position’s ever being instantiated.
- Indeed, the “justified-true-belief” approach is misconceived from the outset, since it attempts to make concepts that are appropriate only at the level of cultural or language-based learning do the job of characterizing cognitive achievements that lie predominantly at the sublinguistic level.
It is no understatement to say that First-Level Learning forms the shining star of Churchland’s book. It is the process by which the brain forms and updates increasingly accurate maps of conceptual and causal reality, a deeply Pragmatic process shared with nonhuman animals and taking place largely below conscious awareness. In machine-learning terms, First-Level Learning consists mainly of classification and regression problems: classifying hierarchies of regions of compacted metric spaces to form concepts using feedforward neural learning, and regressing trajectories through state-spaces to form causal understanding using recurrent neural networks. One full chapter each is spent on the former and the latter subjects.
He begins in his first chapter on First-Level Learning with a basic introduction to many-layered feedforward neural networks, their training via supervised backpropagation of errors, and their usage for classification of feature-based concepts. He talks about the nonlinear activation functions, like sign and sigmoid, necessary to allow feedforward networks to approximate arbitrary total functions. He gives examples of face-recognition neural networks, which will probably be old-hat for any student of machine learning, but are extremely necessary for laypeople and philosophers untrained in computational approaches to modelling perception. Churchland is also careful to specify that these are not the neural networks of the real human mind, but instead specific examples of what can be done with neural networks. Finally, Churchland begins defending his thesis about Platonism when talking about an artificial neural network designed to classify colors:
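For readers who want the mechanics made concrete, here is a minimal sketch (mine, not from the book) of exactly the kind of network this chapter describes: a feedforward net with sigmoid activations, trained by supervised backpropagation of errors on XOR, the textbook task that no purely linear network can solve.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: linearly inseparable, but solvable with one nonlinear hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backpropagation of errors
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

h = sigmoid(X @ W1 + b1)
preds = (sigmoid(h @ W2 + b2) > 0.5).astype(int).ravel()
```

The nonlinear activation is doing all the work here: remove the sigmoid on the hidden layer and the network collapses into a single linear map, which cannot represent XOR at all.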
[W]e can here point to the first advantage of the information-compression effected by the Hurvich network: it gives us grip on any object’s objective color that is independent of the current background level of illumination.
Or put simply, the kinds of abstract, higher-level features learned by multi-layer neural networks serve to represent certain objective facts about the environment, with each successively lower layer of the network filtering out some perceptual noise and capturing some important structural information.
Churchland also elaborates, in several places, on the compaction of metric-space produced by the nonlinear transformations encoded in neural networks. Neural networks don’t spread their training data uniformly in the output space (or in any of the spaces formed by the intermediate layers of the network)! In fact, they tend to push their training points into highly compacted prototype regions in their output spaces, and when later activated they will try to “divert” any given vector into one of those compacted regions, depending on how well it resembles them in the first place. Since all neural networks receive and produce vectors, and vector spaces are metric spaces, Churchland notes that these neural-network concepts innately and necessarily carry distance metrics for gauging the similarities or differences between any two sensory feature-vectors (or, Churchland implies, real-world objects represented by abstract feature vectors). Churchland even notes, in a rare mention of probability in his book, that these compactions into distinct prototype regions for classes or clusters of training data can even be taken as a sort of emerging set of probability density functions over the training data:
The regions of the two prototypical hot spots represent the highest probability of activation; the region of stretched lines in between them represents a very low probability; and the empty regions outside the deformed grid are not acknowledged as possibilities at all—no activity at the input layer will produce [such] a second-rung activation pattern[.]
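A toy illustration (mine, not Churchland's) of this compaction and the similarity metric it carries: if we stand in for a trained network with a function that "diverts" any input part-way toward its nearest prototype hot spot, within-category distances shrink relative to across-category distances.

```python
import numpy as np

# Hypothetical prototype "hot spots" standing in for the compacted
# regions a trained network carves out (toy vectors, not from the book).
prototypes = np.array([[0.9, 0.1], [0.1, 0.9]])

def divert(v, strength=0.8):
    """Pull an activation vector part-way toward its nearest prototype,
    mimicking the compaction a trained network applies to novel inputs."""
    v = np.asarray(v, dtype=float)
    nearest = prototypes[np.argmin(np.linalg.norm(prototypes - v, axis=1))]
    return v + strength * (nearest - v)

a, b = divert([0.8, 0.2]), divert([0.7, 0.3])   # two within-category inputs
c = divert([0.2, 0.8])                          # one across-category input
within, across = np.linalg.norm(a - b), np.linalg.norm(a - c)
# Raw input-space distances, before compaction, for comparison:
raw_within = np.linalg.norm(np.array([0.8, 0.2]) - np.array([0.7, 0.3]))
raw_across = np.linalg.norm(np.array([0.8, 0.2]) - np.array([0.2, 0.8]))
```

The within/across distance ratio after diversion is far smaller than the ratio over the unprocessed inputs - the same tilt as the "category effects" in perceptual judgment discussed in the book.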
Churchland himself deploys the vector-completion effect in feedforward networks as an example of primitive abductive reasoning:
Accordingly, it is at least tempting to see, in this charming capacity for relevant vector completion, the first and most basic instances of what philosophers have called “inference-to-the-best-explanation,” and have tried, with only limited success, to explicate in linguistic or propositional terms.
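The classic runnable illustration of vector completion is a Hopfield-style recurrent network - a different architecture from Churchland's feedforward examples, but the standard textbook demonstration: Hebbian outer-product weights store patterns, and a corrupted cue settles back into the nearest stored pattern.

```python
import numpy as np

# Two stored binary (+1/-1) patterns; Hebbian outer-product weights.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def complete(cue, steps=10):
    """Iteratively settle a noisy/partial cue into a stored pattern."""
    v = np.array(cue, dtype=float)
    for _ in range(steps):
        v = np.sign(W @ v)
        v[v == 0] = 1
    return v.astype(int)

# Corrupt two entries of the first pattern; the net "fills in" the rest.
cue = patterns[0].copy()
cue[0], cue[4] = -cue[0], -cue[4]
recalled = complete(cue)
```

The settled state recovers the full stored pattern from the degraded cue - the "relevant vector completion" that Churchland reads as proto-inference-to-the-best-explanation.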
Churchland deploys his metaphor of concepts as maps of feature-spaces again and again to great effect; I only wish he had made a greater effort to talk of his rarely-mentioned “deformed grids” as topographical maps, measures over the training data, and of the nonlinear transformations taking vectors from their input spaces into topologically-measured maps as flows or rivers. I cannot tell if he took seriously the notion of neural-network training as learning topographical maps of the non-input spaces, or of those topographies as measures in the sense of probability theory. Certainly, the physical metaphor of a river’s flow provides a good intuition pump for describing how a well-trained neural network carves out paths from where drops of rain fall to where they ought to go, by whatever criterion trains the network. Certainly, he seems to be thinking something along these lines when he uses metric-space compaction to examine category effects:
This gives us, incidentally, a plausible explanation for so-called ‘category effects’ in perceptual judgments. This is the tendency of normal humans, and of creatures generally, to make similarity judgments that group any two within-category items as being much more similar (to each other) than any other two items, one of which is inside and one of which is outside the familiar category. Humans display this tilt even when, by any “objective measure” over the unprocessed sensory inputs, the similarity measures across the two pairs are the same.
Looking at neural-network training data as measurable would also help us think about how mere perception generates “sensorily simple” random variables, representing qualitative measurements of the world that correspond to the world, which would then be of use according to probabilistic theories of cognition. Certainly, a number of cognitive scientists and neuroscientists have been researching neural mechanisms for representing probabilities[13, 19]. A number of these even provide exactly the kind of approximate Bayesian inference one would require when working with open-world models that can have countably infinitely many separate random variables, an important component of working with Turing-complete modelling domains. One paper even proposes that the neural implementation and learning of probability density/mass functions can explain certain deviations of human judgements from the probabilistic optimum. Again: Churchland’s book, published in 2012 and sent to press with little mention of probability, still clearly prefigured neural encodings of probability, which have turned out to be a productive research effort. This is a testament to how well Churchland has generalized from what previous neuroscientific research he did have!
Of course, Churchland himself decries any notion of sensorily simple variables:
This story thus assumes that our sensations also admit of a determinate decomposition into an antecedent alphabet of ‘simples,’ simples that correspond, finally, to an antecedent alphabet of ‘simple’ properties in the environment.
Churchland would also have done quite well to cover the Blessing of Abstraction and hierarchical modelling (first mentioned in ) for their unique effect: they allow training data to be shared across tasks and categories, and thus ameliorate the Curse of Dimensionality. They are how real embodied minds compress their sensory features so as to reduce the necessary sample-complexity of learning to the absolute minimum: sometimes even one single example. I personally hypothesize that the same effect is at work in hierarchical Bayesian modelling as in the recent fad for “deep” learning in artificial neural networks, which learn hierarchies of features: breadth in the lower layers of the model/network provides large amounts of information to quickly train the higher, “abstract” layer of the model/network, which then provides a strong inductive bias to the lower layers. He does mention something like this, however:
[A]s the original sensory input vector pursues its transformative journey up the processing ladder, successively more background information gets tapped from the synaptic matrices driving each successive rung.
This certainly gives an insight into why deep neural networks with sparse later layers work so well: sample information is aggregated in the top layers and then backpropagated to lower layers.
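The Blessing of Abstraction is easy to sketch numerically. In this toy empirical-Bayes example (all numbers hypothetical), a group-level prior learned from many familiar categories lets a single observation of a brand-new category yield a sensible estimate - one-shot learning via the abstract layer.

```python
import statistics

# Observed means of many familiar categories (hypothetical data): the
# abstract layer learns that category means cluster around a common value.
familiar_means = [9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 9.7, 10.0]
mu0 = statistics.mean(familiar_means)        # group-level prior mean
tau2 = statistics.variance(familiar_means)   # across-category spread
sigma2 = 4.0                                 # assumed within-category noise

def one_shot_estimate(x):
    """Posterior mean for a brand-new category seen exactly once:
    shrink the single observation toward the learned abstract prior."""
    w = tau2 / (tau2 + sigma2)   # how much to trust the one sample
    return w * x + (1 - w) * mu0
```

Because the abstract layer has learned that categories vary little, even a wildly atypical single sample (say, 16.0) is pulled back near the group mean - the sample-complexity reduction the Blessing of Abstraction names.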
This brings us right back to the Platonism for which Churchland is trying to argue. As usual, we wish to operate under the “game rules” of a very strong naturalism, in which Platonic entities are surely not allowed to be any kind of ontologically “spooky” stuff. After all, we don’t observe any spooky processes interfering in ordinary physical and computational causality to generate thoughts about Platonic Forms or mathematical structures. Instead, we observe embodied, resource-bounded creatures generalizing from data, even if Churchland is a pure connectionist while I favor a probabilistic language of thought. What sort of Platonism would help us explain what goes on in real minds? I think a productive avenue is to view Platonic abstractions as concepts (necessarily compositional concepts of the kind Churchland doesn’t address much, but which are now sometimes described as stochastic functions) which optimally compress a given type of experiential data. We could thus propose Platonic realism about abstract concepts which any reasoner must necessarily develop as they approach the limit of increasing sample data and computational power, and simultaneously Platonic antirealism about abstract concepts which tend to disappear as reasoners gain more information and compute further.
This will probably sound somewhat overwrought and unnecessary to theorists from backgrounds in algorithmic information theory and artificial intelligence. What need does the optimally intelligent “agent”, AIXI, have for Platonic concepts of anything? It just updates a distribution over all possible causal structures and uses it to make predictions. The key is that AIXI evaluates K(x), the Kolmogorov complexity of each possible Turing-machine program. This function allows a Solomonoff Inducer to perfectly separate the random information in its sensory data from the structural information, yielding an optimal distribution over representations that contain nothing but causal structure. But K(x) is incomputable: AIXI can update optimally on sensory information only by falling back on its infinite computing power. Such a reasoner, it seems, has no need to compose or decompose causal structures, no need for concepts; but for everyone else, hierarchical representations compress data very efficiently. They also map well onto probabilistic modelling. This trade-off between the decomposability of a representation and the degree of compression it achieves will have to play a part in a more complete theory of abstract objects as optimally compressed stochastic functions.
Here, though, is a reason for learned representations to be “white-box”, open to introspection and decomposition into smaller concepts: counterfactual-causal reasoning involves zeroing in on a particular random variable in a model and cutting its links to its causal parents. Only white-box representations allow this “graph surgery”; only open-box representations are friendly to causal reasoning about independent, composable concepts rather than whole possible-worlds.
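A minimal sketch of what graph surgery buys (toy probabilities of my own, not Churchland's): in a confounded model U → X, U → Y, X → Y, merely conditioning on X lets evidence flow back to the confounder U, while do(X) cuts the U → X link and leaves U at its prior.

```python
# A tiny confounded model (all probabilities hypothetical):
# confounder U -> X, U -> Y, and X -> Y.
p_u = {0: 0.5, 1: 0.5}
p_x_given_u = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}   # [u][x]
p_y1_given_xu = {(0, 0): 0.1, (0, 1): 0.5,
                 (1, 0): 0.5, (1, 1): 0.9}                  # key: (x, u)

def p_y1_given_x_observed(x):
    """Observational P(Y=1 | X=x): conditioning lets evidence flow to U."""
    num = sum(p_u[u] * p_x_given_u[u][x] * p_y1_given_xu[(x, u)]
              for u in (0, 1))
    den = sum(p_u[u] * p_x_given_u[u][x] for u in (0, 1))
    return num / den

def p_y1_do_x(x):
    """Interventional P(Y=1 | do(X=x)): graph surgery cuts U -> X, so U
    keeps its prior distribution regardless of the value forced on X."""
    return sum(p_u[u] * p_y1_given_xu[(x, u)] for u in (0, 1))
```

The two quantities come apart (here, observing X=1 predicts Y=1 more strongly than setting X=1 does), and computing the interventional one required reaching inside the model and severing a specific edge - exactly the white-box operation a holistic possible-worlds representation cannot support.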
And Churchland does cover causal reasoning! Or at least, he covers reasoning and learning in sequence-prediction tasks, with an elaborate theory of First-Level Learning in recurrent neural networks. Whether this counts as causal reasoning or not depends on whether the reader considers causal reasoning to require modelling counterfactuals and doing graph-surgery to support interventions. Churchland begins by explaining exactly why an embodied organism should want to reason about temporal sequences:
Two complex interacting objects were each outfitted with roughly two dozen small lights attached to various critical parts thereof, and the room lights were then completely extinguished, leaving only the attached lights as effective visual stimuli for any observer. A single snapshot of the two objects, in the midst of their mobile interaction, presented a meaningless and undecipherable scatter of luminous dots to any naïve viewer. But if a movie film or video clip of those two invisible objects were presented, instead of the freeze-frame snapshot, most viewers could recognize, within less than a second and despite the spatial poverty of the visual stimuli described, that they were watching two humans ballroom-dancing in the pitch dark.
Churchland starts his chapter on temporal and causal learning thusly, noting that for an embodied animal, temporal reasoning provides not only an essential way to handle ecologically necessary tasks, but a dramatic improvement on the performance of moment-to-moment cognitive distinctions. Thus he theorizes that creatures understand causal models as trajectories through metrically-sculpted activation spaces of recurrent neural networks, isomorphic to the execution traces of a computer program. In fact, he tells the reader, extending an animal’s reasoning in Time helps it to cut reality at the joints, so much so that temporal reasoning may have come first.
[I]t is at least worth considering the hypothesis that prototypical causal or nomological processes are what the brain grasps first and best. The creation and fine-tuning of useful trajectories in activation space may be the primary obligation of our most basic mechanisms of learning.
He further points out that, since the function of the autonomic nervous system has always been to regulate cyclical bodily processes, recurrent neural networks may actually be the norm in living animals, and could easily have evolved first for autonomic functions before being adapted to aid in temporal cognition. The brain, then, is conceived as a network-of-networks, capable of activating the recurrent evolution of its sub-networks whenever it needs to imagine how some temporal (or computational) process might proceed:
Our network, that is, is also capable of imaginative activity. It is capable of representational activity that is prompted not by a sensory encounter with an instance of the objective reality therein represented, but rather by ‘top-down’ stimulation of some kind from elsewhere within a larger encompassing network or ‘brain.’
Much of the material from the previous chapter on supervised learning, Hebbian unsupervised learning, and map metaphors is repeated and carried over in this chapter, the better to hammer it home.
Now the unfortunate negative. Churchland’s account of conceptual and causal First-Level Learning spends too little explanatory effort, for my tastes at least, on causal-role concepts in particular. Philosophy of mind has long recognized both feature-governed and role-governed notions of concepts, and the cognitive sciences have shown how general learning mechanisms can produce concepts governed by mixtures of sensory features and causal or relational roles. In fact, causal-role concepts appear to form a bedrock for uniquely human thought: humans and other highly intelligent, social animals learn concepts abstracted from their available feature data, of “what something does” rather than “how something looks”. This is how human thought gains its infinitely productive compositionality. In fact, we often utilize concepts grounded so thoroughly in causal role, and so little in feature data, that we forget they “look like” anything at all (more on that when we cover Second-Level Learning and naturalization)! Churchland explicitly mentions how we ought to be able to “index” our “maps” via multiple input modalities, thus enabling us to use concepts abstracted from any one way of obtaining or producing feature data:
Choose any family of familiar observational terms, for whichever sensory modality you like (the family of terms for temperature, for example), and ask what happens to the semantic content of those terms, for their users, if every user has the relevant sensory modality suddenly and permanently disabled. The correct answer is that the relevant family of terms, which used to be at least partly observational for those users, has now become a family of purely theoretical terms. But those terms can still play, and surely will play, much the same descriptive, predictive, explanatory, and manipulative roles that they have always played in the conceptual commerce of those users.
He just doesn’t say how the brain does so.
He also gives a theory for identifying maps with each other, which is to find a homomorphism taking the contents of one map into the contents of the other:
[T]hey do indeed embody the same portrayal, then, for some superposition of respective map elements, the across-map distances, between any distinct map element in (a) and its ‘nearest distinct map element’ in (b), will fall collectively to zero for some rotation of map (b) around some appropriate superposition point.
This works just fine for his given example of two-dimensional highway maps, but (at least we have solid reason to think) cannot work when the maps themselves come to express a Turing-complete mode of computation, as in recurrent neural networks. The equality of lambda expressions in general is undecidable, after all; the only open question is whether we can determine equality in some useful, though algorithmically random, subset of cases (as is common in theoretical computer science), or whether we can find some sort of approximate equality-by-degrees that works well enough for creatures with limited information.
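To make the superposition test concrete, here is a minimal numpy sketch of the nearest-element criterion quoted above, for the easy two-dimensional case. The representation of “maps” as point sets, the brute-force grid of rotations, and the near-zero threshold are my own assumptions, not Churchland’s:

```python
import numpy as np

def map_match_cost(a, b, n_angles=360):
    """Churchland's criterion for 2-D maps: centre both point sets, then
    search over rotations of b for the superposition minimising the summed
    distance from each element of a to its nearest element of b."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    best = np.inf
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        rb = b @ np.array([[c, -s], [s, c]])
        # distance from each point of a to its nearest point of rotated b
        d = np.linalg.norm(a[:, None, :] - rb[None, :, :], axis=2).min(axis=1)
        best = min(best, d.sum())
    return best  # falls to ~0 iff some rotation superposes the two maps

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
rotated = square @ np.array([[0.0, -1.0], [1.0, 0.0]])  # same square, turned 90 degrees
print(map_match_cost(square, rotated))  # ≈ 0: the two maps embody the same portrayal
```

As noted above, nothing like this brute-force search survives once the “maps” encode Turing-complete computations; the sketch only shows how tractable the two-dimensional case is by contrast.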
The “map” metaphor also elides the fact that computation, in neural networks, takes place at the synapses, not in the neurons. The actual work is done by the nonlinear transformations of vectors between layers of neurons.
Churchland also fails to elaborate on the differences between training neural networks via backpropagation of errors and training them via Hebbian update rules. This is important: as far as I can determine, backpropagation of errors suffices to train a neural network to approximate any circuit (or even any computable partial function if we deal with recurrent networks), while even the most general form of unsupervised Hebbian learning seems to learn the directions of variation within a set of feature vectors, rather than general total or partial recursive functions over the input data.
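The contrast can be made concrete with a toy example of what Hebbian learning does learn. Below, a normalised Hebbian rule (one standard stabilisation of pure Hebbian growth) recovers the leading direction of variation of a synthetic dataset; the data, learning rate, and random seeds are all my own choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Feature vectors whose dominant direction of variation is oblique
x = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.8], [0.0, 0.4]])

# Normalised Hebbian rule: weight growth along correlated input, kept
# bounded by explicit renormalisation. It converges to the top principal
# component -- a direction of variation, not an arbitrary function.
w = rng.normal(size=2)
for xi in x:
    y = w @ xi                 # post-synaptic activity
    w += 0.01 * y * xi         # Hebbian strengthening
    w /= np.linalg.norm(w)     # keep the weight vector on the unit circle

# Compare with the leading eigenvector of the sample covariance
vals, vecs = np.linalg.eigh(np.cov(x.T))
pc1 = vecs[:, np.argmax(vals)]
print(abs(w @ pc1))  # close to 1: Hebbian learning found the principal direction
```

Backpropagation, given labels and enough hidden units, could instead fit an essentially arbitrary function of these inputs; that asymmetry is the point made above.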
Churchland on free will:
Freedom, on this view, is ultimately a matter of knowledge—knowledge sufficient to see at least some distance into one’s possible futures at any given time, and knowledge of how to behave so as to realize, or to enhance the likelihood of, some of those alternatives at the expense of the others.
He extends the matter up to whole societies:
And as humanity’s capacity for anticipating and shaping our economic, medical, industrial, and ecological futures slowly expands, so does our collective freedom as a society. That capacity, note well, resides in our collective scientific knowledge and in our well-informed legislative and executive institutions.
On unsupervised learning without a preestablished system of propositions (as is used in most current Bayesian methods), in defense of connectionism:
What is perhaps most important about this kind of learning process, beyond its being biologically realistic right down to the synaptic level of physiological activity, is that it does not require a conceptual framework already in place, a framework fit for expressing propositions, some of which serve as hypotheses about the world, and some of which serve as evidence for or against those hypotheses.
Second-Level Learning: Reductionism, Hierarchies of Theories, Naturalization, and the Progress of the Sciences
If Churchland’s material on First-Level Learning seems, in some ways, like so much outmoded hype about neural networks, his material on Second-Level Learning remains sufficient justification to read his book. Second-Level Learning, the process by which the mind notices that it can repurpose its available conceptual “maps”, and thus comes to form an increasingly unified and coherent picture of the world, is where Churchland hits his (as ever, understated) stride. In addressing Second-Level Learning, Churchland covers the well-worn philosophy-of-science progression of physics from Aristotelian intuitive theories up through Newton and then, eventually, Einstein. This is also where he begins to talk about normatively rational reasoning:
Both history and current experience show that humans are all too ready to interpret puzzling aspects of the world in various fabulous or irrelevant terms, and then rationalize away their subsequent predictive/manipulative failures, if those failures are noticed at all.
Second-Level Learning is described as just turning old ideas to new uses. The brain more-or-less randomly notices the partial homomorphism of two conceptual “maps” (again: high-dimensional vector spaces with metric compaction based on Hebbian learning in neural networks) and repurposes (and re-trains) the more accurate, detailed, and general “map” (call it the larger map) to predict and describe the phenomena once encompassed by the less accurate, less detailed, and less general “map” (call it the smaller one). Viewed in the larger historical context Churchland gives it, however, Second-Level Learning is the methodology of scientific thought as we have come to understand it. Churchland gives solid reason to hypothesize that by means of Second-Level Learning, human beings and humankind have come to understand our world.
In larger terms, Second-Level Learning consists of naturalizing concepts in terms of other concepts, forming hierarchies of theories.
Our knowledge begins as a vast, disconnected, disparate mish-mash of independent concepts and theories, none of which makes sense in terms of the others, and which leaves us no recourse to any universal terms of explanation. Worse, our intuitive theories are often so disconnected that we may have only one modality of causal access to the objective reality behind any particular concept, perhaps even one so utterly unreliable as subjective introspection.
As we proceed to assemble interlocking hierarchies of theories, however, the increased connectedness of our theories allows us to spread the training information derived from experience and experiment throughout, letting us use the feature-modality behind one concept to inquire about the objective reality behind a seemingly different concept. By judicious application of Second-Level Learning, we develop an increasingly coherent, predictive, unified body of knowledge about the objective reality in which we find ourselves. We also become able to dissolve concepts that no longer make sense by showing what explains their training experiences, and sometimes come to be rationally obligated to reject concepts and theories that just no longer fit our experiences. Consilience can thus be seen as the key to truth, overcoming the exclaimed cries - “But thou must!” - of intuition or apparently-logical argumentation.
This is where Churchland feels a definite need to argue with other major philosophers of science, particularly Karl Popper’s falsificationism (still a staple of many methodology and philosophy-of-science lessons given to grad students everywhere):
Popper’s story of the proper relation between science and experience was also too simple-minded. Formally speaking, we can always conjure up an ‘auxiliary premise’ that will put any lunatic metaphysical hypothesis into the required logical contact with a possible refuting observation statement. …
The supposedly possible refutation of a scientific hypothesis “H” at the hands of “if H then not-O” and “O” can be only as certain as one’s confidence in the truth of “O.”… Unfortunately, given the theory-laden character of all concepts, and the contextual contingencies surrounding all perceptions, no observation statement is ever known with certainty, a point Popper himself acknowledges. So no hypothesis, even a legitimately scientific one, can be refuted with certainty – not ever. One might shrug one’s shoulders and acquiesce in this welcome consequence, resting content with the requirement that possible observations can at least contradict a genuinely scientific hypothesis, if not refute it with certainty.
Heavy and contentious words already, but well in line with the basic facts about learning and inference discovered by the pioneers of statistical learning theory: as long as one’s theory remains fully deterministic and one’s reasoning fully deductive, one must place absolute faith in experience (which, as experience itself tells us, is unreliable) and can meaningfully eliminate hypotheses only slowly, if ever. Abductive inference, not deductive, forms the core of real-world scientific reasoning, and one is reminded of Broad’s calling inductive reasoning “the glory of Science” and yet “the scandal of Philosophy”. Having adopted abduction of inferred models, subject to revision, we can now justify those inferences much better than we could when philosophers talked of inductive reasoning about the certain truth or falsity of propositions. Churchland continues into territory even surer to arouse controversy, among the public if not among professional scientists or philosophers:
But this [revision to Popper given above] won’t draw the required distinction either, even if we let go of the requirement of decisive refutability for generalized hypotheses. The problem is that presumptive metaphysics can also creep into our habits of perceptual judgment, as when an unquestioning devout sincerely avers, “I feel God’s disapproval” or “I see God’s happiness,” when the rest of us would simply say, “I feel guilty” or “I see a glorious sunset.” This possibility is not just a philosopher’s a priori complaint: millions of religious people reflexively approach the perceivable world with precisely the sorts of metaphysical concepts just cited.
Throughout this latter portion of the book, Churchland takes numerous other shots at superstition, religion, model-theoretic philosophical theories of semantics, non-natural normativity, and various other forms of belief in the spooky and weird (whatever joke I may appear to be making here is paraphrased straight from Churchland’s own views). Regarding the last item on the list in particular, Churchland does indeed take an explicit stand in favor of naturalizing normative rationality via Second-Level Learning:
Since we cannot derive an “ought” from an “is,” continues the objection, any descriptive account of the de facto operations of a brain must be strictly irrelevant to the question of how our representational states can be justified, and to the question of how a rational brain ought to conduct its cognitive affairs. … An immediate riposte points out that our normative convictions in any domain always have systematic factual presuppositions about the nature of that domain. … A second riposte points out that a deeper descriptive appreciation of how the cognitive machinery of a normal or typical brain actually functions, so as to represent the world, is likely to give us a much deeper insight into the manifold ways in which it can occasionally fail to function to our representational advantage, and a deeper insight into what optimal functioning might amount to.
This objection to the “is-ought gap” should be happily received by cognitive scientists everywhere: it is certainly impossible to prove that an algorithm solves a given problem optimally, or even approximately, when we do not know what the problem is. What certain schools of thinking about rationality tend to fail to appreciate is that, particularly when dealing with highly constrained problems of abductive reasoning, we also cannot prove that a certain algorithm is very bad (in failing to approximate or approach an optimal solution, even in the limit of increasing resources) without knowing what the problem to be solved actually is.
Churchland backs up these ideas with a cogent analogy:
Imagine now a possible eighteenth century complaint, raised just as microbiology and biochemistry were getting started, that such descriptive scientific undertakings were strictly speaking a waste of our time, at least where normative matters such as Health are concerned, a complaint based on the ‘principle’ that “you can’t derive an ought from an is.” … Our subsequent appreciation of the various viral and bacteriological origins of the pantheon of diseases that plague us, of the operations of the immune system, and of the endless sorts of degenerative conditions that undermine our normal metabolic functions, gave us an unprecedented insight into the underlying nature of Health and its many manipulable dimensions. Our normative wisdom increased a thousand-fold, and not just concerning means-to-ends, but concerning the identity and nature of the ’ultimate’ ends themselves.
… The nature of Rationality, in sum, is something we humans have only just begun to penetrate, and the cognitive neurosciences are sure to play a central role in advancing our normative as well as our descriptive understanding, just as in the prior case of Health.
How, then, does Second-Level Learning proceed in the actual, physical brain?
Here the issue is whether the acquired structure of one of our maps mirrors in some way (that is, whether it is homomorphic with) some substructure of the second map under consideration. Is the first map, perhaps, simply a more familiar and more parochial version of a smallish part of the larger and more encompassing second map?
Churchland has, earlier in the book, already proposed an algorithm for inferring the degree to which two maps seem to portray the same domain, and he is deploying it here to explain how the brain can perform inter-theoretic reductions. The only problem, to my eyes, is that as stated above, this algorithm proposes to solve an undecidable problem when we begin to deal with the Turing-complete hypothesis-space represented by recurrent neural networks (and considering finite recurrent networks as learning deterministic finite-state automata just reduces our problem from undecidable to EXPTIME-complete).
On the question of how we arrive at intertheoretic reductions, Churchland opines that they occur more-or-less randomly, or at least unpredictably:
Most importantly, such singular events are flatly unpredictable, being the expression of the occasionally turbulent transitions, from one stable regime to another, of a highly nonlinear dynamical system: the brain.
Thanks to later work, we know that Churchland erred at least somewhat on this point, but that doesn’t make Churchland’s view of intertheoretic reductions irredeemable. Quite to the contrary, later work has ridden to the rescue of Churchland’s Second-Level Learning, presenting us with a map of the landscape of scientific hierarchies. The statistical nature of this map of maps is worth quoting directly for its elegance:
Recent studies of nonlinear, multiparameter models drawn from disparate areas in science have shown that predictions from these models largely depend only on a few ’stiff’ combinations of parameters [6, 8, 9]. This recurring characteristic (termed ’sloppiness’) appears to be an inherent property of these models and may be a manifestation of an underlying universality. Indeed, many of the practical and philosophical implications of sloppiness are identical to those of the renormalization group (RG) and continuum limit methods of statistical physics: models show weak dependence of macroscopic observables (defined at long length and time scales) on microscopic details. They thus have a smaller effective model dimensionality than their microscopic parameter space.
The objective reality we confront on a daily basis not only can be modelled at multiple levels of abstraction, but in order to utilize our experiential data as efficiently as possible, we must model it at multiple levels of abstraction. Macroscopic models explain more of the variation in observable data with fewer parameters, while microscopic models successfully explain a larger portion of the total available data by including even the “sloppier” parameters. How large is the trade-off between these models, in terms of necessary data and generalization power? Extremely large:
Eigenvalues [of the Fisher Information Matrix] are normalized to unit stiffest value; only the first 10 decades are shown. This means that inferring the parameter combination whose eigenvalue is smallest shown would require ~10^10 times more data than the stiffest parameter combination. Conversely, this means that the least important parameter combination is ~10^10 times less important for understanding system behavior.
The amounts of variation explained by expanding combinations of parameters are distributed exponentially: the plurality of variation can usually be captured with very few parameters (as with intuitive theories that are “fuzzy” even on the mesoscopic scale), the majority with relatively few parameters (as with macroscopically accurate models that ignore microscopic reality), and the whole only by recourse to increasingly many parameters (as in microscopic models). Note that this exponential distribution of variance explanation adds weight to the Platonism of optimal compressions advocated above, and to Churchland’s Platonism: in order to make efficient use of available experiential data to explain variance and predict well in varying environments, we must form certain abstract concepts, and we must form them into hierarchies (or, to borrow a term from mathematical logic, entailment preorders of probabilistic conditioning). An embodied mind most likely cannot feasibly function in real-time without modelling what Churchland calls “the timeless landscape of abstract universals that collectively structure the universe” (even if one doesn’t accord those abstracts any vaunted metaphysical status).
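The “sloppiness” the quoted passage describes can be reproduced in a few lines. The model below, a sum of exponential decays, is a standard example in the sloppy-models literature; the particular rates, time points, and noise assumption are my own choices for illustration:

```python
import numpy as np

# A classically 'sloppy' model: predictions y(t) = sum_k exp(-theta_k * t),
# observed at 50 time points.
theta = np.array([0.5, 1.0, 2.0, 4.0])
t = np.linspace(0.1, 5.0, 50)

# Jacobian of the predictions with respect to the parameters:
# d y(t_i) / d theta_k = -t_i * exp(-theta_k * t_i)
J = -t[:, None] * np.exp(-np.outer(t, theta))

# Fisher Information Matrix (unit observation noise). Its eigenvalues
# measure how 'stiff' or 'sloppy' each parameter combination is.
fim = J.T @ J
eigs = np.sort(np.linalg.eigvalsh(fim))[::-1]
print(eigs / eigs[0])  # successive eigenvalues fall off by decades
```

Even with only four parameters, the eigenvalue spectrum spans several decades: most of the predictive behavior is controlled by one or two stiff parameter combinations, exactly the pattern quoted above.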
What, then, can we call an intertheoretic reduction, on a modelling level? The perfect answer would be: a deterministic, continuous function from the high-dimensional parameter space of a microscopic model (which has a simple deterministic component but vast uncertainty about parameters) to the low-dimensional parameter space of a macroscopic model (which makes less precise, more stochastic predictions, but allows for more certainty about parameters). In a rare few cases, we can even construct such a function: consider temperature as the average kinetic energy, thus derived from the average velocity, of a body of particles. Even though we cannot feasibly obtain the sample data to know the individual velocity of tens of millions of particles in a jar of air, our microscopic model tells us that averaging those tens of millions of parameters will give us the single macroscopic parameter we call temperature, which is as directly observable as anything via a simple thermometer (whose usage is just another model for the human scientist to learn and employ). Churchland even gives us an example of how these connections between theories aid a nonhuman creature in its everyday cognition:
Who would have felt that the local speed of sound was something which could be felt? But it can, and quite accurately, too. Of what earthly use might that be? Well, suppose you are a bat, for example. The echo-return time of a probing squeak, to which bats have accurate access, gives you the exact distance to an edible target moth, if you have running access to the local speed of sound.
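The deterministic temperature reduction described above can be run directly as a simulation. The particle count, molecular mass, and target temperature below are arbitrary choices of mine:

```python
import numpy as np

# Microscopic model: a million particle velocities drawn from the
# Maxwell-Boltzmann distribution. Macroscopic reduction: temperature is
# fixed by the mean kinetic energy, (3/2) k_B T = <(1/2) m v^2>.
k_B = 1.380649e-23      # Boltzmann constant, J/K
m = 4.65e-26            # kg, roughly one nitrogen molecule
T_true = 300.0          # K, the single macroscopic parameter

rng = np.random.default_rng(1)
sigma = np.sqrt(k_B * T_true / m)              # per-axis velocity spread
v = rng.normal(0.0, sigma, size=(1_000_000, 3))

mean_ke = 0.5 * m * (v ** 2).sum(axis=1).mean()
T_recovered = 2.0 * mean_ke / (3.0 * k_B)
print(T_recovered)  # ≈ 300 K: millions of microscopic parameters collapse to one
```

Three million microscopic velocity components collapse, through a single averaging function, into the one macroscopic parameter a thermometer reads.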
Usually, intertheoretic reductions are more probabilistic than this, though. Newton generalized his Laws of Motion and calculated the motion of the planets under his laws of gravitation for himself, rather than possessing a function that would construct Kepler’s equations from his. This looks more like evaluating a likelihood function and selecting as his “microscopic” theory the one which gave a higher likelihood to the available data while having a larger support set, as in probabilistic interpretations of scientific reasoning.
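This likelihood-based mode of reduction can be sketched abstractly: a more general theory is preferred because it assigns higher likelihood to the available data, not because the narrower theory can be derived from it in closed form. The toy models (a line versus a quadratic) and the noise level below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 200)
# 'Reality' has curvature the narrower (linear) theory cannot express
y = 1.0 + 0.5 * x + 0.8 * x ** 2 + rng.normal(0.0, 0.1, x.size)

def gaussian_loglik(y, y_hat):
    """Maximised Gaussian log-likelihood of the residuals."""
    sigma2 = np.mean((y - y_hat) ** 2)
    return -0.5 * y.size * (np.log(2 * np.pi * sigma2) + 1.0)

linear = np.polyval(np.polyfit(x, y, 1), x)
quadratic = np.polyval(np.polyfit(x, y, 2), x)
print(gaussian_loglik(y, linear), gaussian_loglik(y, quadratic))
# the broader theory wins on likelihood
```

The quadratic model has a larger support set of expressible curves, and the data decide between the two by likelihood, much as the probabilistic reading of the Newton/Kepler episode suggests.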
We face a substantial difficulty in employing hierarchies of theories to explain the natural world around us: our meso-scale observable variables are very distantly abstracted from the microscopic phenomena that, under our best scientific theories, form the foundations of reality. On the one hand, this is reassuring: our microscopic theories require huge numbers of free parameters precisely because they reduce large, complex things to aggregations of smaller, simpler things. Since we need many small things to make a large thing, we should find that thinking of the large thing in terms of its constituent small things requires huge amounts of information. However, this also implies that our descriptions of fundamental reality are far more theory-laden than our descriptions of our everyday surroundings. We suffer from a polarization in which humanly intuitive theories and theories of the fundamentals of reality come to occupy the opposite sides of our hierarchy. Thus:
The process presents itself as a meaningless historical meander, without compass or convergence. Except, it would seem, in the domain of the natural sciences.
We might call it a symptom of that very polarization that human beings require strict intellectual training to successfully think in a naturalistic, scientific way – Churchland has really switched to philosophy of science instead of mind in this part of the book. Our intuitive theories tend to explain most of the variance visible in our observables, but nonetheless don’t predict all that well. As a result, we tend to just intuitively accept that we can’t entirely understand the world. In fact, modern science has obtained more success from trying to find additional observables that will let us get accurate data about the (usually) less influential, smaller-scale structure and parameters of reality. As Churchland describes it:
Such experimental indexings can also be evaluated for their consistency with distinct experimental indexings made within distinct but partially overlapping interpretive practices, as when one uses distinct measuring instruments and distinct but overlapping conceptual maps to try to ‘get at’ one and the same phenomenon.
“Naturalization” of concepts thus turns out to come in two kinds of inference rather than one. “Upwards” naturalizations, let us say, string a connection from more microscopic theories to more macroscopic concepts. “Downwards” naturalizations, the traditional mode of intertheoretic reduction, connect existing macroscopic concepts and theories to more microscopic theories, exploiting the thoroughness and simplicity of the microscopic theory to provide a well-informed inductive bias to the more macroscopic theory. This inductive bias embodies what we learned, as we developed the microscopic theory, about all the observables we used to learn that theory. We can thus see that both kinds of naturalizations connect our concepts and theories to additional observable variables, thus enabling quicker and more accurate inductive training.
In combination with causal-role concepts and theories thereof, this all comes back to Churchland’s defense of the thesis that abstract objects and properties are both real and natural. The greater the degree of unity we attain in our hierarchical forests of abstract concepts and theories, the more we can justify those abstractions by reference to their role in successful causal description of concrete observations, rather than by abstracted argumentation. The more we naturalize our concepts, the more we feel licensed by Indispensability Arguments to call them real abstract universals (or at least, real abstract generalities of the neighborhood of reality we happen to live in), despite their being mere inferred theories bound ultimately to empirical data.
Certain naive forms of scientific realism would say that we are thus, through our scientific progress, coming to understand reality on a single, supreme, fundamental level. Churchland disagrees, and I concur with his disagreement.
That our sundry overlapping maps frequently enjoy adjustments that bring them into increasing conformity with one another (even as their predictive accuracy continues to increase) need not mean that there is some Ur map toward which all are tending.
To the contrary, a single Ur-map would be an extremely high-dimensional model, would require an extremely large amount of data to train, and would carry an extraordinarily large chance of overfitting after we had trained it. Entailment preorders of maps compress and represent experiential data far more efficiently than a single Ur-map, even if we know there exists a single underlying objective reality. In fact, we might often possess multiple maps of similar, or even identical, objective domains:
Two maps can differ substantially from each other, and yet still be, both of them, highly accurate maps of the same objective reality. For they may be focusing on distinct aspects or orthogonal dimensions of that shared reality. Reality, after all, is spectacularly complex, and it is asking too much of any given map that it capture all of reality (see, e.g., Giere 2006).
Churchland emphasizes that the final emphasis must be on empiricism and (sometimes counterfactual) observability:
What is important, for any map to be taken seriously as a representation of reality, is that somehow or other, however indirectly, it is possible to index it. …So long as every aspect of reality is somehow in causal interaction with the rest of reality, then every aspect of reality is, in principle, at least accessible to critical cognitive activity. Nothing guarantees that we will succeed in getting a grip on any given aspect. But nothing precludes it, either.
Churchland is, of course, reciting the naturalist creed by stating that “every aspect of reality is somehow in causal interaction with the rest of reality” (or at least, it was in its past or will be in its future). This is a bullet both he and I can gladly bite, however. I can also add that since Second-Level Learning enables us to cohere our concepts into vast, inter-related preorders over time, it also enables us to gain increasing certainty about which conceptual maps refer to real abstract objects (optimal generalizations of properties of other maps), real concrete objects (which participate directly in causality), and apparent objects actually derived from erroneous inferences. As we learn more and integrate our concepts, real concrete and abstract objects come to be tied together, whereas unreal concrete objects (like superstitions) or abstract objects (like false philosophical frameworks) come to be increasingly isolated in our framework of maps of the world. A more integrated, naturalistic explanation for the experiential phenomena which originally gave birth to a model of unreal concrete or abstract objects can, if we allow ourselves to admit it into our worldview, clear up the experiential confusion and clear away the “zombie concepts”.
In the third, and shortest, major part of the book, we finally arrive at the domain of learning and thought in which we deal exclusively with human beings communicating via language. Churchland opens the chapter almost apologetically:
The reader will have noticed, in all of the preceding chapters, a firm skepticism concerning the role or significance of linguaformal structures in the business of both learning and deploying a conceptual framework. This skepticism goes back almost four decades …. In the intervening period, my skepticism on this point has only expanded and deepened, as the developments - positive and negative - in cognitive psychology, classical AI, and the several neurosciences gave both empirical and theoretical substance to those skeptical worries. As I saw these developments, they signaled the need to jettison the traditional epistemological playing field of accepted or rejected sentences, and the dynamics of logical or probabilistic inference that typically went with it.
Unfortunately, this statement appears to ignore the close links between probabilistic inference and the entire rest of statistical learning theory, including the neural networks that form the foundation for Churchland’s theory of cognition in the First-Level Learning chapters. Alas.
Still, Churchland’s skepticism regarding the “language of thought” hypothesis makes a great deal of intuitive sense. It takes thorough study to distinguish the formal systems (sets of axioms demonstrated to have a model) of the foundations of mathematics from the formal languages (notations for computations) of the science of computing, although Douglas Hofstadter did write the world’s premier “pop comp-sci” text on exactly that matter. Furthermore, any given spoken or written sentence, in formal or informal language, contains fairly little communicable information relative to the size of an entire mental model of a relevant domain, as Churchland has spotted:
We must doubt this [sentential] perspective, indeed, we must reject it, because theories themselves are not sets of sentences at all. Sentences are just the comparatively low-dimensional public stand-ins that allow us to make rough mutual coordinations of our endlessly idiosyncratic conceptual frameworks or theories, so that we can more effectively apply them and evaluate them.
Unlike in much of analytic philosophy, the science of computing takes programs and programming languages to simply be different ways of writing down calculations, to the point that the field of denotational semantics for programming remains small relative to the study of proving which computations the program carries out. A hypothesis regarding neurocomputation which can explain how learning and commonsense reasoning take place would apply, via the Church-Turing Thesis, to neural nets as well as Turing machines.
Third-Level Learning is perhaps a misnomer, since as far as I know, it does not actually come third in any particular causal or historical ordering. After all, humans communicated ideas, and thus carried out Third-Level Learning, long before we ever engaged seriously in reductionist science, and if standardized test scores show anything at all, they surely show that our societies have invented sophisticated systems devoted to ensuring that existing ideas are passed down to children as-is. In fact, the educational system often performs quite reliably, in the sense that the children consistently pass their exams, even if we all ritually lament the failure to pass down the true understanding and clarity once achieved by discoverers, inventors, and teachers. Such true understanding, Churchland would say, involves a high-dimensional conceptual map sculpted by large amounts of experiential data. Perhaps we indeed ought to pessimistically expect that such high-dimensional understanding cannot be passed down accurately, even though teaching is a well-developed science (albeit, one prone to fads whose occasional serious results are also often ignored in favor of “how it’s always been done” or “the strong students will survive”). After all, as Churchland says:
[W]e have no public access to those raw sensory activation patterns [which sculpted our conceptual frameworks], as such.
Third-Level Learning, then, consists in using a Map-Portrayal Semantics for language (and other forms of human communication) to pass down maps that, according to the Domain Portrayal Semantics Churchland posits, accurately portray some piece of local reality. It may come before or after Second-Level Learning in our history, but it surely occurs. By means of evocative and descriptive language, human beings can index each other’s maps and even, through carefully chosen series of evocations, describe their conceptual maps to each other. Although other vocalizing species - such as wolves, nonhuman great apes, and some marine mammals - display the former ability to signal to each other with sound, humans are unique in having the latter ability: to systematically educate each other, passing on whole conceptual frameworks from their original discoverers to vast social peer-groups. By this means, human intellectual life surpasses the individual human:
While the collective cognitive process steadily loses some of its participants to old age, and adds fresh participants in the form of newborns, the collective process itself is now effectively immortal. It faces no inevitable termination.
One might think that little can be said about education by someone other than a professional expert on education, but Churchland does have an important point to make in describing Third-Level Learning: it is a form of learning, not a form of something other than learning. In particular, he explicitly criticizes the “memetic” theory of cultural “evolution”, for attempting to ground culture in Darwinist principles without making any reference to such obvious participants in culture as the mind and brain:
The dynamical parallels between a virus-type and a theory-type are pretty thin. …Dawkins’ story, though novel and agreeably naturalistic, once again attempts, like so many other accounts before it, to characterize the essential nature of the scientific enterprise without making any reference to the unique kinematics and dynamics of the brain.
Similarly, no account of science or rationality that confines itself to social-level mechanisms alone will ever get to the heart of that matter. For that, the microstructure of the brain and the nature of its microactivities are also uniquely essential.
Churchland also notes that reasoning can work, even when individual reasoners don’t quite understand how or why they reason, as in the case of scientists with too little knowledge of methodology:
For the scientists themselves may indeed be confabulating their explanations within a methodological framework that positively misrepresents the real causal factors and the real dynamics of their own cognitive behaviors.
In fact, he even demands that we account for the Third-Level Learning and reasoning of others in such “unclean” fields as politics:
For better or for worse, the moral convictions of those agents will play a major role in determining their voting behavior. To be sure, one may be deeply skeptical of the moral convictions of the citizens, or the senators, involved. Indeed, one may reject those convictions entirely, on the grounds that they presuppose some irrational religion, for example. But it would be foolish to make a policy of systematically ignoring those assembled moral convictions (even if they are dubious), if one wants to understand the voting behavior of the individuals involved.
Churchland also notes how successful Third-Level Learning ultimately requires engaging, sometimes, in successful Second-Level Learning, attributed to Kuhnian “paradigm shifts”:
As we have seen, Kuhn describes such periods of turmoil as ‘crisis science,’ and he explains in some illustrative detail how the normal pursuit of scientific inquiry is rather well-designed to produce such crises, sooner or later. I am compelled to agree with his portrayal, for, on the present account, ‘normal science,’ as discussed at length by Kuhn, just is the process of trying to navigate new territory under the guidance of an existing map, however rough, and of trying to articulate its often vague outlines and to fill in its missing details as the exploration proceeds.
He then ends the book on a positive note:
All told, the metabolisms of humans are wrapped in the benign embrace of an interlocking system of mechanisms that help to sustain, regulate, and amplify their (one hopes) healthy activities, just as the cognitive organs of humans are wrapped in the benign embrace of an interlocking system of mechanisms that help to sustain, regulate, and amplify their (one hopes) rational activities.
Unfortunately, I do feel that this upbeat ending opens Churchland to a substantive criticism, namely: he has failed to address anything outside the sciences. Since most actually existing humans are neither scientists nor science hobbyists, one would think that a book about the brain would bother to address the vast domains of human life outside the halls of academic science, lest one be reminded of Professor Smith in Piled Higher and Deeper justifying the professorial career pyramid just by making everything outside academic science sound scary.
I suppose that Churchland’s own career and position as a philosopher of mind and science led him to write as chiefly addressing domains he thoroughly understands, but I, at least, think his core thesis draws strength from its potential applications outside those domains. If Churchland, and much other literature, can explain a naturalistic theory of how the brain comes to understand abstract, immaterial objects and properties in such domains as science and mathematics, then why not in, say, aesthetics, ethics, or the emotional life? Among the first abstract properties posited at the beginnings of any human culture are beauty and goodness, among the first abstract objects, the soul. It may sound suddenly religious to speak of the soul when talking about science and statistical modelling, but eliminativism on these “soulful” objects and properties has always stood as the largest bullet for naturalists to bite. Having a constructive-naturalist theory to apply to “soulful” subjects of inquiry could turn the bitter bullet into a harmless sugar pill.
Churchland also spent an entire book talking about the brain without ever once mentioning subjective consciousness/experience, for reasons of, I suspect, the same sort of greedy eliminativism.
However, that might just mean I need to put Churchland's earlier works - like Matter and Consciousness and The Engine of Reason, the Seat of the Soul - as well as Patricia Churchland's Braintrust on my reading list to see what they have to say on such subjects.
We're pleased to announce the release of "Smarter Than Us: The Rise of Machine Intelligence", commissioned by MIRI and written by Oxford University's Stuart Armstrong, and available in EPUB, MOBI, and PDF formats, and from the Amazon and Apple ebook stores.
What happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? This new book navigates these questions with clarity and wit.
Can we instruct AIs to steer the future as we desire? What goals should we program into them? It turns out this question is difficult to answer! Philosophers have tried for thousands of years to define an ideal world, but there remains no consensus. The prospect of goal-driven, smarter-than-human AI gives moral philosophy a new urgency. The future could be filled with joy, art, compassion, and beings living worthwhile and wonderful lives—but only if we’re able to precisely define what a “good” world is, and skilled enough to describe it perfectly to a computer program.
AIs, like computers, will do what we say—which is not necessarily what we mean. Such precision requires encoding the entire system of human values for an AI: explaining them to a mind that is alien to us, defining every ambiguous term, clarifying every edge case. Moreover, our values are fragile: in some cases, if we mis-define a single piece of the puzzle—say, consciousness—we end up with roughly 0% of the value we intended to reap, instead of 99% of the value.
Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge?
Special thanks to all those at the FHI, MIRI and Less Wrong who helped with this work, and those who voted on the name!
Related: Cached Thoughts
Last summer I was talking to my sister about something. I don't remember the details, but I invoked the concept of "truth", or "reality" or some such. She immediately spit out a cached reply along the lines of "But how can you really say what's true?".
Of course I'd learned some great replies to that sort of question right here on LW, so I did my best to sort her out, but everything I said invoked more confused slogans and cached thoughts. I realized the battle was lost. Worse, I realized she'd stopped thinking. Later, I realized I'd stopped thinking too.
I went away and formulated the concept of a "Philosophical Landmine".
I used to occasionally remark that if you care about what happens, you should think about what will happen as a result of possible actions. This is basically a slam dunk in everyday practical rationality, except that I would sometimes describe it as "consequentialism".
The predictable consequence of this sort of statement is that someone starts going off about hospitals and terrorists and organs and moral philosophy and consent and rights and so on. This may be controversial, but I would say that causing this tangent constitutes a failure to communicate the point. Instead of prompting someone to think, I invoked some irrelevant philosophical cruft. The discussion is now about Consequentialism, the Capitalized Moral Theory, instead of the simple idea of thinking through consequences as an everyday heuristic.
It's not even that my statement relied on a misused term or something; it's that an unimportant choice of terminology dragged the whole conversation in an irrelevant and useless direction.
That is, "consequentialism" was a Philosophical Landmine.
In the course of normal conversation, you passed through an ordinary spot that happened to conceal the dangerous leftovers of past memetic wars. As a result, an intelligent and reasonable human was reduced to a mindless zombie chanting prerecorded slogans. If you're lucky, that's all. If not, you start chanting counter-slogans and the whole thing goes supercritical.
It's usually not so bad, and no one is literally "chanting slogans". There may even be some original phrasings involved. But the conversation has been derailed.
So how do these "philosophical landmine" things work?
It looks like when a lot has been said on a confusing topic, usually something in philosophy, there is a large complex of slogans and counter-slogans installed as cached thoughts around it. Certain words or concepts will trigger these cached thoughts, and any attempt to mitigate the damage will trigger more of them. Of course they will also trigger cached thoughts in other people, which in turn... The result being that the conversation rapidly diverges from the original point to some useless yet heavily discussed attractor.
Notice that whether a particular concept will cause trouble depends on the person as well as the concept. Notice further that this implies that the probability of hitting a landmine scales with the number of people involved and the topic-breadth of the conversation.
Anyone who hangs out on 4chan can confirm that this is the approximate shape of most thread derailments.
Most concepts in philosophy and metaphysics are landmines for many people. The phenomenon also occurs in politics and other tribal/ideological disputes. The ones I'm particularly interested in are the ones in philosophy, but it might be useful to divorce the concept of "conceptual landmines" from philosophy in particular.
Here are some common ones in philosophy:
Landmines in a topic make it really hard to discuss ideas or do work in these fields, because chances are, someone is going to step on one, and then there will be a big noisy mess that interferes with the rather delicate business of thinking carefully about confusing ideas.
My purpose in bringing this up is mostly to precipitate some terminology and a concept around this phenomenon, so that we can talk about it and refer to it. It is important for concepts to have verbal handles, you see.
That said, I'll finish with a few words about what we can do about it. There are two major forks of the anti-landmine strategy: avoidance, and damage control.
Avoiding landmines is your job. If it is a predictable consequence that something you could say will put people in mindless slogan-playback-mode, don't say it. If something you say makes people go off on a spiral of bad philosophy, don't get annoyed with them, just fix what you say. This is just being a communications consequentialist. Figure out which concepts are landmines for which people, and step around them, or use alternate terminology with fewer problematic connotations.
If it happens, which it does, as far as I can tell, my only effective damage control strategy is to abort the conversation. I'll probably think that I can take those stupid ideas here and now, but that's just the landmine trying to go supercritical. Just say no. Of course letting on that you think you've stepped on a landmine is probably incredibly rude; keep it to yourself. Subtly change the subject or rephrase your original point without the problematic concepts or something.
A third prong could be playing "philosophical bomb squad", which means permanently defusing landmines by supplying satisfactory nonconfusing explanations of things without causing too many explosions in the process. Needless to say, this is quite hard. I think we do a pretty good job of it here at LW, but for topics and people not yet defused, avoid and abort.
ADDENDUM: Since I didn't make it very obvious, it's worth noting that this happens with rationalists, too, even on this very forum. It is your responsibility not to contain landmines as well as not to step on them. But you're already trying to do that, so I don't emphasize it as much as not stepping on them.
One of the few things that I really appreciate having encountered during my study of philosophy is the Gettier problem. Paper after paper has been published on this subject, starting with Gettier's original "Is Justified True Belief Knowledge?" In brief, Gettier argues that knowledge cannot be defined as "justified true belief", because there are cases in which a person holds a justified true belief, but the belief is true for reasons unrelated to its justification.
For instance, Gettier cites the example of two men, Smith and Jones, who are applying for a job. Smith believes that Jones will get the job, because the president of the company told him that Jones would be hired. He also believes that Jones has ten coins in his pocket, because he counted the coins in Jones's pocket ten minutes ago (Gettier does not explain this behavior). Thus, he forms the belief "the person who will get the job has ten coins in his pocket."
Unbeknownst to Smith, though, he himself will get the job, and further he himself has ten coins in his pocket that he was not aware of-- perhaps he put someone else's jacket on by mistake. As a result, Smith's belief that "the person who will get the job has ten coins in his pocket" was correct, but only by luck.
While I don't find the primary purpose of Gettier's argument particularly interesting or meaningful (much less the debate it spawned), I do think Gettier's paper does a very good job of illustrating the situation that I refer to as "being right for the wrong reasons." This situation has important implications for prediction-making and hence for the art of rationality as a whole.
Simply put, a prediction that is right for the wrong reasons isn't actually right from an epistemic perspective.
If I predict, for instance, that I will win a 15-touch fencing bout, implicitly believing this will occur when I strike my opponent 15 times before he strikes me 15 times, and I in fact lose fourteen touches in a row, only to win by forfeit when my opponent intentionally strikes me many times in the final touch and is disqualified for brutality, my prediction cannot be said to have been accurate.
Where this gets more complicated is with predictions that are right for the wrong reasons, but the right reasons still apply. Imagine the previous example of a fencing bout, except this time I score 14 touches in a row and then win by forfeit when my opponent flings his mask across the hall in frustration and is disqualified for an offense against sportsmanship. Technically, my prediction is again right for the wrong reasons-- my victory was not thanks to scoring 15 touches, but thanks to my opponent's poor sportsmanship and subsequent disqualification. However, I likely would have scored 15 touches given the opportunity.
In cases like this, it may seem appealing to credit my prediction as successful, as it would have been successful under normal conditions. However, we have to resist this impulse and instead simply work on making more precise predictions. If we start crediting predictions that are right for the wrong reasons, even if it seems like the "spirit" of the prediction is right, we open the door to relying on intuition and falling into the traps that contaminate much of modern philosophy.
What we really need to do in such cases seems to be to break down our claims into more specific predictions, splitting them into multiple sub-predictions if necessary. My prediction about the outcome of the fencing bout could better be expressed as multiple predictions, for instance "I will score more points than my opponent" and "I will win the bout." Some may notice that this is similar to the implicit justification being made in the original prediction. This is fitting-- drawing out such implicit details is key to making accurate predictions. In fact, this example itself was improved by tabooing "better" in the vague initial sentence "I will fence better than my opponent."
In order to make better predictions, we must cast out those predictions that are right for the wrong reasons. While it may be tempting to award such efforts partial credit, this flies against the spirit of the truth. The true skill of cartography requires forming both accurate and reproducible maps; lucking into accuracy may be nice, but it speaks ill of the reproducibility of your methods.
I strongly suggest that you make tabooing a five-second skill, and better still, that you learn to recognize when you need to apply it to your own processes. It pays great dividends in terms of precise thought.
Part of the sequence: Rationality and Philosophy
Hitherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong, so that few people with exact minds have taken up the subject.
I've complained before that philosophy is a diseased discipline which spends far too much of its time debating definitions, ignoring relevant scientific results, and endlessly re-interpreting old dead guys who didn't know the slightest bit of 20th century science. Is that still the case?
You bet. There's some good philosophy out there, but much of it is bad enough to make CMU philosopher Clark Glymour suggest that on tight university budgets, philosophy departments could be defunded unless their work is useful to (cited by) scientists and engineers — just as his own work on causal Bayes nets is now widely used in artificial intelligence and other fields.
How did philosophy get this way? Russell's hypothesis is not too shabby. Check the syllabi of the undergraduate "intro to philosophy" classes at the top 5 U.S. philosophy departments — NYU, Rutgers, Princeton, Michigan Ann Arbor, and Harvard — and you'll find that they spend a lot of time with (1) old dead guys who were wrong about almost everything because they knew nothing of modern logic, probability theory, or science, and with (2) 20th century philosophers who were way too enamored with cogsci-ignorant armchair philosophy. (I say more about the reasons for philosophy's degenerate state here.)
As the CEO of a philosophy/math/compsci research institute, I think many philosophical problems are important. But the field of philosophy doesn't seem to be very good at answering them. What can we do?
Why, come up with better philosophical methods, of course!
Part of the sequence: Rationality and Philosophy
Philosophy is notable for the extent to which disagreements with respect to even those most basic questions persist among its most able practitioners, despite the fact that the arguments thought relevant to the disputed questions are typically well-known to all parties to the dispute.
The goal of philosophy is to uncover certain truths... [But] philosophy continually leads experts with the highest degree of epistemic virtue, doing the very best they can, to accept a wide array of incompatible doctrines. Therefore, philosophy is an unreliable instrument for finding truth. A person who enters the field is highly unlikely to arrive at true answers to philosophical questions.
After millennia of debate, philosophers remain heavily divided on many core issues. According to the largest-ever survey of philosophers, they're split 25-24-18 on deontology / consequentialism / virtue ethics, 35-27 on empiricism vs. rationalism, and 57-27 on physicalism vs. non-physicalism.
Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1
Why are physicists, biologists, and psychologists more prone to reach consensus than philosophers?2 One standard story is that "the method of science is to amass such an enormous mountain of evidence that... scientists cannot ignore it." Hence, religionists might still argue that Earth is flat or that evolutionary theory and the Big Bang theory are "lies from the pit of hell," and philosophers might still be divided about whether somebody can make a moral judgment they aren't themselves motivated by, but scientists have reached consensus about such things.
Part of the sequence: Rationality and Philosophy
Consider these two versions of the famous trolley problem:
Stranger: A train, its brakes failed, is rushing toward five people. The only way to save the five people is to throw the switch sitting next to you, which will turn the train onto a side track, thereby preventing it from killing the five people. However, there is a stranger standing on the side track with his back turned, and if you proceed to throw the switch, the five people will be saved, but the person on the side track will be killed.
Child: A train, its brakes failed, is rushing toward five people. The only way to save the five people is to throw the switch sitting next to you, which will turn the train onto a side track, thereby preventing it from killing the five people. However, there is a 12-year-old boy standing on the side track with his back turned, and if you proceed to throw the switch, the five people will be saved, but the boy on the side track will be killed.
Here it is: a standard-form philosophical thought experiment. In standard analytic philosophy, the next step is to engage in conceptual analysis — a process in which we use our intuitions as evidence for one theory over another. For example, if your intuitions say that it is "morally right" to throw the switch in both cases above, then these intuitions may be counted as evidence for consequentialism, for moral realism, for agent neutrality, and so on.
Alexander (2012) explains:
Philosophical intuitions play an important role in contemporary philosophy. Philosophical intuitions provide data to be explained by our philosophical theories [and] evidence that may be adduced in arguments for their truth... In this way, the role... of intuitional evidence in philosophy is similar to the role... of perceptual evidence in science...
Is knowledge simply justified true belief? Is a belief justified just in case it is caused by a reliable cognitive mechanism? Does a name refer to whatever object uniquely or best satisfies the description associated with it? Is a person morally responsible for an action only if she could have acted otherwise? Is an action morally right just in case it provides the greatest benefit for the greatest number of people all else being equal? When confronted with these kinds of questions, philosophers often appeal to philosophical intuitions about real or imagined cases...
...there is widespread agreement about the role that [intuitions] play in contemporary philosophical practice... We advance philosophical theories on the basis of their ability to explain our philosophical intuitions, and appeal to them as evidence that those theories are true...
In particular, notice that philosophers do not appeal to their intuitions as merely an exercise in autobiography. Philosophers are not merely trying to map the contours of their own idiosyncratic concepts. That could be interesting, but it wouldn't be worth decades of publicly-funded philosophical research. Instead, philosophers appeal to their intuitions as evidence for what is true in general about a concept, or true about the world.
Part of the sequence: Rationality and Philosophy
In my last post, I showed that the brain does not encode concepts in terms of necessary and sufficient conditions. So, any philosophical practice which assumes this — as much of 20th century conceptual analysis seems to do — is misguided.
Next, I want to show that human abstract thought is pervaded by metaphor, and that this has implications for how we think about the nature of philosophical questions and philosophical answers. As Lakoff & Johnson (1999) write:
If we are going to ask philosophical questions, we have to remember that we are human... The fact that abstract thought is mostly metaphorical means that answers to philosophical questions have always been, and always will be, mostly metaphorical. In itself, that is neither good nor bad. It is simply a fact about the capacities of the human mind. But it has major consequences for every aspect of philosophy. Metaphorical thought is the principal tool that makes philosophical insight possible, and that constrains the forms that philosophy can take.
To understand how fundamental metaphor is to our thinking, we must remember that human cognition is embodied:
We have inherited from the Western philosophical tradition a theory of faculty psychology, in which we have a "faculty" of reason that is separate from and independent of what we do with our bodies. In particular, reason is seen as independent of perception and bodily movement...
The evidence from cognitive science shows that classical faculty psychology is wrong. There is no such fully autonomous faculty of reason separate from and independent of bodily capacities such as perception and movement. The evidence supports, instead, an evolutionary view, in which reason uses and grows out of such bodily capacities.
Consider, for example, the fact that as neural beings we must categorize things:
We are neural beings. Our brains each have 100 billion neurons and 100 trillion synaptic connections. It is common in the brain for information to be passed from one dense ensemble of neurons to another via a relatively sparse set of connections. Whenever this happens, the pattern of activation distributed over the first set of neurons is too great to be represented in a one-to-one manner in the sparse set of connections. Therefore, the sparse set of connections necessarily groups together certain input patterns in mapping them across to the output ensemble. Whenever a neural ensemble provides the same output with different inputs, there is neural categorization.
To take a concrete example, each human eye has 100 million light-sensing cells, but only about 1 million fibers leading to the brain. Each incoming image must therefore be reduced in complexity by a factor of 100. That is, information in each fiber constitutes a "categorization" of the information from about 100 cells.
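The optic-nerve arithmetic above can be sketched in code. This is an illustrative toy, not anything from Lakoff & Johnson or Churchland: the "sparse readout" function and its threshold are my own assumptions, chosen only to show how a bottleneck forces distinct input patterns into the same output category.

```python
import numpy as np

def sparse_readout(pattern):
    """Collapse a 100-unit activation pattern through a single sparse
    connection into one coarse output bin (a hypothetical readout)."""
    return int(pattern.mean() > 0.5)

rng = np.random.default_rng(0)

a = np.full(100, 0.9)                 # one raw activation pattern
b = rng.random(100) * 0.4 + 0.6       # a quite different pattern, mean ~0.8

# The patterns differ in detail, yet the bottleneck maps both to the
# same output -- "neural categorization" in Lakoff & Johnson's sense.
assert not np.allclose(a, b)
print(sparse_readout(a), sparse_readout(b))
```

Any readout whose outputs are fewer than its possible inputs must behave this way; the pigeonhole principle guarantees that some distinct patterns collapse together.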
Moreover, almost all our categorizations are determined by the unconscious associative mind — outside our control and even our awareness — as we interact with the world. As Lakoff & Johnson note, "Even when we think we are deliberately forming new categories, our unconscious categories enter into our choice of possible conscious categories."
One of the core aims of the philosophy of probability is to explain the relationship between frequency and probability. The frequentist proposes identity as the relationship. This use of identity is highly dubious. We know how to check for identity between numbers, or even how to check for the weaker copula relation between particular objects; but how would we test the identity of frequency and probability? It is not immediately obvious that there is some simple value out there which is modeled by probability, like position and mass are values that are modeled by Newton's Principia. You can actually check if density * volume = mass, by taking separate measurements of mass, density and volume, but what would you measure to check a frequency against a probability?
There are certain appeals to frequentist philosophy: we would like to say that if a bag has 100 balls in it, only 1 of which is white, then the probability of drawing the white ball is 1/100, and that if we take a non-white ball out, the probability of drawing the white ball is now 1/99. Frequentism would make the philosophical justification of that inference trivial. But of course, anything a frequentist can do, a Bayesian can do (better). I mean that literally: it's the stronger magic.
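The bag inference above is just ratio arithmetic, which is the point: whatever probability *is*, the numbers fall out of counting. A minimal sketch of that arithmetic (the variable names are mine, not the post's):

```python
# 100 balls in the bag, exactly 1 of them white.
balls = 100
white = 1

p_first = white / balls                # probability of drawing white: 1/100
p_after_removal = white / (balls - 1)  # after removing one non-white ball: 1/99

print(p_first, p_after_removal)
```

The philosophical dispute is over why these ratios deserve the name "probability", not over the ratios themselves.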
A Subjective Bayesian, more or less, says that the reason frequencies are related to probabilities is because when you learn a frequency you thereby learn a fact about the world, and one must update one's degrees of belief on every available fact. The subjective Bayesian actually uses the copula in another strange way:
Probability is subjective degree of belief.
and subjective Bayesians also claim:
Probabilities are not in the world, they are in your mind.
These two statements are brilliantly championed in Probability is Subjectively Objective. But ultimately, the formalism which I would like to suggest denies both of these statements. Formalists do not ontologically commit themselves to probabilities, just as they do not say that numbers exist; hence we don't allocate probabilities in the mind or anywhere else; we only commit ourselves to number theory, and probability theory. Mathematical theories are simply repeatable processes which construct certain sequences of squiggles called "theorems", by changing the squiggles of other theorems, according to certain rules called "inferences". Inferences always take as input certain sequences of squiggles called premises, and output a sequence of squiggles called the conclusion. The only thing an inference ever does is add squiggles to a theorem, take away squiggles from a theorem, or both. It turns out that these squiggle sequences mixed with inferences can talk about almost anything, certainly any computable thing. The formalist does not need to ontologically commit to numbers to assert that "There is a prime greater than 10000.", even though "There is x such that" is a flat assertion of existence; because for the formalist "There is a prime greater than 10000." simply means that number theory contains a theorem which is interpreted as "there is a prime greater than 10000." When you say a mathematical fact in English, you are interpreting a theorem from a formal theory. If under your suggested interpretation, all of the theorems of the theory are true, then whatever system/mechanism your interpretation of the theory talks about, is said to be modeled by the theory.
So, what is the relation between frequency and probability proposed by formalism? Theorems of probability may be interpreted as true statements about frequencies when you assign certain squiggles certain words and claim the resulting natural language sentence. Or for short we can say: "Probability theory models frequency." It is trivial to show that Kolmogorov probability models frequency, since it also models fractions; it is an algebra after all. More interestingly, probability theory models rational distributions of subjective degrees of belief, and the optimal updating of degrees of belief given new information. This is somewhat harder to show; Dutch-book arguments do nicely to at least provide some intuitive understanding of the relation between degree of belief, betting, and probability, but there is still work to be done here. If Bayesian probability theory really does model rational belief, as many believe it does, then that is likely the most interesting thing we are ever going to be able to model with probability. But probability theory also models spatial measurement. Why not add the position that probability is volume to the debating lines of the philosophy of probability?
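The claim that relative frequencies satisfy the Kolmogorov axioms can be checked directly on a small made-up sample (the coin-flip data below is hypothetical, purely for illustration):

```python
# Sketch: relative frequencies over a finite sample satisfy the Kolmogorov
# axioms, which is one sense in which "probability theory models frequency".
from fractions import Fraction

outcomes = ["H", "T", "T", "H", "H", "T", "H", "H"]  # hypothetical data

def freq(event):
    """Relative frequency of an event, given as a set of outcome values."""
    hits = sum(1 for o in outcomes if o in event)
    return Fraction(hits, len(outcomes))

heads, tails = {"H"}, {"T"}
assert freq(heads) >= 0                                   # non-negativity
assert freq(heads | tails) == 1                           # normalisation
assert freq(heads | tails) == freq(heads) + freq(tails)   # additivity (disjoint events)
print(freq(heads))  # 5/8
```

The same three checks would pass for any finite sample, which is all "Kolmogorov models frequency" requires in this finite setting.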
Why are frequentism's and subjective Bayesianism's misuses of the copula not as obvious as volumeism's? Because what the Bayesian and the frequentist are really arguing about is statistical methodology; they have just disguised the argument as an argument about what probability is. Your interpretation of probability theory will determine how you model uncertainty, and hence determine your statistical methodology. Volumeism cannot handle uncertainty in any obvious way; the Bayesian and frequentist interpretations of probability theory, however, imply two radically different ways of handling uncertainty.
The easiest way to understand the philosophical dispute between the frequentist and the subjective Bayesian is to look at the classic biased coin:
A subjective Bayesian and a frequentist are at a bar, and the bartender (being rather bored) tells the two that he has a biased coin, and asks them: "What is the probability that the coin will come up heads on the first flip?" The frequentist says that for the coin to be biased means for it not to have a 50% chance of coming up heads, so all we know is that the probability is not equal to 50%. The Bayesian says that any evidence he has for the coin coming up heads is also evidence for it coming up tails, since he knows nothing about one outcome that he doesn't also know about its negation, and the only value which respects that symmetry is 50%.
I ask you: what is the difference between these two and the poor souls engaged in endless debate over realism about sound at the beginning of Making Beliefs Pay Rent?
If a tree falls in a forest and no one hears it, does it make a sound? One says, "Yes it does, for it makes vibrations in the air." Another says, "No it does not, for there is no auditory processing in any brain."
One is being asked: "Are there pressure waves in the air if we aren't around?" The other is being asked: "Are there auditory experiences if we are not around?" The problem is that "sound" is being used to stand for both "auditory experience" and "pressure waves through air". They are both giving the right answers to their respective questions, but they are failing to Replace the Symbol with the Substance: they are using one word with two different meanings in different places. In exactly the same way, "probability" is being used to stand for both "frequency of occurrence" and "rational degree of belief" in the dispute between the Bayesian and the frequentist. The correct answer to the question "If the coin is flipped an infinite number of times, how frequently would we expect it to land on heads?" is "All we know is that it wouldn't be 50%", because that is what it means for the coin to be biased. The correct answer to the question "What is the optimal degree of belief that we should assign to the first trial being heads?" is "Precisely 50%", because of the symmetrical evidential support the results get from our background information. How we should actually model the situation as statisticians depends on our goal. But remember that Bayesianism is the stronger magic, and the only contender for perfection in the competition.
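The Bayesian's "precisely 50%" answer can be sketched numerically: under any prior over the coin's bias that is symmetric about 1/2, the expected probability of heads on the first flip is exactly 1/2, even though the prior puts essentially all its weight on biases that differ from 1/2. The particular prior below (two mirrored lumps near 0.2 and 0.8) is an arbitrary choice of mine for illustration.

```python
# Sketch: a symmetric prior over the coin's bias gives degree of belief
# exactly 1/2 in heads on the first flip, even though nearly all prior
# weight sits on biases far from 1/2 (i.e. on "biased" coins).
import numpy as np

biases = np.linspace(0.0, 1.0, 1001)
prior = np.exp(-((biases - 0.2) ** 2) / 0.005)  # a lump of belief near p = 0.2 ...
prior = prior + prior[::-1]                     # ... mirrored to a lump near p = 0.8
prior /= prior.sum()                            # normalise to a distribution

p_heads = float(np.dot(prior, biases))          # E[p] under the prior
print(round(p_heads, 6))  # 0.5
```

By symmetry, E[p] = E[1 - p], so 2 E[p] = 1 and E[p] = 1/2: the "frequency" question and the "degree of belief" question really do have different correct answers.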
For us formalists, probabilities are not anywhere. Technically, we do not even believe in probability; we only believe in probability theory. The only coherent uses of "probability" in natural language are purely syncategorematic. We should be careful when we colloquially use "probability" as a noun or verb, and be clear about what we mean by this word play. Probability theory models many things, including degree of belief and frequency. Whatever we may learn about rationality, frequency, measure, or any of the other mechanisms that probability theory models, through the interpretation of probability theorems, we learn because probability theory is isomorphic to those mechanisms. When you use the copula like the frequentist or the subjective Bayesian, it is hard to notice that probability theory modeling both frequency and degree of belief is not a contradiction. If we use "is" instead of "models", it is clear that frequency is not degree of belief; so if probability is belief, then it is not frequency. But though frequency is not degree of belief, frequency does model degree of belief; so if probability models frequency, it must also model degree of belief.