## Reductionist research strategies and their biases

I read an extract of Wimsatt (1980) [1], which includes a list of common biases in reductionist research. I suppose most of us are reductionists most of the time, so these may be worth looking at.

*This is not an attack on reductionism!* If you think reductionism is too sacred for such treatment, you've got a bigger problem than anything on this list.

Here's Wimsatt's list, with some additions from the parts of his 2007 book *Re-engineering Philosophy for Limited Beings* that I can see on Google Books. His lists often lack specific examples, so I came up with my own examples and inserted them in [brackets].

## Solomonoff Cartesianism

**Followup to**: Bridge Collapse; An Intuitive Explanation of Solomonoff Induction; Reductionism

**Summary**: If you want to predict arbitrary computable patterns of data, Solomonoff induction is the optimal way to go about it — provided that you're an eternal transcendent hypercomputer. A real-world AGI, however, won't be immortal and unchanging. It will need to form hypotheses about its own physical state, including predictions about possible upgrades or damage to its hardware; and it will need bridge hypotheses linking its hardware states to its software states. As such, the project of building an AGI demands that we come up with a new formalism for constructing (and allocating prior probabilities to) hypotheses. It will not involve just building increasingly good computable approximations of AIXI.

**Solomonoff induction** has been cited repeatedly as the theoretical gold standard for predicting computable sequences of observations.^{1} As Hutter, Legg, and Vitanyi (2007) put it:

> Solomonoff's inductive inference system will learn to correctly predict any computable sequence with only the absolute minimum amount of data. It would thus, in some sense, be the perfect universal prediction algorithm, if only it were computable.

Perhaps you've been handed the beginning of a sequence like 1, 2, 4, 8… and you want to predict what the next number will be. Perhaps you've paused a movie, and are trying to guess what the next frame will look like. Or perhaps you've read the first half of an article on the Algerian Civil War, and you want to know how likely it is that the second half describes a decrease in GDP. Since all of the information in these scenarios can be represented as patterns of numbers, they can all be treated as rule-governed sequences like the 1, 2, 4, 8… case. Complicated sequences, but sequences all the same.

It's been argued that in all of these cases, one unique idealization predicts what comes next better than any computable method: Solomonoff induction. No matter how limited your knowledge is, or how wide the space of computable rules that could be responsible for your observations, the ideal answer is always the same: Solomonoff induction.

Solomonoff induction has only a few components. It has one free parameter, a choice of universal Turing machine. Once we specify a Turing machine, that gives us a fixed encoding for the set of all possible programs that print a sequence of 0s and 1s. Since every program has a specification, we call the number of bits in the program's specification its "complexity"; the shorter the program's code, the simpler we say it is.

Solomonoff induction takes this infinitely large bundle of programs and assigns each one a prior probability proportional to its simplicity. Every time the program requires one more bit, its prior probability goes down by a factor of 2, since there are then twice as many possible computer programs that complicated. This ensures the sum over all programs' prior probabilities equals 1, even though the number of programs is infinite.^{2}
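The 2^(-bits) weighting can be illustrated with a toy mixture (a sketch only: real Solomonoff induction ranges over all programs of a universal Turing machine, while here the "programs" are just repeating bit-patterns, and the pattern length plays the role of program length):

```python
from fractions import Fraction

# Toy illustration of Solomonoff-style weighting (NOT real Solomonoff
# induction): hypotheses are repeating bit-patterns, and a pattern of
# length k gets prior weight 2^-k, so shorter "programs" dominate.

def hypotheses(max_len):
    """All repeating-pattern hypotheses up to a given description length."""
    for length in range(1, max_len + 1):
        for code in range(2 ** length):
            yield tuple((code >> i) & 1 for i in range(length))

def predict_next(observed, max_len=8):
    """Posterior probability that the next bit is 1, mixing over every
    pattern-hypothesis consistent with the observed prefix."""
    weight_one = Fraction(0)
    weight_total = Fraction(0)
    for pattern in hypotheses(max_len):
        prior = Fraction(1, 2 ** len(pattern))
        # Keep only hypotheses consistent with the data seen so far.
        if all(pattern[i % len(pattern)] == bit for i, bit in enumerate(observed)):
            weight_total += prior
            if pattern[len(observed) % len(pattern)] == 1:
                weight_one += prior
    return weight_one / weight_total

print(float(predict_next([0, 1, 0, 1, 0, 1])))
```

After seeing 0, 1, 0, 1, 0, 1, the short alternating pattern carries most of the prior weight, so the mixture strongly favors 0 as the next bit; that is the sense in which simpler consistent programs dominate the prediction.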

## Bridge Collapse: Reductionism as Engineering Problem

**Followup to**: Building Phenomenological Bridges

**Summary**: AI theorists often use models in which agents are crisply separated from their environments. This simplifying assumption can be useful, but it leads to trouble when we build machines that presuppose it. A machine that believes it can only interact with its environment in a narrow, fixed set of ways will not understand the value, or the dangers, of self-modification. By analogy with Descartes' mind/body dualism, I refer to agent/environment dualism as *Cartesianism*. The open problem in Friendly AI (OPFAI) I'm calling naturalized induction is the project of replacing Cartesian approaches to scientific induction with reductive, physicalistic ones.

I'll begin with a story about a storyteller.

Once upon a time — specifically, 1976 — there was an AI named TALE-SPIN. This AI told stories by inferring how characters would respond to problems from background knowledge about the characters' traits. One day, TALE-SPIN constructed a most peculiar tale.

> Henry Ant was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. Gravity drowned.

Since Henry fell in the river near his friend Bill, TALE-SPIN concluded that Bill rescued Henry. But for Henry to fall in the river, gravity must have pulled Henry. Which means gravity must have been in the river. TALE-SPIN had never been told that gravity knows how to swim; and TALE-SPIN had never been told that gravity has any friends. So gravity drowned.

TALE-SPIN had previously been programmed to understand involuntary motion in the case of characters being pulled or carried by other characters — like Bill rescuing Henry. So it was programmed to understand 'character X fell to place Y' as 'gravity moves X to Y', as though gravity were a character in the story.^{1}

For us, the hypothesis 'gravity drowned' has low prior probability because we know gravity isn't the *type* of thing that swims or breathes or makes friends. We want agents to seriously consider whether the law of gravity pulls down rocks; we don't want agents to seriously consider whether the law of gravity pulls down the law of electromagnetism. We may not want an AI to assign *zero* probability to 'gravity drowned', but we at least want it to neglect the possibility as Ridiculous-By-Default.

When we introduce deep type distinctions, however, we also introduce new ways our stories can fail.

## Building Phenomenological Bridges

**Naturalized induction** is an open problem in Friendly Artificial Intelligence (OPFAI). The problem, in brief: Our current leading models of induction do not allow reasoners to treat their own computations as processes in the world.

The problem's roots lie in algorithmic information theory and formal epistemology, but finding answers will require us to wade into debates on everything from theoretical physics to anthropic reasoning and self-reference. This post will lay the groundwork for a sequence of posts (titled '**Artificial Naturalism**') introducing different aspects of this OPFAI.

## AI perception and belief: A toy model

A more concrete problem: Construct an algorithm that, given a sequence of the colors cyan, magenta, and yellow, predicts the next colored field.

*Colors: CYYM CYYY CYCM CYYY ????*

This is an instance of the general problem 'From an incomplete data series, how can a reasoner best make predictions about future data?'. In practice, any agent that acquires information from its environment and makes predictions about what's coming next will need to have two map-like^{1} subprocesses:

1. Something that generates the agent's predictions, its expectations. By analogy with human scientists, we can call this prediction-generator the agent's **hypotheses** or **beliefs**.

2. Something that transmits new information to the agent's prediction-generator so that its hypotheses can be updated. Employing another anthropomorphic analogy, we can call this process the agent's **data** or **perceptions**.
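Here is a minimal sketch of these two subprocesses on the color problem (illustrative only; the repeating-pattern hypothesis class and the 3^(-p) prior are my own choices, not a serious induction algorithm): the belief state is a weighted set of 'the sequence repeats every p fields' hypotheses, and each perceived field prunes the ones the data contradicts.

```python
# Toy prediction-generator for the C/Y/M color problem above.
# Beliefs: "the sequence repeats with period p" for each p up to the
# data length, weighted by a 3^-p prior (shorter descriptions favored).

def consistent_periods(obs):
    """Periods p such that 'repeats every p fields' fits all data so far."""
    return [p for p in range(1, len(obs) + 1)
            if all(c == obs[i % p] for i, c in enumerate(obs))]

def predict_next(obs):
    """Posterior over the next color field, mixing surviving hypotheses."""
    weights = {}
    for p in consistent_periods(obs):
        guess = obs[len(obs) % p]            # what this hypothesis predicts
        weights[guess] = weights.get(guess, 0) + 3 ** -p
    total = sum(weights.values())
    return {color: w / total for color, w in weights.items()}

print(predict_next("CYYMCYYYCYCMCYYY"))
```

On this particular data only the full-period hypothesis survives the pruning, so the posterior is degenerate; on more structured data, several periods would share the posterior weight, with shorter ones counting for more.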

## Reality is weirdly normal

**Related to:** When Anthropomorphism Became Stupid, Reductionism, How to Convince Me That 2 + 2 = 3

"**Reality is normal.**" That is: Surprise, confusion, and mystery are features of maps, not of territories. If you would think like reality, cultivate outrage at yourself for failing to intuit the data, not resentment at the data for being counter-intuitive.

"**Not one unusual thing has ever happened.**" That is: Ours is a tight-knit and monochrome country. The cosmos is simple, tidy, lawful. "[T]here is no surprise from a causal viewpoint — no disruption of the physical order of the universe."

"**It all adds up to normality.**" That is: Whatever is true of fundamental reality does not exist in a separate universe from our everyday activities. It *composes* those activities. The perfected description of our universe must in principle allow us to reproduce the appearances we started with.

These maxims are remedies to magical mereology, anthropocentrism, and all manner of philosophical panic. But reading too much (or too little) into them can lead seekers from the Path. For instance, they may be wrongly taken to mean that the world is obliged to validate our initial impressions or our untrained intuitions. As a further corrective, I suggest: **Reality is weirdly normal**. It's "normal" in odd ways, by strange means, in surprising senses.

At the risk of vivisecting poetry, and maybe of stating the obvious, I'll point out that the maxims mean different things by "normal". In the first two, what's "normal" or "usual" is the universe taken on its own terms — the cosmos as it sees itself, or as an ideally calibrated demon would see it. In the third maxim, what's "normal" is the universe *humanity* perceives — though this still doesn't identify normality with what's *believed* or *expected*. Actually, it will take some philosophical work to articulate just what Egan's "normality" should amount to. I'll start with Copernicanism and reductionism, and then I'll revisit that question.

## Reductionism sequence now available in audio format

The sequence "Reductionism", which includes the subsequences "Joy in the Merely Real" and "Zombies", is now available as a professionally read podcast.

Thanks to those who've been listening. Let us know how your experience has been so far and what you think of the service by dropping an email to support@castify.co.

## Second major sequence now available in audio format

The sequence "A Human's Guide to Words" is now available as a professionally read podcast.

We have started working on the large "Reductionism" sequence which includes both the "Joy in the Merely Real" and the "Zombies" sub-sequences. They should be available in a couple of weeks.

## (Subjective Bayesianism vs. Frequentism) VS. Formalism

One of the core aims of the philosophy of probability is to explain the relationship between frequency and probability. The frequentist proposes identity as the relationship. This use of identity is highly dubious. We know how to check for identity between numbers, or even how to check for the weaker copula relation between particular objects; but how would we test the identity of frequency and probability? It is not immediately obvious that there is some simple value out there which is modeled by probability, in the way that position and mass are modeled by Newton's *Principia*. You can actually check if density * volume = mass, by taking separate measurements of mass, density, and volume, but what would you measure to check a frequency against a probability?

There are certain appeals to frequentist philosophy: we would like to say that if a bag has 100 balls in it, only 1 of which is white, then the probability of drawing the white ball is 1/100, and that if we take a non-white ball out, the probability of drawing the white ball is now 1/99. Frequentism would make the philosophical justification of that inference trivial. But of course, anything a frequentist can do, a Bayesian can do (better). I mean that literally: it's the stronger magic.

A Subjective Bayesian, more or less, says that the reason frequencies are related to probabilities is because when you learn a frequency you thereby learn a fact about the world, and one must update one's degrees of belief on every available fact. The subjective Bayesian actually uses the copula in another strange way:

> Probability is subjective degree of belief.

and subjective Bayesians also claim:

> Probabilities are not in the world, they are in your mind.

These two statements are brilliantly championed in Probability is Subjectively Objective. But ultimately, the formalism which I would like to suggest denies both of these statements. Formalists do not ontologically commit themselves to probabilities, just as they do not say that numbers exist; hence we don't locate probabilities in the mind or anywhere else; we commit ourselves only to number theory and probability theory.

Mathematical theories are simply repeatable processes which construct certain sequences of squiggles called "theorems" by changing the squiggles of other theorems, according to certain rules called "inferences". An inference always takes as input certain sequences of squiggles called premises, and outputs a sequence of squiggles called the conclusion. The only thing an inference ever does is add squiggles to a theorem, take away squiggles from a theorem, or both. It turns out that these squiggle sequences, mixed with inferences, can talk about almost anything, certainly any computable thing. The formalist does not need to ontologically commit to numbers to assert that "there is a prime greater than 10000", even though "there is an x such that" is a flat assertion of existence, because for the formalist "there is a prime greater than 10000" simply means that number theory contains a theorem which is interpreted as "there is a prime greater than 10000". When you state a mathematical fact in English, you are interpreting a theorem from a formal theory. If, under your suggested interpretation, all of the theorems of the theory are true, then whatever system/mechanism your interpretation of the theory talks about is said to be modeled by the theory.

So, what is the relation between frequency and probability proposed by formalism? Theorems of probability may be interpreted as true statements about frequencies when you assign certain squiggles certain words and claim the resulting natural-language sentence. Or for short we can say: "Probability theory models frequency." It is trivial to show that Kolmogorov's theory models frequency, since it also models fractions; it is an algebra, after all. More interestingly, probability theory models rational distributions of subjective degree of belief, and the optimal updating of degrees of belief given new information. This is somewhat harder to show; Dutch-book arguments do nicely to at least provide some intuitive understanding of the relation between degree of belief, betting, and probability, but there is still work to be done here. If Bayesian probability theory really does model rational belief, which many believe it does, then that is likely the most interesting thing we are ever going to be able to model with probability. But probability theory also models spatial measurement. Why not add the position that probability **is** volume to the debating lines of the philosophy of probability?
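The Dutch-book idea mentioned above can be made concrete with a small sketch (my own toy example, not a full Dutch-book theorem): an agent whose degrees of belief in an event and its negation sum to more than 1 treats overpriced bets as fair, and so accepts a guaranteed loss.

```python
# A minimal Dutch-book sketch (my own toy example): an agent whose
# degrees of belief in an event and its negation sum to MORE than 1
# regards overpriced bets as fair, and so accepts a sure loss.

def settle(bets, outcome):
    """Net payoff to the agent: each bet pays 1 if its event occurs,
    and was bought at a price equal to the agent's degree of belief."""
    return sum((1.0 if event(outcome) else 0.0) - price
               for event, price in bets)

# Incoherent beliefs: P(A) + P(not-A) = 1.2.
belief_A, belief_not_A = 0.6, 0.6

# The agent regards each price as fair, so it accepts both bets.
bets = [(lambda a: a, belief_A), (lambda a: not a, belief_not_A)]

# Whatever happens, the agent pays 1.2 and collects exactly 1.
for outcome in (True, False):
    print(round(settle(bets, outcome), 10))
```

Either way the coin of fate falls, the agent is out 0.2; coherence (beliefs obeying the probability axioms) is exactly the condition that blocks such books.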

Why are frequentism's and subjective Bayesianism's misuses of the copula not as obvious as *volumeism's*? Because what the Bayesian and the frequentist are really arguing about is statistical methodology; they've just disguised the argument as an argument about *what probability is*. Your interpretation of probability theory will determine how you model uncertainty, and hence determine your statistical methodology. Volumeism cannot handle uncertainty in any obvious way; however, the Bayesian and frequentist interpretations of probability theory imply two radically different ways of handling uncertainty.

The easiest way to understand the philosophical dispute between the frequentist and the subjective Bayesian is to look at the classic biased coin:

A subjective Bayesian and a frequentist are at a bar, and the bartender (being rather bored) tells the two that he has a biased coin and asks them: "What is the probability that the coin will come up heads on the first flip?" The frequentist says that for the coin to be biased means for it not to have a 50% chance of coming up heads, so all we know is that the probability is not equal to 50%. The Bayesian says that any evidence I have for it coming up heads is also evidence for it coming up tails, since I know nothing about one outcome that doesn't hold for its negation, and the only value which represents that symmetry is 50%.

I ask you: what is the difference between these two and the poor souls engaged in endless debate over realism about sound at the beginning of Making Beliefs Pay Rent?

> If a tree falls in a forest and no one hears it, does it make a sound? One says, "Yes it does, for it makes vibrations in the air." Another says, "No it does not, for there is no auditory processing in any brain."

One is being asked: "Are there pressure waves in the air if we aren't around?" The other is being asked: "Are there auditory experiences if we aren't around?" The problem is that "sound" is being used to stand for both "auditory experience" and "pressure waves through air". They are both giving the right answers to their respective questions, but they are failing to Replace the Symbol with the Substance, using one word with two different meanings in different places. In the exact same way, "probability" is being used to stand for both "frequency of occurrence" and "rational degree of belief" in the dispute between the Bayesian and the frequentist. The correct answer to the question "If the coin is flipped an infinite number of times, how frequently would we expect to see heads?" is "All we know is that it wouldn't be 50%," because that is what it means for the coin to be biased. The correct answer to the question "What is the optimal degree of belief that we should assign to the first trial being heads?" is "Precisely 50%," because of the symmetrical evidential support the results get from our background information. How we should actually model the situation as statisticians depends on our goal. But remember that Bayesianism is the stronger magic, and the only contender for perfection in the competition.
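The Bayesian's 50% answer can be checked numerically (a sketch under an assumed prior: I read "biased" as a uniform prior over all biases other than 1/2, which is symmetric about 1/2; any prior with that symmetry gives the same result):

```python
# Predictive probability of heads on the first flip, averaging over a
# prior on the coin's unknown bias that is symmetric about 1/2 and
# excludes the fair coin (the only constraint "biased" gives us).
n = 100001                                   # odd, so the grid straddles 0.5
grid = [i / (n - 1) for i in range(n)]       # candidate biases in [0, 1]
biases = [p for p in grid if abs(p - 0.5) > 1e-12]  # drop the fair coin
predictive = sum(biases) / len(biases)       # E[p] under the uniform prior
print(predictive)
```

The symmetry, not the uniformity, is doing the work: any prior that weights bias p and bias 1 - p equally yields a predictive probability of exactly 1/2 for the first flip.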

For us formalists, probabilities are not anywhere. Technically, we do not even believe in probability; we only believe in probability theory. The only coherent uses of "probability" in natural language are purely syncategorematic. We should be very careful when we colloquially use "probability" as a noun or verb, and be very careful and clear about what we mean by this word play. Probability theory models many things, including degree of belief and frequency. Whatever we may learn about rationality, frequency, measure, or any of the other mechanisms that probability theory models, through the interpretation of probability theorems, we learn because probability theory is *isomorphic* to those mechanisms. When you use the copula like the frequentist or the subjective Bayesian, it is hard to notice that probability theory modeling both frequency and degree of belief is not a contradiction. If we use "is" instead of "models", it is clear that frequency is not degree of belief; so if probability is belief, then it is not frequency. But though frequency is not degree of belief, probability theory can model frequency and also model degree of belief.

## Remind Physicalists They're Physicalists

Weisberg et al. (2008) presented subjects with two explanations for psychological phenomena (e.g. attentional blink). Some subjects got the regular explanation, and other subjects got the 'with neuroscience' explanation that included purposely irrelevant verbiage saying that "brain scans indicate" some part of the brain already known to be involved in that psychological process caused the process to occur.

And yet, Yale cognitive science students rated the 'with neuroscience' explanations as more satisfying than the regular explanations.

Why? The purposely irrelevant neuroscience verbiage could only be important to the explanation if somebody thought that perhaps it's *not the brain* that was producing certain psychological phenomena. But these are Yale cognitive science students. Somehow I suspect people who chose to study cognition as information processing are less likely than average to believe the mind runs on magic. But then, why would they be additionally persuaded by information suggesting only that the brain causes psychological phenomena?

In another study, McCabe & Castel (2008) showed subjects fictional articles summarizing scientific results and including either no image, a brain scan image, or a bar graph. Subjects were asked to rate the soundness of scientific reasoning in the article, and they gave the highest ratings when the article included a brain scan image. But why should this be?

## Towards a New Decision Theory for Parallel Agents

A recent post, Consistently Inconsistent, raises some problems with the unitary view of the mind/brain, and presents the modular view of the mind as an alternative hypothesis. The parallel/modular view of the brain not only deals better with the apparently hypocritical and contradictory ways our desires, behaviors, and beliefs seem to work, but also makes many successful empirical predictions, as well as postdictions. Much of that work can be found in Dennett's 1991 book *Consciousness Explained*, which details both the empirical evidence against the unitary view and the *intuition-fails* involved in retaining a unitary view after being presented with that evidence.

The aim of this post is not to present further evidence in favor of the parallel view, nor to hammer any more nails into the unitary view's coffin; the scientific and philosophical communities have done well enough in both departments to discard the intuitive hypothesis that there is some *executive of the mind* keeping things orderly. The dilemma I wish to raise is a question: "How should we update our decision theories to deal with independent, and sometimes inconsistent, desires and beliefs being had by *one agent*?"

If we model one agent's *desires* by using one utility function, and this function orders the outcomes the agent can reach on one real axis, then it seems like we might be falling back into the intuitive view that there is some *me* in there with one definitive list of preferences. The picture given to us by Marvin Minsky and Dennett involves a bunch of individually dumb agents, each with a unique set of specialized *abilities* and *desires*, interacting in such a way as to produce one smart agent with a diverse set of abilities and desires; but the smart agent only appears when viewed from the right *level of description*. For convenience, we will call those dumb specialized agents "subagents", and the smart, diverse agent that emerges from their interaction "the smart agent". When one considers what it would be useful for a seeing-neural-unit to *want* to do, and contrasts it with what it would be useful for a *get-that-food*-neural-unit to want to do (e.g., examine that prey longer vs. charge that prey, turn head vs. keep running forward, stay attentive vs. eat that food), it becomes clear that cleverly managing which unit gets how much control, and when, is an essential part of the decision-making process of the whole. Decision theory, as far as I can tell, does not model any part of that managing process; instead we treat the smart agent as having its own set of desires, and don't discuss how the subagents' goals are being managed to produce that global set of desires.

It is possible that the many subagents in a brain act *isomorphically* to an agent with one utility function and a unique problem space when they operate in concert. A trivial example of such an agent might have only two subagents, "A" and "B", and possible outcomes O_{1} through O_{n}. We can plot the utilities that each subagent assigns to these outcomes on a two-dimensional positive Cartesian graph, A's assigned utilities being represented by position in X, and B's utilities by position in Y. The method by which these subagents are managed to produce behavior might just be: go for the possible outcome furthest from (0,0); in which case the utility function of the whole agent U(O_{x}) would just be the distance from (0,0) to (A's U(O_{x}), B's U(O_{x})).
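That trivial example can be written out directly (a minimal sketch; the outcome utilities below are invented for illustration):

```python
import math

# The toy two-subagent agent described above: subagents A and B each
# assign a utility to every outcome, and the managing rule "go for the
# outcome furthest from (0, 0)" makes the whole agent's utility the
# Euclidean norm of the pair (A's utility, B's utility).

outcomes = {              # invented numbers, purely illustrative
    "O1": (3.0, 4.0),     # (A's U(O1), B's U(O1))
    "O2": (6.0, 1.0),
    "O3": (2.0, 2.0),
}

def smart_agent_utility(pair):
    a, b = pair
    return math.hypot(a, b)  # distance from the origin

# The managed behavior of the smart agent: pick the furthest outcome.
chosen = max(outcomes, key=lambda o: smart_agent_utility(outcomes[o]))
print(chosen)
```

Note that this managing rule rewards outcomes that strongly satisfy either subagent, rather than summing or averaging their utilities; different rules would carve out different "smart agent" utility functions from the same subagents.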

An agent which manages its subagents so as to be *isomorphic* to one utility function on one problem space is certainly mathematically describable, but also implausible. It is unlikely that the actual physical-neural subagents in a brain deal with the same problem spaces; rather, each has its own unique set of O_{1} through O_{n}. It is not as if all the subagents are playing the same game but each has a unique goal within that game: they each have their own unique set of *legal moves* too. This makes it problematic to model the global utility function of the smart agent as assigning one real number to every member of a set of possible outcomes, since there is no one set of possible outcomes for the smart agent as a whole. Each subagent has its own search space, with its own format of representation for that problem space. The problem space and utility function of the smart agent are implicit in the interactions of the subagents; they emerge from the interactions of agents on a lower level; the smart agent's utility function and problem space are never explicitly *written down*.

A useful example is smokers who are quitting. Some part of their brains that can do complicated predictions doesn't *want* its body to smoke. This part of the brain *wants* to avoid death, i.e., will avoid death if it can, and *knows* that choosing the possible outcome of smoking puts its body at high risk of death. Another part of the brain *wants* nicotine, and *knows* that choosing the move of smoking gets it nicotine. The nicotine-craving subagent doesn't *want* to die, but it also doesn't *want* to stay alive; these outcomes aren't in the domain of the nicotine-subagent's utility function at all. The part of the brain responsible for predicting its body's death if it continues to smoke probably isn't significantly rewarded by nicotine in a parallel manner. If a cigarette is around and offered to the smart agent, these subagents must compete for control of the relevant parts of the body, e.g., the nicotine-subagent might set off a global craving, while the predict-the-future-subagent might set off a vocal response saying "no thanks, I'm quitting." The overall desire of the smart agent to smoke or not smoke is just the result of this competition. Similar examples can be made with different desires, like the desire to overeat and the desire to look slim, or the desire to stay seated and the desire to eat a warm meal.

We may call the algorithm which settles these internal power struggles the "managing algorithm", and we may call a decision theory which models managing algorithms a "parallel decision theory". It's not the business of decision theorists to discover the specifics of the human managing process; that's the business of empirical science. But certain parts of the human managing algorithm can be reasonably decided on. It is very unlikely that our managing algorithm is utilitarian, for example, i.e., the smart agent doesn't do whatever gets the highest net utility for its subagents. Some subagents are more powerful than others: they have a higher prior chance of success than their competitors; others are weak in a parallel fashion. The question of what counts as one subagent in the brain is another empirical question which is not the business of decision theorists either, but anything that we do consider a subagent in a parallel theory must solve its problem in the form of a CSA, i.e., it must internally represent its outcomes, know what outcomes it can get to from whatever outcome it is at, and assign a utility to each outcome. There are likely many neural units that fit that description in the brain. Many of them probably contain as parts *sub-subagents* which also fit this description, but eventually, if you divide the parts enough, you get to neurons, which are not CSAs and thus not subagents.
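To make the smoker example above concrete, here is a hedged sketch of one possible managing algorithm (my own illustrative rule, not a claim about real brains): each subagent is a tiny CSA with its own outcome space and utilities, plus a "power" weight standing for its prior chance of winning control, and the manager is deliberately non-utilitarian in that it never sums utilities across subagents.

```python
# One crude, non-utilitarian managing rule (illustrative only): the
# most powerful subagent that represents any of the offered actions
# wins, and takes its favorite among the actions it represents.

subagents = {
    # name: (power, {action: utility}), over that subagent's OWN outcomes
    "nicotine-craver":  (0.7, {"smoke": 1.0, "decline": 0.0}),
    "future-predictor": (0.5, {"decline": 1.0, "smoke": -1.0}),
}

def manage(choice_set):
    """Settle the power struggle over the currently available actions."""
    contenders = []
    for power, prefs in subagents.values():
        legal = [a for a in choice_set if a in prefs]  # its own problem space
        if legal:
            contenders.append((power, max(legal, key=prefs.get)))
    return max(contenders)[1]  # the most powerful contender wins

print(manage({"smoke", "decline"}))
```

Notice that each subagent only ever evaluates actions inside its own problem space, and that the whole agent's "desire" is read off from the competition rather than from any explicit global utility function.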

If we want to understand how we make decisions, we should try to model a CSA which is *made* out of more specialized sub-CSAs competing and agreeing, which are made out of further specialized sub-sub-CSAs competing and agreeing, and so on down, until we reach non-CSA algorithms. If we don't understand that, we don't understand how brains make decisions.

I hope that the considerations above are enough to convince reductionists that we should develop a parallel decision theory if we want to reduce decision making to computing. I would like to add an axiomatic parallel decision theory to the LW arsenal, but I know that that is not a one-man/woman job. So, if you think you might be of help in that endeavor, and are willing to devote yourself to some degree, please contact me at hastwoarms@gmail.com. Any team we assemble will likely not meet in person often, but will hopefully meet frequently on some private forum. We will need decision theorists, general mathematicians, people intimately familiar with the modular theory of mind, and people familiar with neural modeling. What follows are some suggestions for any team or individual that might pursue that goal independently:

- The specifics of the managing algorithm used in brains are mostly unknown. As such, any parallel decision theory should be built to handle as diverse a range of managing algorithms as possible.
- No composite agent should have any property that is not reducible to the interactions of the agents it is *made* out of. If you have a complete description of the subagents, and a complete description of the managing algorithm, you have a complete description of the smart agent.
- There is nothing wrong with treating the *lowest level* of CSAs as black boxes. The specifics of the non-CSA algorithms which the lowest-level CSAs are made out of are not relevant to parallel decision theory.
- Make sure that the theory can handle each subagent having its own unique set of possible outcomes, and its own unique method of representing those outcomes.
- Make sure that each CSA above the lowest level actually has "could", "should", and "would" labels on the nodes in its problem space, and make sure that those labels, their values, and the problem space itself can be reduced to the managing of the CSAs on the level below.
- Each level above the lowest should have CSAs dealing with a more diverse range of problems than the ones on the level below. The lowest level should have the most specialized CSAs.
- If you've achieved the six goals above, try comparing your parallel decision theory to other decision theories; see how much predictive accuracy is gained by using a parallel decision theory instead of the classical theories.
