Reason as memetic immune disorder
A prophet is without dishonor in his hometown
I'm reading the book "The Year of Living Biblically," by A.J. Jacobs. He tried to follow all of the commandments in the Bible (Old and New Testaments) for one year. He quickly found that
- a lot of the rules in the Bible are impossible, illegal, or embarrassing to follow nowadays; like wearing tassels, tying your money to yourself, stoning adulterers, not eating fruit from a tree less than 5 years old, and not touching anything that a menstruating woman has touched; and
- this didn't seem to bother more than a handful of the one-third to one-half of Americans who claim the Bible is the word of God.
You may have noticed that people who convert to religion after the age of 20 or so are generally more zealous than people who grew up with the same religion. People who grow up with a religion learn how to cope with its more inconvenient parts by partitioning them off, rationalizing them away, or forgetting about them. Religious communities actually protect their members from religion in one sense - they develop an unspoken consensus on which parts of their religion members can legitimately ignore. New converts sometimes try to actually do what their religion tells them to do.
I remember many times growing up when missionaries described the crazy things their new converts in remote areas did on reading the Bible for the first time - they refused to be taught by female missionaries; they insisted on following Old Testament commandments; they decided that everyone in the village had to confess all of their sins against everyone else in the village; they prayed to God and assumed He would do what they asked; they believed the Christian God would cure their diseases. We would always laugh a little at the naivete of these new converts; I could barely hear the tiny voice in my head saying but they're just believing that the Bible means what it says...
How do we explain the blindness of people to a religion they grew up with?
Why I Reject the Correspondence Theory of Truth
This post began life as a comment responding to Peer Gynt's request for a steelman of non-correspondence views of truth. It ended up being far too long for a comment, so I've decided to make it a separate post. However, it might have the rambly quality of a long comment rather than a fully planned out post.
Evaluating Models
Let's say I'm presented with a model and I'm wondering whether I should incorporate it into my belief-set. There are several different ways I could go about evaluating the model, but for now let's focus on two. The first is pragmatic. I could ask how useful the model would be for achieving my goals. Of course, this criterion of evaluation depends crucially on what my goals actually are. It must also take into account several other factors, including my cognitive abilities (perhaps I am better at working with visual rather than verbal models) and the effectiveness of alternative models available to me. So if my job is designing cannons, perhaps Newtonian mechanics is a better model than relativity, since the calculations are easier and there is no significant difference in the efficacy of the technology I would create using either model correctly. On the other hand, if my job is designing GPS systems, relativity might be a better model, with the increased difficulty of calculations being compensated by a significant improvement in effectiveness. If I design both cannons and GPS systems, then which model is better will vary with context.
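The cannon-vs-GPS contrast can be made concrete with a rough back-of-the-envelope sketch. The speeds below are illustrative assumptions (a fast cannonball at ~500 m/s, a GPS satellite at ~3,874 m/s), and only the special-relativistic time-dilation factor is computed; the point is just the difference in orders of magnitude:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v):
    """Special-relativistic time-dilation factor for speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A fast cannonball: ~500 m/s. The relativistic correction is ~1e-12,
# far below any tolerance that matters for aiming a cannon, so the
# Newtonian model loses nothing in practice.
cannonball = lorentz_gamma(500.0) - 1.0

# A GPS satellite: ~3,874 m/s orbital speed. The special-relativistic
# effect alone (~1e-10) accumulates to microseconds per day, which
# corresponds to kilometres of position error if left uncorrected.
satellite = lorentz_gamma(3874.0) - 1.0

print(f"cannonball gamma - 1: {cannonball:.2e}")
print(f"satellite  gamma - 1: {satellite:.2e}")
```

For the cannon maker, the correction is buried far below manufacturing tolerances; for the GPS designer, it compounds into mission-breaking error, which is why the context determines which model is pragmatically better.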
Another mode of evaluation is correspondence with reality, the extent to which the model accurately represents its domain. In this case, you don't have much of the context-sensitivity that's associated with pragmatic evaluation. Newtonian mechanics may be more effective than the theory of relativity at achieving certain goals, but (conventional wisdom says) relativity is nonetheless a more accurate representation of the world. If the cannon maker believes in Newtonian mechanics, his beliefs don't correspond with the world as well as they should. According to correspondence theorists, it is this mode of evaluation that is relevant when we're interested in truth. We want to know how well a model mimics reality, not how useful it is.
I'm sure most correspondence theorists would say that the usefulness of a model is linked to its truth. One major reason why certain models work better than others is that they are better representations of the territory. But these two motivations can come apart. It may be the case that in certain contexts a less accurate theory is more useful or effective for achieving certain goals than a more accurate theory. So, according to a correspondence theorist, figuring out which model is most effective in a given context is not the same thing as figuring out which model is true.
How do we go about these two modes of evaluation? Well, evaluation of the pragmatic success of a model is pretty easy. Say I want to figure out which of several models will best serve the purpose of keeping me alive for the next 30 days. I can randomly divide my army of graduate students into several groups, force each group to behave according to the dictates of a separate model, and then check which group has the highest number of survivors after 30 days. Something like that, at least.
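The evaluation procedure above can be sketched as a toy simulation. Everything here is invented for illustration (the hazard process, the two candidate "models", the survival rule); it only shows the shape of pragmatic evaluation, scoring models by outcomes rather than by fidelity:

```python
import random

def evaluate_model(decide, trials=1000, days=30, seed=0):
    """Fraction of simulated agents surviving `days` days while
    acting on `decide` (a function from hazard level to action)."""
    rng = random.Random(seed)
    survivors = 0
    for _ in range(trials):
        alive = True
        for _ in range(days):
            hazard = rng.random()          # today's danger level, in [0, 1)
            if decide(hazard) == "venture" and hazard > 0.9:
                alive = False              # risky action on a very bad day
                break
        survivors += alive
    return survivors / trials

# Two hypothetical models of when it is safe to go out.
cautious = lambda hazard: "stay" if hazard > 0.5 else "venture"
reckless = lambda hazard: "venture"

# Pragmatic evaluation: whichever model keeps more agents alive wins,
# regardless of how faithfully it represents the world.
print(evaluate_model(cautious), evaluate_model(reckless))
```

Note that nothing in this scoring asks whether either model correctly represents the hazard process; survival rate is the whole criterion.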
But how do I evaluate whether a model corresponds with reality? The first step would presumably involve establishing correspondences between parts of my model and parts of the world. For example, I could say "Let mS in my model represent the mass of the Sun." Then I check to see if the structural relations between the bits of my model match the structural relations between the corresponding bits of the world. Sounds simple enough, right? Not so fast! The procedure described above relies on being able to establish (either by stipulation or discovery) relations between the model and reality. That presupposes that we have access to both the model and to reality, in order to correlate the two. In what sense do we have "access" to reality, though? How do I directly correlate a piece of reality with a piece of my model?
Models and Reality
Our access to the external world is entirely mediated by models, either models that we consciously construct (like quantum field theory) or models that our brains build unconsciously (like the model of my immediate environment produced in my visual cortex). There is no such thing as pure, unmediated, model-free access to reality. But we often do talk about comparing our models to reality. What's going on here? Wouldn't such a comparison require us to have access to reality independent of the models? Well, if you think about it, whenever we claim to be comparing a model to reality, we're really comparing one model to another model. It's just that we're treating the second model as transparent, as an uncontroversial proxy for reality in that context. Those last three words matter: A model that is used as a criterion for reality in one investigative context might be regarded as controversial -- as explicitly a model of reality rather than reality itself -- in another context.
Let's say I'm comparing a drawing of a person to the actual person. When I say things like "The drawing has a scar on the left side of the face, but in reality the scar is on the right side", I'm using the deliverances of visual perception as my criterion for "reality". But in another context, say if I'm talking about the psychology of perception, I'd talk about my perceptual model as compared (and, therefore, contrasted) to reality. In this case my criterion for reality will be something other than perception, say the readings from some sort of scientific instrument. So we could say things like, "Subjects perceive these two colors as the same, but in reality they are not." But by "reality" here we mean something like "the model of the system generated by instruments that measure surface reflectance properties, which in turn are built based on widely accepted scientific models of optical phenomena".
When we ordinarily talk about correspondence between models and reality, we're really talking about the correspondence between bits of one model and bits of another model. The correspondence theory of truth, however, describes truth as a correspondence relation between a model and the world itself. Not another model of the world, the world. And that, I contend, is impossible. We do not have direct access to the world. When I say "Let mS represent the mass of the Sun", what I'm really doing is correlating a mathematical model with a verbal model, not with immediate reality. Even if someone asks me "What's the Sun?", and I point at the big light in the sky, all I'm doing is correlating a verbal model with my visual model (a visual model which I'm fairly confident is extremely similar, though not identical, to the visual model of my interlocutor). Describing correspondence as a relationship between models and the world, rather than a relationship between models and other models, is a category error.
So I can go about the procedure of establishing correspondences all I want, correlating one model with another. All this will ultimately get me is coherence. If all my models correspond with one another, then I know that there is no conflict between my different models. My theoretical model coheres with my visual model, which coheres with my auditory model, and so on. Some philosophers have been content to rest here, deciding that coherence is all there is to truth. If the deliverances of my scientific models match up with the deliverances of my perceptual models perfectly, I can say they are true. But there is something very unsatisfactory about this stance. The world has just disappeared. Truth, if it is anything at all, involves both our models and the world. However, the world doesn't feature in the coherence conception of truth. I could be floating in a void, hallucinating various models that happen to cohere with one another perfectly, and I would have attained the truth. That can't be right.
Correspondence Can't Be Causal
The correspondence theorist may object that I've stacked the deck by requiring that one consciously establish correlations between models and the world. The correspondence isn't a product of stipulation or discovery, it's a product of basic causal connections between the world and my brain. This seems to be Eliezer's view. Correspondence relations are causal relations. My model of the Sun corresponds with the behavior of the actual Sun, out there in the real world, because my model was produced by causal interactions between the actual Sun and my brain.
But I don't think this maneuver can save the correspondence theory. The correspondence theory bases truth on a representational relationship between models/beliefs and the world. A model is true if it accurately represents its domain. Representation is a normative relationship. Causation is not. What I mean by this is that representation has correctness conditions. You can meaningfully say "That's a good representation" or "That's a bad representation". There is no analog with causation. There's no sense in which some particular putatively causal relation ends up being a "bad" causal relation. Ptolemy's beliefs about the Sun's motion were causally entangled with the Sun, yet we don't want to say that those beliefs are accurate. It seems mere causal entanglement is insufficient. We need to distinguish between the right sort of causal entanglement (the sort that gets you an accurate picture of the world) and the wrong sort. But figuring out this distinction takes us back to the original problem. If we only have immediate access to models, on what basis can we decide whether our models are caused by the world in a manner that produces an accurate picture? To determine this, it seems we again need unmediated access to the world.
Back to Pragmatism
Ultimately, it seems to me the only clear criterion the correspondence theorist can establish for correlating the model with the world is actual empirical success. Use the model and see if it works for you, if it helps you attain your goals. But this is exactly the same as the pragmatic mode of evaluation which I described above. And the representational mode of evaluation is supposed to differ from this.
The correspondence theorist could say that pragmatic success is a proxy for representational success. Not a perfect proxy, but good enough. The response is, "How do you know?" If you have no independent means of determining representational success, if you have no means of calibration, how can you possibly determine whether or not pragmatic success is a good proxy for representational success? I mean, I guess you can just assert that a model that is extremely pragmatically successful for a wide range of goals also corresponds well with reality, but how does that assertion help your theory of truth? It seems otiose. Better to just associate truth with pragmatic success itself, rather than adding the unjustifiable assertion to rescue the correspondence theory.
So yeah, ultimately I think the second of the two means of evaluating models I described at the beginning (correspondence) can only really establish coherence between your various models, not coherence between your models and the world. Since that sort of evaluation is not world-involving, it is not the correct account of truth. Pragmatic evaluation, on the other hand, *is* world-involving. You're testing your models against the world, seeing how effective they are at helping you accomplish your goal. That is the appropriate normative relationship between your beliefs and the world, so if anything deserves to be called "truth", it's pragmatic success, not correspondence.
This has consequences for our conception of what "reality" is. If you're a correspondence theorist, you think reality must have some form of structural similarity to our beliefs. Without some similarity in structure (or at least potential similarity) it's hard to say how one could meaningfully talk about beliefs representing reality or corresponding to reality. Pragmatism, on the other hand, has a much thinner conception of reality. The real world, on the pragmatic conception, is just an external constraint on the efficacy of our models. We try to achieve certain goals using our models and something pushes back, stymieing our efforts. Then we need to build improved models in order to counteract this resistance. Bare unconceptualized reality, on this view, is not a highly structured field whose structure we are trying to grasp. It is a brute, basic constraint on effective action.
It turns out that working around this constraint requires us to build complex models -- scientific models, perceptual models, and more. These models become proxies for reality, and we treat various models as "transparent", as giving us a direct view of reality, in various contexts. This is a useful tool for dealing with the constraints offered by reality. The models are highly structured, so in many contexts it makes sense to talk about reality as highly structured, and to talk about our other models matching reality. But it is also important to realize that when we say "reality" in those contexts, we are really talking about some model, and in other contexts that model need not be treated as transparent. Not realizing this is an instance of the mind projection fallacy. If you want a context-independent, model-independent notion of reality, I think you can say no more about it than "a constraint on our models' efficacy".
That sort of reality is not something you represent (since representation assumes structural similarity), it's something you work around. Our models don't mimic that reality, they are tools we use to facilitate effective action under the constraints posed by reality. All of this, as I said at the beginning, is goal and context dependent, unlike the purported correspondence theory mode of evaluating models. That may not be satisfactory, but I think it's the best we have. Pragmatist theory of truth for the win.
Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time"
Steven Levy: Let me ask an unrelated question about the raging debate over whether artificial intelligence poses a threat to society, or even the survival of humanity. Where do you stand?
Bill Gates: I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.
"Bill Gates on Mobile Banking, Connecting the World and AI", Medium, 2015-01-21
"All natural food" as a constrained optimisation problem
A look at all natural foods through the lenses of Bayesianism, optimisation, and friendly utility functions.
How should we consider foods that claim to be "all natural"? Or, since that claim is a cheap signal, foods that have few ingredients, all of them easy to recognise and all "natural"? Or "GM free"?
From the logical point of view, the case is clear: valuing these foods is nothing more than the appeal to nature fallacy. Natural products include many pernicious things (such as tobacco, hemlock, belladonna, countless parasites, etc.). And the difference between natural and not-natural isn't obvious: synthetic vitamin C is identical to the "natural" molecule, and gene modifications are just advanced forms of selective breeding.
But we're not just logicians, we're Bayesians. So let's make a few prior assumptions:
- There are far more possible products in the universe that are bad to eat than are good.
- Products that humans have been consuming for generations are much more likely to be good to eat than a random product.
Now let's see the food industry as optimising along a few axes:
- Cost. This should be low.
- Immediate consumer satisfaction (including taste, appearance, texture, general well-being for a week or so). This should be high.
- Long term damage to the consumer's health. This should be low.
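The priors and axes above can be sketched as a toy constrained optimisation. All ingredient names, numbers, and the risk budget are invented for illustration; the only idea carried over from the post is that generations of human use justify a much lower prior on long-term harm:

```python
# Hypothetical ingredient data: (cost, immediate appeal, generations of use?)
ingredients = {
    "honey":       (3.0, 0.8, True),
    "wheat":       (1.0, 0.5, True),
    "sweetener_x": (0.2, 0.9, False),   # novel: cheap and tasty
    "filler_y":    (0.1, 0.4, False),   # novel: very cheap
}

def prior_risk(traditional):
    """Prior probability of long-term harm; generations of human
    consumption support a much lower estimate (invented numbers)."""
    return 0.01 if traditional else 0.2

def score(name, risk_budget=0.05):
    """Appeal net of cost, subject to a hard cap on expected harm.
    Returns None when the ingredient violates the risk constraint."""
    cost, appeal, traditional = ingredients[name]
    if prior_risk(traditional) > risk_budget:
        return None
    return appeal - 0.1 * cost

chosen = [n for n in ingredients if score(n) is not None]
print(chosen)
```

Under these made-up numbers the novel additives are excluded not because "natural is good", but because the prior on untested products makes them fail the long-term-damage constraint, while the industry's cost and satisfaction axes would happily select them.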
Blue or Green on Regulation?
In recent posts, I have predicted that, if not otherwise prevented from doing so, some people will behave stupidly and suffer the consequences: "If people have a right to be stupid, the market will respond by supplying all the stupidity that can be sold." People misinterpret this as indicating that I take a policy stance in favor of regulation. It indicates no such thing. It is meant purely as a guess about empirical consequences - a testable prediction on a question of simple fact.
Perhaps I would be less misinterpreted if I also told "the other side of the story" - inveighed at length about the reasons why bureaucrats are not perfect rationalists guarding our net best interests. But ideally, I shouldn't have to go to such lengths. Ideally, I could make a prediction about a strictly factual question without this being interpreted as a policy stance, or as a stance on logically distinct factual questions.
Ritual Report: Schelling Day
On Sunday, April 14th, the Boston group held our first Schelling Day celebration. The idea was to open up and share our private selves. It was a rousing success.
That doesn't do it justice. Let me try again.
By all the stars, you guys. This was beautiful.
About fifteen people showed up. Most of us were from the hard core of Boston's rationalist community. Two of us were new to the group. (I'm hopeful this will convince them to start attending our regular meetups.) There was a brief explanation and a few vital clarifying questions before we began the ritual, which went for maybe 90-120 minutes, including a couple of short breaks. All of us spoke at least once.
I don't want to go into specifics about what people said, but it was powerful. I learned about sides of my friends I would never have guessed at. People went into depth about issues I had only seen from the surface. I heard things that will make me change my behavior towards my friends. I saw angst and guilt and hope and pain and wild joy. I saw compassion and uncertainty and courage. People said things they had never said before, things I might not have been brave enough even to think in their position. I had tears in my eyes more than once.
Speaking went remarkably smoothly. I set a timer for five minutes for each speaker, but it never ran out. (Five minutes is a surprisingly long time.) Partway through, Julia suggested we leave a long moment of silence between speakers, which was a very good idea and I wish I'd done a better job of enforcing it.
Afterwards, we had a potluck and mingled in small groups. At first we talked about our revelations, but over time our conversation started drifting towards our usual topics. Next time, in order to keep us on topic, I'll probably try adding more structure to this stage.
The other area I wanted to improve was the ritual with the snacks. We had five categories: Struggles, Confessions, Hopes, Joys, and Other. There weren't many Hopes, and there wasn't much distinction between Struggles and Confessions. I'll change this for next time, possibly to Hardships, Joys, Histories, and Other. There's room for improvement in the specific snacks I picked, too.
This celebration was the most powerful thing I've experienced since the Solstice megameetup. I don't think I want to do this again soon—it was one of the most exhausting things I've ever done, even if I didn't notice until after I'd left—but I know I want to do it again sometime.
To everyone who came: I'm so proud of what you did and who you are. Thank you for your courage and sincerity.