Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Making Beliefs Pay Rent (in Anticipated Experiences)

94 Post author: Eliezer_Yudkowsky 28 July 2007 10:59PM

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, "Yes it does, for it makes vibrations in the air." Another says, "No it does not, for there is no auditory processing in any brain."

Suppose that, after the tree falls, the two walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying "No," and the other saying "Yes," they do not anticipate any different experiences.  The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.

It's tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don't see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don't experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step.

You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?

To answer precisely, you must use beliefs like Earth's gravity is 9.8 meters per second per second, and This building is around 120 meters tall. These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional. It probably does not exaggerate much to describe these two beliefs as sentences made out of words. But these two beliefs have an inferential consequence that is a direct sensory anticipation—if the clock's second hand is on the 12 numeral when you drop the ball, you anticipate seeing it on the 1 numeral when you hear the crash five seconds later. To anticipate sensory experiences as precisely as possible, we must process beliefs that are not anticipations of sensory experience.
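The arithmetic behind that anticipation can be sketched directly. A minimal calculation, taking the quoted figures at face value and ignoring air resistance (an idealization; a real bowling ball would fall slightly slower):

```python
import math

g = 9.8    # m/s^2, Earth's surface gravity, as stated in the post
h = 120.0  # m, the assumed building height

# Solving h = (1/2) * g * t^2 for the fall time t.
t = math.sqrt(2 * h / g)
print(round(t, 2))  # ~4.95 s, i.e. about five ticks of the second hand
```

The two propositional beliefs go in; a concrete sensory anticipation (where the second hand will be when you hear the crash) comes out.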

It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.

The same brain that builds a network of inferred causes behind sensory experience, can also build a network of causes that is not connected to sensory experience, or poorly connected. Alchemists believed that phlogiston caused fire—we could oversimplify their minds by drawing a little node labeled "Phlogiston", and an arrow from this node to their sensory experience of a crackling campfire—but this belief yielded no advance predictions; the link from phlogiston to experience was always configured after the experience, rather than constraining the experience in advance. Or suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a "post-utopian". What does this mean you should expect from his books? Nothing. The belief, if you can call it that, doesn't connect to sensory experience at all. But you had better remember the propositional assertion that "Wulky Wilkinsen" has the "post-utopian" attribute, so you can regurgitate it on the upcoming quiz. Likewise if "post-utopians" show "colonial alienation"; if the quiz asks whether Wulky Wilkinsen shows colonial alienation, you'd better answer yes. The beliefs are connected to each other, though still not connected to any anticipated experience.

We can build up whole networks of beliefs that are connected only to each other—call these "floating" beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens's ability to build more general and flexible belief networks.

The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit.  Do you believe that phlogiston is the cause of fire?  Then what do you expect to see happen, because of that? Do you believe that Wulky Wilkinsen is a post-utopian? Then what do you expect to see because of that? No, not "colonial alienation"; what experience will happen to you? Do you believe that if a tree falls in the forest, and no one hears it, it still makes a sound? Then what experience must therefore befall you?

It is even better to ask: what experience must not happen to you?  Do you believe that elan vital explains the mysterious aliveness of living beings?  Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you.  It floats.

When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can't find the difference of anticipation, you're probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network. If you don't know what experiences are implied by Wulky Wilkinsen being a post-utopian, you can go on arguing forever. (You can also publish papers forever.)

Above all, don't ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.

 

Part of the sequence Mysterious Answers to Mysterious Questions

Next post: "Belief in Belief"

Comments (211)

Comment author: Richard_Pointer2 29 July 2007 01:41:37AM -1 points [-]

Great post. As always.

Comment author: michael_vassar3 29 July 2007 04:51:34AM 1 point [-]

I assume that most of math is being ignored for simplicity's sake?

Comment author: Eliezer_Yudkowsky 29 July 2007 05:31:18AM 13 points [-]

What good is math if people don't know what to connect it to?

Comment author: VKS 17 March 2012 03:35:25PM 16 points [-]

All math pays rent.

For all mathematical theorems can be restated in the form:

If the axioms A, B, and C and the conditions X, Y and Z are satisfied, then the statement Q is also true.

Therefore, in any situations where the statements A,B,C and X,Y,Z are true, you will expect Q to also be verified.

In other words, mathematical statements automatically pay rent in terms of changing what you expect, which is the very thing it was required to show. ■


In practice:

If you demonstrate Pythagoras's Theorem, and you calculate that 3^2+4^2=5^2, you will expect a certain method of getting right angles to work.

If you exhibit the aperiodic Penrose Tiling, you will expect Quasicrystals to exist.

If you demonstrate the impossibility of solving the Halting Problem, you will not expect even a hypothetical hyperintelligence to be able to solve it.

If you understand why you can't trisect an angle with an unmarked ruler and a compass (not both used at the same time), you will know immediately that certain proofs are going to be wrong.

and so on and so forth.

Yes, we might not immediately know where a given mathematical fact will come in handy when observing the world, but by their nature, mathematical facts tell us exactly when to expect them.
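VKS's first "in practice" example can be made concrete. A small sketch (my illustration, not VKS's): given that 3² + 4² = 5², the converse of Pythagoras's Theorem predicts that a triangle with sides 3, 4, 5 contains a right angle, which is the old surveyor's knotted-rope method of laying out square corners.

```python
import math

a, b, c = 3.0, 4.0, 5.0
assert a**2 + b**2 == c**2  # the arithmetic fact

# Law of cosines gives the angle opposite the longest side;
# the theorem's converse anticipates exactly 90 degrees.
angle = math.degrees(math.acos((a**2 + b**2 - c**2) / (2 * a * b)))
print(round(angle, 6))  # 90.0
```

The abstract theorem cashes out as an anticipation about what a protractor will read.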

Comment author: jirkazr 23 August 2012 02:51:29PM *  4 points [-]

Is it not the purpose of math to tell us "how" to connect things? At the bottom, there are some axioms that we accept as basis of the model, and using another formal model we can infer what to expect from anything whose behavior matches our axioms.

Math makes it very hard to reason about models incorrectly. That's why it's good. Even parts of math that seem particularly outlandish and disconnected just build a higher-level framework on top of more basic concepts that have been successfully utilized over and over again.

That gives us a solid framework on which we can base our reasoning about abstract ideas. Just a few decades ago most people believed the theory of probability was just a useless mathematical game, disconnected from any empirical reality. Now people like you and me use it every day to quantify uncertainty and make better decisions. The connections are not always obvious.

Comment author: army1987 06 August 2013 12:50:47PM 2 points [-]
Comment author: Pineapple264 22 December 2013 03:48:44AM 0 points [-]

That's exactly how I felt in high school. I'm glad I changed that, because it wouldn't be useful to me if I'd never learned algebra. The first part of the class is hard to use and discouraging to new students.

Comment author: michael_vassar3 29 July 2007 06:54:35AM 1 point [-]

In practice, most of the time people figure out what to connect it to later. More precisely, most of it probably doesn't connect to anything, but what does connect to stuff usually isn't found to do so until much later than it is invented/discovered.

Comment author: Vladimir_Nesov2 29 July 2007 10:01:17AM 2 points [-]

Some ungrounded concepts can produce your own behavior, which in itself can be experienced, so it's difficult to draw the line just by requiring concepts to be grounded. You believe that you believe in something because you experience yourself acting in a way consistent with believing in it. Such a concept can define an intrinsic goal system, a point in mind design space as you call it. So one can't abolish all such concepts, only resist acquiring them.

Comment author: Robin_Hanson2 29 July 2007 03:00:35PM 1 point [-]

For any instrumental activity, done to achieve some other end, it makes sense to check that specific examples are in fact achieving the intended end.

Most beliefs may have as their end the refinement of personal decisions. For such beliefs it makes sense to check not only whether they affect your personal experience, but also whether they affect any decisions you might make; beliefs could affect experience without mattering for decisions.

On the other hand, some beliefs may have as their end affecting the experiences or decisions of other creatures, such as those in the far future. And you may care about effects that are not experienced by any creatures.

Comment author: Michael_Rooney 29 July 2007 06:14:09PM 11 points [-]

Eliezer, your post above strikes me, at least, as a restatement of verificationism: roughly, the view that the truth of a claim is the set of observations that it predicts. While this view enjoyed considerable popularity in the first part of the last century (and has notable antecedents going back into the early 18th century), it faces considerable conceptual hurdles, all of which have been extensively discussed in philosophical circles. One of the most prominent (and noteworthy in light of some of your other views) is the conflict between verificationism and scientific realism: that is, the presumption that science is more than mere data-predictive modeling, but the discovery of how the world really is. See also here and here.

Comment author: Eliezer_Yudkowsky 29 July 2007 06:38:10PM 8 points [-]

Rooney, as discussed in The Simple Truth I follow a correspondence theory of truth. I am also a Bayesian and a believer in Occam's Razor. If a belief has no empirical consequences then it could receive no Bayesian confirmation and could not rise to my subjective attention. In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.

Comment author: Perplexed 22 July 2010 03:18:43AM 6 points [-]

I, too, am nervous about having anticipated experience as the only criterion for truth and meaning. It seems to me that a statement can get its meaning either from the class of prior actions which make it true or from the class of future observations which its truth makes inevitable. We can't do quantum mechanics with kets, but no bras. We can't do Gentzen natural deduction with rules of elimination, but no rules of introduction. We can't do Bayesian updating with observations, but no priors. And I claim that you can't have a theory of meaning which deals only with consequences of statements being true but not with what actions put the universe into a state in which the statement becomes true.

This position of mine comes from my interpretation of the dissertation of Noam Zeilberger of CMU (2005, I think). Zeilberger's main concern lies in Logic and Computer Science, but along the way he discusses theories of truth implicit in the work of Martin-Lof and Dummett.

Comment author: timtyler 30 November 2010 08:08:50PM *  0 points [-]

I, too, am nervous about having anticipated experience as the only criterion for truth and meaning. It seems to me that a statement can get its meaning either from the class of prior actions which make it true or from the class of future observations which its truth makes inevitable.

That seems obviously correct. However, unless you pursue knowledge for its own sake, you should probably not be overly concerned with preserving past truths - unless they are going to impact on future decisions.

Of course, the decisions of a future superintelligence might depend on all kinds of historical minutiae that we don't regard as important. So maybe we should preserve those truths we regard as insignificant to us for it. However, today, probably relatively few are enslaved to future superintelligences - and even then, it isn't clear that this is what they would want us to do.

Comment author: mendel 19 May 2011 01:22:14PM *  0 points [-]

An explicit belief that you would not allow yourself to hold under these conditions would be that the tree which falls in the forest makes a sound - because no one heard it, and because we can't sense it afterwards, whether it made sound or not had no empirical consequence.

Every time I have seen this philosophical question posed on lesswrong, the two sophists that were arguing about it were in agreement that a sound would be produced (under the physical definition of the word), so I'd be really surprised if you could let go of that belief.

Comment author: Manfred 20 June 2011 01:09:12AM 1 point [-]

Hm, yeah. The trouble is how the doctrine handles deductive logic - for example, the belief that a falling tree makes vibrations in the air when the laws of physics say so is really a direct consequence of part of physics. The correct answer definitely appears to be that you can apply logic, and so the doctrine should be not to believe in something when there is no Bayesian evidence that differentiates it from some alternative.

Comment author: Nick_Tarleton 31 July 2007 03:03:22AM 9 points [-]

It's amazing how many forms of irrationality failure to see the map-territory distinction, and the resulting reification of categories (like 'sound') that exist in the mind, causes: stupid arguments, phlogiston, the Mind Projection Fallacy, correspondence bias, and probably also monotheism, substance dualism, the illusion of the self, the use of the correspondence theory of truth in moral questions... how many more?

I think you're being too hard on the English professor, though. I suspect literary labels do have something to do with the contents of a book, no matter how much nonsense might be attached to them. But I've never experienced a college English class; perhaps my innocent fantasies will be shaken then.

Michael V, you could say that mathematical propositions are really predictions about the behavior of physical systems like adding machines and mathematicians. I don't find that view very satisfying, because math seems to so fundamentally underlie everything else - mathematical truths can't be changed by changing anything physical, for instance - but it's one way to make math compatible with anticipation.

Comment author: TsviBT 06 March 2012 08:08:06AM *  3 points [-]

I suspect literary labels do have something to do with the contents of a book, no matter how much nonsense might be attached to them

I think Eliezer's point was about the student. "Wulky Wilkinsen is a 'post-utopian'" could be meaningful, if you know what a post-utopian is and is not (I don't, and don't care). The student who learns just the statement, however, has formed a floating belief.

We might even initially use propositional beliefs as indicators of meaningful beliefs about the world. But if we then discuss these highly compressed beliefs without referencing their meaning, we often feel like we are reasoning when really we have ceased to speak about the world. That is, grounded beliefs can become "floaty" and spawn further "floaty" beliefs.

In my sociology class, we talk about how "Man in his natural state has liberty because everyone is equal". "Natural state", "liberty", and "equal" could conceivably be linked to descriptions of social interaction or something. However, class after class we refrain from talking about specific behaviors. Concepts float away from their referents without much resistance - it's all the same to the student, who only needs to make a few unremarkable remarks to get his B+ for class participation. Compare:

"Man in his natural state has liberty because everyone is equal"

"Man in his natural state is equal because everyone has liberty"

"When everyone has liberty and is equal, man is in his natural state"

These statements should express very different beliefs about the world, but to the student they sound equally clever coming out of the professor's mouth.

(Edit for minor grammar and formatting)

Comment author: Nick_Tarleton 31 July 2007 03:04:01AM 2 points [-]

It's amazing how many forms of irrationality failure to see the map-territory distinction

Should have been "how many forms of irrationality result from failure...". Sorry.

Comment author: crasshopper 14 March 2008 11:27:35PM 4 points [-]

I agree with those who say it's okay to figure things out later. If my music professor says a certain composer favors the Aeolian mode, I may not be able to visualize that on the spot but who cares? I can remember that statement and think about it later. Likewise with phlogiston, I have a vague concept of what it is and someday the alchemists will discover more precisely what's going on there.

Too much cognitive effort would be spent if, every time I thought about linear algebra, I had to visualize the myriad concrete instances in which it will be applied. I bet thinking in abstractions results in way more economical use of thinking time and thinking-matter.

Comment author: Mark_Probst 07 April 2008 01:37:13PM 2 points [-]

In what way is the belief that beliefs should be grounded not a free-floating belief itself?

Comment author: adamisom 14 April 2012 06:52:13AM 1 point [-]

One way of answering might be to say that there is no separate "belief" that beliefs should be grounded. But i'm not sure.

All I know is that the question annoys me, but I can't quite put my finger on it. It reminds me of questions like (1) the accusation that you can't justify the use of logic logically, or (2) the accusation that tolerance is actually intolerant - because it's intolerant of intolerance. There might be a level distinction that needs to be made here, as in (2) - and maybe in (1) though I think that's different.

Comment author: Danfly 14 April 2012 11:23:45AM 1 point [-]

(1) has come out of my mouth on a few occasions, albeit not in those exact words. It's normally after a few beers and I feel like playing the extreme skeptic a la David Hume, just to annoy everyone. I think the best way around it is to resort to the empirical argument and say that, in our experience, it is always right: Essentially the same thing Yudkowsky does with PA arithmetic here. Trying to find an argument against it which is truly "rationalist" in the continental sense has been a dead end in my experience.

(2) sort of depends on the pragmatics and what "tolerance" actually means to the persons involved in a given context. If you define tolerance as simply being tolerant of other viewpoints, then you can still be tolerant of the intolerant viewpoints. However, if you define it as freedom from bigotry, then that could indeed be called "intolerant" by the standards of the first definition.

I hope I'm making sense here.

Comment author: MarkusRamikin 14 April 2012 07:27:03AM 1 point [-]

I anticipate expressing free-floating beliefs would get me negative karma on Less Wrong.

More seriously:

I do not anticipate free-floating beliefs being useful in the same sense that maps of reality are useful. A map can turn out to be accurate or inaccurate, and insofar as it is accurate it can help me navigate and manipulate reality. My belief that "a proper belief should not be free-floating" prohibits free-floating beliefs from doing any of that.

Or one might as well see it as not a belief, but as a definition. There's BeliefType1 which is grounded in reality, and BeliefType2 which is not, and we happen to call BeliefType1 a "proper belief". (Of course we still do it for a reason, because we care about our sheep, or rather, we care about our beliefs being true and thus useful.)

Not sure which approach makes more sense.

Comment author: Klevador 14 April 2012 09:08:01AM *  0 points [-]

The ability to anticipate experiences is one of our maximands because we have goals that are optimally achieved with this ability. To believe that beliefs should allow us to anticipate experiences is grounded in the desire to achieve our goals.

Comment author: Jan_Kanis 26 May 2008 10:13:08AM 2 points [-]

Mark: Believing that beliefs should be grounded anticipates that there is absolutely no change in anticipation if one were to change these free-floating ideas. Of course this doesn't really answer your question, because it just restates the definition of "free-floating beliefs" in different words. This belief actually follows from Eliezer's belief in Occam's Razor, which predicts that when faced with unexplained events, if one creates a set of theories explaining these events, any predictions made by the simple theories are more likely to actually happen than predictions made by complex theories. I'm not quite sure if Occam's Razor is an axiom of science or just yet another belief. At least there is quite a bit of support for this belief, if you look into the history of science.

Another point: I think phlogiston is a bit of a poor example. Phlogiston actually corresponds very closely with something currently believed as real: phlogiston is the absence of oxygen. Seeing it this way, it's very well possible to build a theory of phlogiston explaining and predicting nearly all observations of fire, e.g. fire releases phlogiston, and if you burn something in a confined space the air gets saturated with phlogiston and cannot take in any more, so the fire goes out. A very important argument in the debate between phlogistonians and oxygenists was when experiments were done to measure the weight of phlogiston and oxygen, and phlogiston turned out to have a negative weight.

Comment author: steve_roberts 05 July 2008 03:27:19PM 0 points [-]

Jan: Occam's razor is not so much a rule of science but an operating guideline for doing science. It could be reduced to "test simple theories first". In the past this has been very useful in keeping scientific effort productive, the 'belief' is that it will continue to be useful in this way.

Comment author: Hopefully_Anonymous 05 July 2008 07:05:57PM 0 points [-]

This led to a fun read of the "Occam's razor" Wikipedia entry. Hickam's dictum in particular was a great find (generalized beyond medicine, it could be that explanations for unexplained events can be as complex as they damn well please). As a practical corrective, it seems to me that probability theory suggests that the best accessible explanation to us for unexplained events is in the set of simpler theories, but is probably not one of the absolute simplest.

Comment author: James 29 July 2008 11:22:34PM 0 points [-]

Eliezer once wrote that "We can build up whole networks of beliefs that are connected only to each other - call these "floating" beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens's ability to build more general and flexible belief networks.

The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict - or better yet, prohibit."

I can't see how nearly all of the beliefs expressed in this post predict or prohibit any experience.

Comment author: DanielLC 28 December 2009 11:54:23PM 1 point [-]

"Alchemists believed that phlogiston caused fire"

How is that different than our current belief that oxygen causes fire?

Comment author: Jack 28 December 2009 11:58:26PM *  -1 points [-]

Uhhh... oxygen exists?

Comment author: DanielLC 29 December 2009 12:40:41AM 1 point [-]

And so does the absence of oxygen, or, as they called it, phlogiston.

Comment author: Nick_Tarleton 29 December 2009 01:01:17AM *  6 points [-]

The absence of oxygen isn't much like a substance whose release is fire:

  • it doesn't have any consistent physical or chemical properties;
  • many things not containing oxygen fail to burn in air, and none burn in vacuum;
  • on the other hand, things do burn under oxidizers other than oxygen;
  • oxidized substances are very poorly modeled by mixtures of the original substance and oxygen;
  • things burned in open air can either gain or lose weight;

etc.

Comment author: DanielLC 31 December 2009 12:41:13AM 1 point [-]

"it doesn't have any consistent physical or chemical properties;"

And oxides do? Or are you referring to pure phlogiston? It's not that big a deal that you can't get pure phlogiston. It's nigh impossible to purify fluorine. I think that under our current understanding of physics, it's totally impossible to isolate a single quark.

It moves because it's attracted to some things more than others. It's still attracted to everything more than itself.

"many things not containing oxygen fail to burn in air"

Hurts both theories equally. Presumably, it's strongly bonded to the phlogiston/it doesn't strongly bond to oxygen.

"...and none burn in vacuum;"

As I said, you can't get pure phlogiston.

"on the other hand, things do burn under oxidizers other than oxygen;"

Hurts both theories equally. The only way to solve it to my knowledge is that there are things that cause fire other than phlogiston/oxygen.

"things burned in open air can either gain or lose weight;"

Hurts both theories equally. Presumably, some of the matter escapes into the air sometimes.

Everything you listed either is only a very minor problem or is exactly as bad for the idea of oxygen.

Comment author: Jack 29 December 2009 05:05:12AM 10 points [-]

You're giving phlogiston qualities no one who held that theory gave it. If you want to call the absence of oxygen phlogiston, okay, but you aren't talking about the same phlogiston everyone else is talking about. Moreover, thinking about fire this way is clumsy and incompatible with the rest of our knowledge about physics and chemistry.

We already had a conception of matter when phlogiston was invented... and phlogiston was understood as a kind of matter. To say that phlogiston is really this other kind of thing, which isn't matter but a particular kind of absence of matter, is both unhelpful and a distortion of phlogiston theory. The whole point of the phlogiston theory was that they thought there was a kind of matter responsible for fire! But there isn't matter like that.

Now by defining phlogiston as the absence of oxygen you might be able to model combustion in a narrow set of circumstances-- but you couldn't fit that model with any of your other knowledge about physics and chemistry.

In short neither the original kind nor your kind of phlogiston exist.

Comment author: DanielLC 31 December 2009 12:32:22AM 1 point [-]

It was at one point theorized to have negative mass. If it's matter, and you make everything else weigh more, it works out the same.

I fail to see why you think it can't fit it with other knowledge of physics and chemistry. You can think of electricity as positively charged particles moving around with virtually zero loss of predicting power.

Comment author: Jack 31 December 2009 01:19:21AM 3 points [-]

For example, you can't use phlogiston in any model that also includes oxygen. Nor can you do any work at the molecular or sub-molecular level.

Similarly, thinking of electricity in terms of positively charged particles would be incompatible with atomic theory.

Comment author: Sniffnoy 29 December 2009 01:22:20AM 4 points [-]

Because one of these allows you to make predictions, and the other doesn't. Saying "fire has a cause, and I'm going to call it 'phlogiston'!" doesn't tell you anything about fire, it's just a relabeling. Now, if you make enough observations, maybe you'll eventually conclude that "phlogiston is the absence of oxygen" (even though this isn't really correct), but at that point you can throw out the label "phlogiston". Contrariwise, if you say "oxidization causes fire", where "oxygen" is a previously known thing with known properties, then this allows you to actually make predictions about fire. E.g., the fact that a candle in a sufficiently small closed space will go out before it melts, but not necessarily if there's a plant in there too. One pays rent, the other doesn't.

Comment author: DanielLC 29 December 2009 02:17:02AM 2 points [-]

You can make exactly the same predictions with phlogiston. If you burn coal next to iron, it will refine it. You could predict this with oxygen (oxygen is moving from the iron to the coal) or with phlogiston (phlogiston is moving from the coal to the iron).

It's like with electric charge. If you think of it as positive charge moving around, it has almost exactly the same predictive power as thinking of it as electrons moving around.

Comment author: Sniffnoy 29 December 2009 05:42:33AM 2 points [-]

But you can only predict it if you already know that a gain of phlogiston refines iron; if you don't, you can only observe it afterward and write it down as a property of phlogiston.

If you don't know anything about oxygen or phlogiston beforehand, then, sure, they're pretty much equally predictive, i.e., not very much. But if "oxygen" is not in fact just an arbitrary label as "phlogiston" is, but in fact something you're already working with in other ways, then they're not symmetric.

Also as Nick Tarleton points out below there are other asymmetries, though those are not so much in the predictive power.

Comment author: DanielLC 31 December 2009 12:42:43AM 0 points [-]

"But you can only predict it if you already know that a gain of phlogiston refines iron"

Same goes for oxygen.

Comment author: DanielLC 31 December 2009 01:47:12AM -1 points [-]

Okay, I admit that that's not really a prediction, but until then, they couldn't even explain it.

If you're going to do it like this, what's one thing oxygen predicted?

By the way, I'm responding to the fact that I lost two karma points on that, not any actual post.

Comment author: Sniffnoy 31 December 2009 01:59:17AM *  2 points [-]

That's what I just said.

Comment author: DanielLC 31 December 2009 02:25:24AM 1 point [-]

Sorry. Too used to defending my position to realize you're not attacking it.

Comment author: Nick_Tarleton 29 December 2009 02:43:28AM 9 points [-]

Because one of these allows you to make predictions, and the other doesn't. Saying "fire has a cause, and I'm going to call it 'phlogiston'!" doesn't tell you anything about fire, it's just a relabeling.

The hypothesis went a little deeper than that. "Flammable things contain a substance, and its release is fire" lets you make many predictions — e.g., that things will burn in vacuum, or that things burned in open air will always lose mass (this is how it was falsified).

Comment author: Sniffnoy 29 December 2009 05:36:58AM 0 points [-]

Ah, true.

Comment author: DanielLC 31 December 2009 01:51:35AM -2 points [-]

Always gain mass, once they realized it was negative mass.

The idea that it doesn't always gain mass doesn't falsify phlogiston any more than it falsifies oxygen for the same reason.

Also, people didn't find the change in weight particularly useful, so this wasn't that big a problem.

Again, the vacuum thing isn't much of a problem either. It's not necessarily possible to purify phlogiston.

Comment author: bigjeff5 27 January 2011 05:44:27PM 2 points [-]

I'm not sure I follow; oxidization doesn't predict gaining or losing mass (not on any scale like phlogiston would, that is), it predicts an interaction of materials forming a new composite substance. Oxidation doesn't prevent material from being lost or changed in other ways which could cause an overall greater or lesser mass than the original object. What it does predict, however, is that the total mass of all molecules in the equation, once accounted for, will be the same. This is consistent with observation.

If phlogiston has a negative mass, then anything that can burn must gain mass. I don't see any way around it. The theory states that it is a release of negative material, and there is no way to account for it once released.

One thing you would expect to find with phlogiston is an object that was primarily made up of phlogiston, giving it a negative mass. Explosives, for example, clearly have so much phlogiston that it literally rips the object (and anything nearby) apart when released. You would therefore expect all explosives to be relatively light in spite of the original weight of their components.

You could test this with black powder: saltpeter, charcoal, and sulfur each release a certain amount of phlogiston when burned. Combine them and significantly more phlogiston is clearly released. You would therefore expect more phlogiston to have flowed into the material during the combination of the three components in the making of gunpowder. However, the weight actually stays about the same. The observation doesn't bear out the prediction, so the prediction is clearly wrong. If the prediction is wrong, the theory that made it is either wrong outright, or flawed in some way. Since the only prediction phlogiston can make is wrong, the theory is at the very least flawed in some crippling way, and needs to be completely re-worked.

Its lack of ability to predict expectations is what killed it. You can predict what will happen when you add oxygen to a reaction. You cannot predict what will add phlogiston to a material, thereby allowing it to burn.

A huge example is the sun. It is a giant ball of fire - therefore, a giant ball of phlogiston, or at least a very significant portion of its mass must be made up of phlogiston in order to burn that intensely for that long. So it should have a low mass, possibly even a negative mass. Yet this giant ball of mostly phlogiston is actually the heaviest thing in the solar system by a massive margin.

Phlogiston is incompatible with many, many theories that have been independently verified. Also, oxygen causing fire is not the theory. The theory is molecules and their chemical interactions, of which oxygen is just one type, and the prediction that oxygen causes most of these exothermic reactions is consistent with all other chemical reactions and is predictable based on rules that are consistent whether a reaction is exothermic or endothermic, among a great many other things. It also predicts which objects will burn and which will not. This same chemical theory leads to atomic theory, which predicts fusion, which has absolutely nothing at all to do with oxygen, yet describes the behavior of the sun very accurately before you even start to measure the sun's output.

The way to test a theory is to predict first, then observe. This is basic science. Phlogiston cannot pass this test, chemical theory can.

Comment author: thomblake 03 May 2010 02:18:54PM 2 points [-]

Just because I haven't seen the link in this particular discussion, some more defense of phlogiston link

Comment author: simplicio 06 March 2010 06:39:28AM 14 points [-]

I loved this post, but I have to be a worthless pedant.

If you drop a ball off a 120-m tall building, you expect impact in t=sqrt(2H/g)=~5 s. But that would be when the second-hand is on the 1 numeral.
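(The arithmetic here is easy to check; a minimal sketch, assuming g ≈ 9.8 m/s² and neglecting air resistance:)

```python
import math

def fall_time(height_m, g=9.8):
    """Free-fall time from rest: t = sqrt(2H/g), ignoring air resistance."""
    return math.sqrt(2 * height_m / g)

# A 120 m drop takes about 4.95 s, so at impact the ticking second
# hand is just short of the "1" numeral (the 5-second mark).
print(round(fall_time(120), 2))
```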

Comment author: Eliezer_Yudkowsky 07 March 2010 11:32:03PM 5 points [-]

Heh. I got this right originally, then reread it just recently while working on the book, saw what I thought was an error (1 numeral? just one second? why?) and "fixed" it.

Comment author: Dpar 11 May 2010 06:57:50AM *  2 points [-]

What about knowledge for the sake of knowledge? For instance I don't anticipate that my belief that The Crusades took place will ever directly affect my sensory experiences in any way. Does that then mean that this belief is completely worthless and on the same level as the belief in ghosts, psychics, phlogiston, etc.?

Wouldn't taking your chain of reasoning to its logical conclusion require one to "evict" all beliefs in everything that one has not, and does not anticipate to, personally see, hear, smell, taste, or touch? After all, how much personal sensory experience do you have that confirms the existence of atoms, for example?

DP

Comment author: RobinZ 11 May 2010 03:30:12PM 3 points [-]

I think Eliezer's point is less strong than you think: for one thing, reading a history book is a sensory experience, and fewer history books would proclaim that The Crusades occurred in worlds where they had not than in worlds where they had.

Comment author: Dpar 07 June 2010 11:27:21AM *  1 point [-]

I was going to write a more detailed reply, but then realized that any continued discussion will require us to debate what exactly the OP meant to say in his post, which is pointless since neither of us can read his mind. So let's just call it a day.

DP

Comment author: Vladimir_Nesov 07 June 2010 12:25:34PM *  2 points [-]

I was going to write a more detailed reply, but then realized that any continued discussion will require us to debate what exactly the OP meant to say in his post, which is pointless since neither of us can read his mind. So let's just call it a day.

This is something of a fallacy of gray. Of course we can read his mind, through the power of human telepathy, by reading more on the same topic. We can't read minds perfectly, but perfect knowledge is never available anyway, and unless you can point out the specific uncertainty you have that decides the discussion, there is no sense in requiring more detail. You might want to stop the discussion for other reasons, but the reason you stated rings false.

Comment author: Dpar 09 August 2010 05:33:17PM *  1 point [-]

First of all, calling speech "human telepathy" strikes me as a little pretentious, as well as inaccurate, since the word "telepathy" is generally accepted to have supernatural connotations. Speech is speech; no need to complicate the concept.

Secondly, the article you linked seemed a little rambling and without a clear point. All I was able to take away from it is that the meaning of words is relative. If that's the case then I respond with "well, duh!"; if I missed a deeper point, please enlighten me.

Finally, when you take it upon yourself to question another person's purely subjective reasoning, you're treading very close to completely indefensible territory. If I say that I wanted to stop the discussion because I believe that the author's intended meaning is ambiguous, it's a tall order to question that that is indeed what I believe. Unless you can come up with clear evidence of how my behavior contradicts my stated subjective opinion, you more or less have to take my word that that really is what I think.

DP

Comment author: thomblake 09 August 2010 05:42:45PM *  1 point [-]

You misunderstand. Vladimir Nesov was not claiming that you don't believe that the author's intended meaning is ambiguous. Rather, he was claiming that your belief that "the author's intended meaning is ambiguous" is false, or at least not enough to constitute a good reason for stopping the discussion.

The point of calling speech 'human telepathy' in this instance is that you claimed there's no way to know what the author was thinking since we "can't read his mind". But there is a way to know what the author was thinking, to some extent, so by your own reasoning read backwards, we can indeed read minds.

Comment author: Dpar 09 August 2010 06:06:45PM -1 points [-]

I stated that taking the OP's reasoning to its logical conclusion requires one to "evict" all beliefs in everything that one has not, and does not anticipate to, personally see, hear, smell, taste, or touch. RobinZ responded by saying that the OP's point is less strong than I think. Since two (presumably) reasonable people can disagree on what the OP meant, his point, as it is written, is by definition ambiguous.

Where do we go from here other than debate what he really meant? What is the point of such debate since neither of us has any special insight into his thought process that would allow us to settle this difference of subjective interpretations? I believe that to be sufficient reason for stopping the discussion. I'm not sure what specifically Vladimir takes issue with here.

As to your point of human telepathy -- comparing reading what someone wrote to reading his mind is a very big stretch. I can see how you could make that argument if you get really technical with word definitions, but I think that it is generally accepted that reading what a person wrote on a computer screen and reading his mind are two very different things.

DP

Comment author: thomblake 09 August 2010 06:20:15PM 3 points [-]

I stated that taking the OP's reasoning to its logical conclusion requires one to "evict" all beliefs in everything that one has not, and does not anticipate to, personally see, hear, smell, taste, or touch.

Right, but RobinZ was not arguing against this claim (depending on what you mean by 'personally' here) but rather pointing out that your reasoning was flawed.

For instance I don't anticipate that my belief that The Crusades took place will ever directly affect my sensory experiences in any way.

RobinZ pointed out that your belief that the crusades took place affects your sensory experience; if you believe they happened, then you should anticipate having the sensory experience of seeing them in the appropriate place in a history book, if you were to check.

If you thought that your belief that the crusades happened did not imply any such anticipated experiences, then yes, it would be worthless and on the same level as belief in an invisible dragon in your garage.

Comment author: Dpar 09 August 2010 06:32:19PM *  0 points [-]

So reading about something in a book is a sensory experience now? I beg to differ. A sensory experience of The Crusades would be witnessing them first hand. The sensory experience of reading about them is perceiving patterns of ink on a piece of paper.

DP

Edit: Also, I think that RobinZ didn't state that as something that she believed, she stated that as something that she believed the OP meant. It's that subjective interpretation of his position that I didn't want to debate. If you wish to adopt that position as your own and debate its substance, we certainly can.

Comment author: Oligopsony 09 August 2010 06:40:37PM 2 points [-]

What's important isn't the number of degrees of removal, but that the belief's being true corresponds to different expected sensory experiences of any kind at all than its being false. The sensory experience of perceiving patterns of ink on a piece of paper counts.

Now you could say: "reading about the Crusades in history books is strong evidence that 'the Crusades happened' is the current academic consensus," and you could hypothesize that the academic consensus was wrong. This further hypothesis would lead to further expected sensory data - for instance, examining the documents cited by historians and finding that they must have been forgeries, or whatever.

Comment author: Vladimir_Nesov 09 August 2010 06:45:13PM 0 points [-]

So reading about something in a book is a sensory experience now? I beg to differ.

You are disputing definitions. Reading something in a book is the sort of thing you'd change your expectations about depending on your model of the world, as are any other observations. If your beliefs influence your expectations about observations, they are part of your model of reality. On the other hand, if they don't, they can sometimes still be part of your model of reality, but that's a more subtle point.

And returning to your earlier concerns, consider me as having special insight into the intended meaning, and providing a counterexample to the impossibility of continuing the discussion. Reading something in a history book definitely counts as anticipated experience.

Comment author: anon895 09 August 2010 06:13:14PM 1 point [-]

I was expecting the link to be Mundane Magic.

Comment author: Vladimir_Nesov 09 August 2010 06:32:13PM *  0 points [-]

The point is not that the ability is "magical", but that it's real, that we do have an ability to read minds, in exactly the same sense as Dpar appealed to the impossibility of.

Comment author: RobinZ 11 May 2010 03:34:21PM 2 points [-]

Belatedly: Welcome to Less Wrong! Please feel free to introduce yourself.

Comment author: Dpar 07 June 2010 11:27:41AM *  0 points [-]

A belated thanks! :)

DP

Comment author: garethrees 12 May 2010 04:24:53PM 17 points [-]

You write, “suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a ‘post-utopian’. What does this mean you should expect from his books? Nothing.”

I’m sympathetic to your general argument in this article, but this particular jibe is overstating your case.

There may be nothing particularly profound in the idea of ‘post-utopianism’, but it’s not meaningless. Let me see if I can persuade you.

Utopianism is the belief that an ideal society (or at least one that's much better than ours) can be constructed, for example by the application of a particular political ideology. It’s an idea that has been considered and criticized here on LessWrong. Utopian fiction explores this belief, often by portraying such an ideal society, or the process that leads to one. In utopian fiction one expects to see characters who are perfectible, conflicts resolved successfully or peacefully, and some kind of argument in favour of utopianism. Post-utopian fiction is written in reaction to this, from a skeptical or critical viewpoint about the perfectibility of people and the possibility of improving society. One expects to see irretrievably flawed characters, idealistic projects turn to failure, conflicts that are destructive and unresolved, portrayals of dystopian societies and argument against utopianism (not necessarily all of these at once, of course, but much more often than chance).

Literary categories are vague, of course, and one can argue about their boundaries, but they do make sense. H. G. Wells’ “A Modern Utopia” is a utopian novel, and Aldous Huxley’s “Brave New World” is post-utopian.

Comment author: NancyLebovitz 13 May 2010 12:28:18AM 7 points [-]

Would you consider Le Guin's The Dispossessed to be post-utopian? I think she intends her Anarres to be a good place on the whole, and a decent partial attempt at achieving a utopia, but still to have plausible problems.

Comment author: tog 21 October 2011 06:44:42AM 2 points [-]

Not to go off on a tangent, but I'd say it's more utopian than critical of utopia - I don't think we can require utopias to be perfect to deserve the name, and Anarres is pretty (perhaps unrealistically) good, with radical (though not complete) changes in human nature for the better.

Comment author: Jack 13 May 2010 12:32:30AM *  1 point [-]

Brave New World is definitely dystopian, not post-utopian. Nancy's suggestion for post-utopian is exactly right. I definitely agree that we can meaningfully classify cultural production, though.

Comment author: garethrees 13 May 2010 11:46:22AM 8 points [-]

I think it's both. "Brave New World" portrays a dystopia (Huxley called it a "negative utopia") but it's also post-utopian because it displays skepticism towards utopian ideals (Huxley wrote it in reaction to H. G. Wells' "Men Like Gods").

I don't claim any expertise on this subject: in fact, I hadn't heard of post-utopianism at all until I read the word in this article. It just seemed to me to be overstating the case to claim that a term like this is meaningless. Vague, certainly. Not very profound, yes. But meaningless, no.

The meaning is easily deducible: in the history of ideas "post-" is often used to mean "after; in consequence of; in reaction to" (and "utopian" is straightforward). I checked my understanding by searching Google Scholar and Books: there seems to be only one book on the subject (The post-utopian imagination: American culture in the long 1950s by M. Keith Booker) but from reading the preview it seems to be using the word in the way that I described above.

The fact that the literature on the subject is small makes post-utopianism an easier target for this kind of attack: few people are likely to be familiar with the idea, or motivated to defend it, and it's harder to establish what the consensus on the subject is. By contrast, imagine trying to claim that "hard science fiction" was a meaningless term.

Comment author: David_Gerard 02 December 2010 02:12:56PM *  7 points [-]

Indeed. Some rationalists have a fondness for using straw postmodernists to illustrate irrationality. (Note that Alan Sokal deliberately chose a very poor journal, not even peer-reviewed, to send his fake paper to.) It's really not all incomprehensible Frenchmen. While there may be a small number of postmodernists who literally do not believe objective reality exists, and some more who try to deconstruct actual science and not just the scientists doing it, it remains the case that the human cultural realm is inherently squishy and much more relative than people commonly assume, and postmodernism is a useful critical technique to get through the layers of obfuscation motivating many human cultural activities. Any writer of fiction who is any good, for instance, needs to know postmodernist techniques, whether they call them that or not.

Comment author: TheOtherDave 02 December 2010 03:46:53PM 3 points [-]

Yes.

That said, it's not too surprising that postmodernists are often the straw opponent of choice.

The idea that the categories we experience as "in the world" are actually in our heads is something postmodernists share with cognitive scientists; many of the topics discussed here (especially those explicitly concerned with cognitive bias) are part of that same enterprise.

I suspect this leads to a kind of uncanny valley effect, where something similar-but-different creates more revulsion than something genuinely opposed would.

Of course, knowing that does not make me any less frustrated with the sort of soi-disant postmodernist for whom category deconstruction is just a verbal formula, rather than the end result of actual thought.

I also weakly suspect that postmodernists get a particularly bad rap simply because of the oxymoronic name.

Comment author: David_Gerard 02 December 2010 03:51:29PM 1 point [-]

That said, it's not too surprising that postmodernists are often the straw opponent of choice.

Oh yeah. While it's far from a worthless field, and straw postmodernists are a sign of lazy thinking, it is also the case that postmodernism contains staggering quantities of complete BS.

Thankfully, these are also susceptible to postmodernist analysis, if not by those who wish to keep their status ...

Comment author: BarbaraB 14 June 2012 08:55:57PM 0 points [-]

I played a mental game trying to make predictions based on the information that Wulky Wilkinsen is post-utopian and shows colonial alienation - never heard of any of that before :-).

Wulky Wilkinsen is post-utopian ... I expect to find a bunch of critically acclaimed authors, who wrote their most famous books before Wulky wrote his most famous books (5 - 15 years ahead ?), lived in the same general area as Wulky, and portrayed people who were more altruistic and prone to serve the general good than we normally see in real life. It does not say too much about the actual writing style of Wulky - he could have written either in a similar way as "the bunch" (utopians), or just the opposite - he could have been just fed up with the utopians' style and portrayed people more evil than we normally see in everyday life. So my prediction does not tell what Wulky's books feel like, but it is still a prediction, right ?

Colonial alienation - the book contains characters that have lived in a colony (e.g. India) for a long time (although they might have just arrived in the "maternal" colonial country, e.g. Britain). These characters are confronted with other characters that have lived in the "maternal" colonial country for a long time (although they might have just arrived in the colony :-) ). There are conflicts between these two groups of people, based on their background. They have different preferences when they are making decisions, probably involving other people. Thus they are alienated.

Do not tell me this was not the point of Eliezer's post, let me just have some fun !

Comment author: Leafy 13 May 2010 12:56:52PM -1 points [-]

How is this not just a simple argument over semantics (on which I believe a vast majority of arguments are based)?

They both accept that the tree causes vibrations in the air as it falls, and they both accept that no human ear will ever hear it. The argument appears to be based solely on the definition, and surrounding implications, of the word "sound" (or "noise" as it becomes in the article) - and is therefore no argument at all.

Comment author: bigjeff5 27 January 2011 06:12:59PM 2 points [-]

I think that may have been the point:

The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.

You can define a thing based on any criteria you like. It simply has to allow your expectations to agree with reality in order for it to be true.

One says "it is sound because it vibrates regardless of whether anyone hears it." This person believes that sound is the vibrations.

The other says "it is not sound because it is never processed in a mind." This person does not deny that the vibrations exist, he simply believes it isn't sound until someone hears it.

These two have different definitions of "sound", but within their definitions both allow expectations that are completely consistent with reality. The point is to make sure your beliefs "pay rent" - that they allow you to have expectations that match up with reality. If the second person had the same belief of what sound was as the first (i.e. vibrations in the air), yet also believed that vibrations in the air do not occur when there is nobody to hear them, that belief would not pay rent. When they recorded the sound with nobody around he would expect there to be nothing at all on the tape, yet there would be something on the tape. The only way to resolve this is to adjust your belief after the fact, which means your belief couldn't pay its rent.

Comment author: timtyler 21 August 2010 10:07:09AM *  0 points [-]
Comment author: Rain 22 August 2010 01:14:20PM 2 points [-]

This video has sound problems which immediately turned me off wanting to try and parse what he's saying. I suggest using a microphone and properly syncing the sound if they intend to do many more of these.

Comment author: alexvermeer 04 January 2011 07:07:01PM 3 points [-]

"Or suppose your postmodern English professor teaches you that the famous Wulky Wilkinsen is actually a "post-utopian". What does this mean you should expect from his book? Nothing."

When I first read this I thought, "Huh? Surely it tells you something, because I already have beliefs about what 'utopian' probably means, and what the 'post' part of it probably means, and what context these types of terms are usually used in... That sounds like a whole bag of reasons to expect certain things/themes/ideas in his book!"

But I think this missed the point Eliezer is making; a point I suggest would be more clear if he said:

"Or suppose your postmodern English professor teaches you that the famous Wulky Wilkinsen is actually a "barnbeanbaggle". What does this mean you should expect from his book? Nothing."

Darn right. I have no idea what a "barnbeanbaggle" is. It creates no anticipations about what I'll find in his book; it's free-floating.

Comment author: ata 04 January 2011 07:46:19PM *  3 points [-]

Free-floating beliefs have to at least feel like beliefs. You can't even think you have a belief about whether Wulky Wilkinsen is a barnbeanbaggle unless you think you have some idea of what "barnbeanbaggle" is being used to mean. The thing about using a made-up word is that it's too easy to notice that you don't know what to anticipate from it. The thing about "post-utopian" is that, even if you have some idea of what "post-utopian" is supposed to mean, being told (by someone you perceive as sufficiently authoritative) that a certain author is "post-utopian" is quite likely to just make you selectively interpret that author's works to fit that schema. Similar to how you can make professional wine tasters describe a white wine the way they usually describe red wines by dyeing it red.

Comment author: alexvermeer 04 January 2011 09:20:07PM 1 point [-]

The made-up word being too easy to notice is a good point.

  1. "I believe Wulky is a post-utopian."
  2. "The professor says Wulky is a post-utopian, and I expect to figure out what the term means and confirm or disconfirm this claim by reading his book."

When I first read this post I thought (2), and if I understand it right, the post is attacking (1).

I may be getting too tied-up with the labels being used...

Comment author: Will_Sawin 05 January 2011 10:29:01PM 0 points [-]

You originally misunderstood Eliezer's point, and now understand it.

If many people will similarly misunderstand it, that is a reason for Eliezer to change it on lesswrong or if/when it appears in his book. If you are relatively unusual, it is only a weak reason.

Reasons not to change it would be a lack of viable alternatives. Can we think of an alternative better than "post-utopian" or "barnbeanbaggle"? For example, a less meaningful term from literary theory or another field?

Comment author: BarbaraB 14 June 2012 08:07:18PM 1 point [-]

My boyfriend just suggested "metaspontaneity" !

Comment author: BarbaraB 14 June 2012 09:03:04PM 0 points [-]
Comment author: MoreOn 25 February 2011 06:45:42PM *  4 points [-]

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

If some average Joe believes he’s smart and beautiful, and that gives him utility, is that necessarily a bad thing? Joe approaches a girl in a bar, dips his sweaty fingers in her iced drink, cracks a piece of ice in his teeth, pulls it out of his mouth, shoves it in her face for demonstration, and says, “Now that I’d broken the ice—”

She thinks: “What a butt-ugly idiot!” and gets the hell away from him.

Joe goes on happily believing that he’s smart and beautiful.

For myself, the answer is obvious: my beliefs are means to an end, not ends in themselves. They’re utility producers only insofar as they help me accomplish utility-producing operations. If I were to buy stock believing that its price would go up, I better hope my belief paid its rent in correct anticipation, or else it goes out the door.

But for Joe? If he has utility-pumping beliefs, then why not? It’s not like he would get any smarter or prettier by figuring out he’s been a butt-ugly idiot this whole time.

Comment author: Spurlock 25 February 2011 07:40:26PM *  4 points [-]

It's sort of taken for granted here that it is in general better to have correct beliefs (though there have been some discussions as to why this is the case). It may be that there are specific (perhaps contrived) situations where this is not the case, but in general, so far as we can tell, having the map that matches the territory is a big win in the utility department.

In Joe's case, it may be that he is happier thinking he's beautiful than he is thinking he is ugly. And it may be that, for you, correct beliefs are not themselves terminal values (ends in themselves). But in both cases, having correct beliefs can still produce utility. Joe for example might make a better effort to improve his appearance, might be more likely to approach girls who are in his league and at his intellectual level, thereby actually finding some sort of romantic fulfillment instead of just scaring away uninterested ladies. He might also not put all his eggs in the "underwear model" and "astrophysicist" baskets career-wise. You can further twist the example to remove these advantages, but then we're just getting further and further from reality.

Overall, the consensus seems to be that wrong beliefs can often be locally optimal (meaning that giving them up might result in a temporary utility loss, or that you can lose utility by not shifting them far enough towards truth), but a maximally rational outlook will pay off in the long run.

Comment author: Manfred 25 February 2011 07:54:04PM 3 points [-]

The trouble is that this rationale leads directly to wireheading at the first chance you get - choosing to become a brain in a vat with your reward centers constantly stimulated. Many people don't want that, so those people should make their beliefs only a means to an end.

However, there are some people who would be fine with wireheading themselves, and those people will be totally unswayed by this sort of argument. If Joe is one of them... yeah, sure, a sufficiently pleasant belief is better than facing reality. In this particular case, I might still recommend that Joe face the facts, since admitting that you have a problem is the first step. If he shapes up enough, he might even get married and live happily ever after.

Comment author: TheOtherDave 25 February 2011 09:04:38PM 1 point [-]

Well, he might. Or, rather, there might be available ways of becoming smarter or prettier for which jettisoning his false beliefs is a necessary precondition.

But, admittedly, he might not.

Anyway, sure, if Joe "terminally" values his beliefs about the world, then he gets just as much utility out of operating within a VR simulation of his beliefs as out of operating in the world. Or more, if his beliefs turn out to be inconsistent with the world.

That said, I don't actually know anyone for whom this is true.

Comment author: MoreOn 25 February 2011 11:29:11PM 0 points [-]

That said, I don't actually know anyone for whom this is true.

I don't know too many theist janitors, either. Doesn't mean they don't exist.

From my perspective, it sucks to be them. But once you're them, all you can do is minimize your misery by finding some local utility maximum and staying there.

Comment author: jimrandomh 25 February 2011 10:12:36PM 5 points [-]

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

They can. They just do so very rarely, and since accepting some inaccurate beliefs makes it harder to determine which beliefs are and aren't beneficial, in practice we get the highest utility from favoring accuracy. It's very hard to keep the negative effects of a false belief contained; they tend to have subtle downsides. In the example you gave, Joe's belief that he's already smart and beautiful might be stopping him from pursuing self-improvements. But there are cases where accurate beliefs are definitely detrimental; Nick Bostrom's Information Hazards has a partial taxonomy of them.

Comment author: HonoreDB 26 February 2011 01:47:39AM 0 points [-]

I don't think it's possible for a reflectively consistent decision-maker to gain utility from self-deception, at least if you're using an updateless decision theory. Hiding an unpleasant fact F from yourself is equivalent to deciding never to know whether F is true or false, which means fixing your belief in F at your prior probability for it. But a consistent decision-maker who loses 10 utilons from believing F with probability ~1 must lose p*10 utilons for believing F with probability p.

Comment author: jimrandomh 26 February 2011 03:04:19AM *  2 points [-]

A consistent decision-maker who loses 10 utilons from believing F with probability ~1 must lose p*10 utilons for believing F with probability p.

No, this is not true. Many of the reasons why true beliefs can be bad for you involve information about your beliefs leaking out to other agents in ways other than through your actions, and there is no particular reason for this effect to be linear. For example, blocking communications from a potential blackmailer is good because knowing with probability 1.0 that you're being blackmailed is more than 5 times worse than knowing with probability 0.2 that you will be blackmailed in the future if you don't.

Comment author: HonoreDB 26 February 2011 05:12:04PM 0 points [-]

Oh, sure. By "gain utility" I meant "gain utility directly," as in the average Joe story.

Comment author: jimrandomh 26 February 2011 05:20:27PM 0 points [-]

I don't think it's linear in the average Joe story, either; if there's one threshold level of belief which changes his behavior, then utility is constant for levels of belief on either side of that threshold and discontinuous in between.

Comment author: HonoreDB 26 February 2011 05:47:07PM 1 point [-]

A rational agent can have its behavior depend on a threshold crossing of belief, but if there's some belief that grants it utility in itself (e.g. Joe likes to believe he is attractive), the utility it gains from that belief has to be linear with the level of belief. Otherwise, Joe can get dutch-booked by a Monte Carlo plastic surgeon.

Comment author: jimrandomh 26 February 2011 05:58:54PM 0 points [-]

Otherwise, Joe can get dutch-booked by a Monte Carlo plastic surgeon.

This doesn't sound right. Could you describe the Dutch-booking procedure explicitly? Assume that believing P with probability p gives me utility U(p)=p^2+C.

Comment author: HonoreDB 26 February 2011 07:33:13PM *  0 points [-]

An additive constant seems meaningless here: if Joe gets C utilons no matter what p is, then those utilons are unrelated to p or to P--Joe's behavior should be identical if U(p)=p^2, so for simplicity I'll ignore the C.

Now, suppose Joe currently believes he is not attractive. A surgery has a .5 chance of making him attractive and a .5 chance of doing nothing. This surgery is worth U(.5)-U(0)=.25 utilons to Joe; he'll pay up to that amount for it.

Suppose instead the surgeon promises to try again, once, if the first surgery fails. Then Joe's overall chance of becoming attractive is .75, so he'll pay U(.75)-U(0)=.75^2=0.5625 for the deal.

Suppose Joe has taken the first deal, and the surgeon offers to upgrade it to the second. Joe is willing to pay up to the difference in prices for the upgrade, so he'll pay .5625-.25=.3125 for the upgrade.

Joe buys the upgrade. The surgeon performs the first surgery. Joe wakes up and learns that the surgery failed. Joe is entitled to a second surgery, thanks to that .3125-utility purchase of the upgrade. But the second surgery is now worth only .25 utility to him! The surgeon offers to buy that second surgery back from him at a cost of .26 utility. Joe accepts. Joe has spent a net of .0525 utility on an upgrade that gave him no benefit.

As a sanity check, let's look at how it would go if Joe's U(p)=p. The single surgery is worth .5. The double surgery is worth .75. Joe will pay up to .25 utility for the upgrade. After the first surgery fails, the upgrade is worth .5 utility. Joe does not regret his purchase.
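The arithmetic above can be checked in a few lines. This is a minimal sketch of my own (not from the thread), assuming a utility-of-belief function U for Joe's belief "I am attractive" and the surgeon's 0.26 buyback offer:

```python
# Dutch-booking Joe via a nonlinear utility-of-belief function U(p),
# following the surgery numbers in the comment above.

def dutch_book(U, buyback=0.26):
    single = U(0.5) - U(0.0)       # one surgery: 50% chance of becoming attractive
    double = U(0.75) - U(0.0)      # try-twice deal: 75% overall chance
    upgrade = double - single      # what Joe will pay to upgrade one deal to the other
    remaining = U(0.5) - U(0.0)    # value of the second surgery after the first fails
    accepts = buyback > remaining  # Joe sells the second surgery back iff it's a gain for him
    net_loss = upgrade - buyback if accepts else None
    return upgrade, remaining, accepts, net_loss

# Nonlinear U: Joe pays 0.3125 for the upgrade, then sells it back for 0.26.
print(dutch_book(lambda p: p ** 2))
# Linear U: the remaining surgery is still worth 0.5, so Joe declines the buyback.
print(dutch_book(lambda p: p))
```

With U(p) = p^2, Joe ends up roughly 0.0525 utility poorer for an upgrade that gave him nothing; with U(p) = p, the buyback offer is below the remaining value and no money is pumped.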

Comment author: jimrandomh 26 February 2011 08:25:59PM *  2 points [-]

You're missing the fact that how much Joe values the surgery depends on whether or not he expects to be told whether it worked afterward. If Joe expects to have the surgery but to never find out whether or not it worked, then its value is U(0.5)-U(0)=0.25. On the other hand, if he expects to be told whether it worked or not, then he ends up with a belief-score of either 0 or 1, not 0.5, so its value is (0.5*U(1.0) + 0.5*U(0)) - U(0) = 0.5.

Suppose Joe is uncertain whether he's attractive or not - he assigns it a probability of 1/3. Someone offers to tell him the true answer. If Joe's utility-of-belief function is U(p)=p^2, then being told the answer is worth ((1/3)*U(1) + (2/3)*U(0)) - U(1/3) = ((1/3)*1 + (2/3)*0) - (1/9) = 2/9, so he takes the offer. If on the other hand his utility-of-belief function were U(p)=sqrt(p), then being told the information would be worth ((1/3)*sqrt(1) + (2/3)*sqrt(0)) - sqrt(1/3) = -0.244, so he plugs his ears.
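As a sketch (my own, not from the thread), the value-of-information calculation above, for a prior p and a utility-of-belief function U:

```python
# Expected value (to Joe) of being told the true answer, given that his
# belief then lands on 1 (with probability p) or 0 (with probability 1-p).
import math

def value_of_answer(p, U):
    return p * U(1.0) + (1 - p) * U(0.0) - U(p)

print(value_of_answer(1 / 3, lambda q: q ** 2))  # convex U: positive, take the offer
print(value_of_answer(1 / 3, math.sqrt))         # concave U: negative, plug your ears
```

A convex utility-of-belief function like p^2 makes the answer worth 2/9 to Joe, while a concave one like sqrt(p) makes it worth about -0.244, matching the numbers in the comment.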

Comment author: NancyLebovitz 25 February 2011 10:21:16PM 0 points [-]

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

Is there a difference between utility and anticipated experiences? I can see a case that utility is probability of anticipated, desired experiences, but for purposes of this argument, I don't think that makes for an important difference.

Comment author: MoreOn 25 February 2011 11:19:03PM 0 points [-]

"Smart and beautiful" Joe is being Pascal's-mugged by his own beliefs. His anticipated experiences lead to exorbitantly high utility. When failure costs (relatively) little, it subtracts little utility by comparison.

I suppose you could use the same argument for the lottery-playing Joe. And you would realize that people like Joe, on average, are worse off. You wouldn't want to be Joe. But once you are Joe, his irrationality looks different from the inside.

Comment author: JGWeissman 25 February 2011 11:17:30PM 0 points [-]

In this example, Joe's belief that he's smart and beautiful does pay rent in anticipated experience. He anticipates a favorable reaction if he approaches a girl with his gimmick and pickup line. As it happens, his inaccurate beliefs are paying rent in inaccurate anticipated experiences, and he goes wrong epistemically by not noticing that his actual experience differs from his anticipated experience and updating his beliefs accordingly.

The virtue of making beliefs pay rent in anticipated experience protects you from forming incoherent beliefs, maps not corresponding to any territory. Joe's beliefs are coherent, correspond to a part of the territory, and are persistently wrong.

Comment author: MoreOn 25 February 2011 11:24:56PM 0 points [-]

If my tenants paid rent with a piece of paper that said "moneeez" on it, I wouldn't call it paying rent.

In your view, don't all beliefs pay rent in some anticipated experience, no matter how bad that rent is?

Comment author: JGWeissman 25 February 2011 11:32:24PM *  0 points [-]

In your view, don't all beliefs pay rent in some anticipated experience, no matter how bad that rent is?

No, for an example of beliefs that don't pay rent in any anticipated experience, see the first 3 paragraphs of this article:

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, "Yes it does, for it makes vibrations in the air." Another says, "No it does not, for there is no auditory processing in any brain."

Suppose that, after the tree falls, the two walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying "No," and the other saying "Yes," they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.

Comment author: MoreOn 25 February 2011 11:34:40PM *  1 point [-]

Two people have semantically different beliefs.

Both beliefs lead them to anticipate the same experience.

EDIT: In other words, two people might think they have different beliefs, but when it comes to anticipated experiences, they have similar enough beliefs about the properties of sound waves and the properties of falling trees and recorders and etc etc that they anticipate the same experience.

Comment author: JGWeissman 25 February 2011 11:53:11PM 2 points [-]

Two people have semantically different beliefs.

Taboo "semantically".

See also the example of The Dragon in the Garage, as discussed in the followup article.

Comment author: MoreOn 26 February 2011 12:31:18AM 0 points [-]

Taboo'ed. See edit.

Although I have a bone to pick with the whole "belief in belief" business, right now I'll concede that people actually do carry beliefs around that don't lead to anticipated experiences. Wulky Wilkinsen being a "post-utopian" (as interpreted from my current state of knowing 0 about Wulky Wilkinsen and post-utopians) is a belief that doesn't pay any rent at all, not even a paper that says "moneeez."

Comment author: Steven_Bukal 27 June 2011 07:41:47PM 1 point [-]

If my tenants paid rent with a piece of paper that said "moneeez" on it, I wouldn't call it paying rent.

Or they pay you with forged bills. You think you'll be able to deposit them at the bank and spend them to buy stuff, but what actually happens is the bank freezes your account and the teller at the store calls the police on you.

Comment author: buybuydandavis 21 September 2011 09:43:35AM *  3 points [-]

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

I think you've hit on one of the conceptual weaknesses of many Rationalists. Beliefs can pay rent in many ways, but Rationalists tend to only value the predictive utility of beliefs, and pooh-pooh the other utilities of belief. Comfort utility - it makes me feel good to believe it. Social utility - people will like me for believing it. Efficacy utility - I can be more effective if I believe it.

Predictive truth is a means to value, and even if it is a value in itself, it's surely not the only one. Instead of pooh-poohing other types of utility, to convince people you need to use that predictive utility to analyze how the other utilities can best be fulfilled.

Comment author: rabidchicken 16 March 2011 01:09:25AM 2 points [-]

This post probably changed the way I regulate my own thoughts more than any other. How many arguments I have heard never would have happened if everyone involved read this...

Comment author: undermind 13 April 2011 11:40:01PM 1 point [-]

Based on this, I would very much like to make a variant of Monopoly, with beliefs/theories in place of properties, and evidence for money. Invest a large chunk to establish a belief, with its rent determined by sophistication and usefulness of prediction, ranging from Aristotelian physics to relativity, spermatists & ovists to Darwinian evolution, and so on. Other players would have to give you some credit when they land on your theories, and admit that they give results.
This would also be a great way to teach some history of science, if well designed.
Of course, the analogy becomes interesting when you consider what corresponds to the cutthroat capitalism...

Comment author: mendel 19 May 2011 01:34:21PM *  2 points [-]

I don't understand how the examples given illustrate free-floating beliefs: they seem to have at least some predictive power, and thus shape anticipation (some comments by others below illustrate this better).

  • The phlogiston theory had predictive power (e.g. what kind of "air" could be expected to support combustion, and that substances would grow lighter when they burned), and it was falsifiable (and was eventually falsified). It had advantages over the theories it replaced and was replaced by another theory which represented a better understanding. (I base this reading on Jim Loy's page on Phlogiston Theory.)

  • Literary genres don't have much predictive power if you don't know anything about them - if you do, then they do. Classifying a writer as producing "science fiction" or "fantasy" creates anticipations that are statistically meaningful. For another comparison, saying some band plays "Death Metal" will shape our anticipation; somewhat differently for those who can distinguish Death Metal from Speed Metal than for those who merely know that "Metal" means "noise".

I can imagine beliefs leading to false anticipations, and they're obviously inferior to beliefs leading to more correct ones. That doesn't mean they're free-floating.

One example of a free-floating belief is actually the one about the tree falling in the forest: believing that it makes a sound does not lead to any anticipated sensory experience, since the tree falls explicitly where nobody is around to hear it, and whether there is sound or no sound will not change how the forest looks when we enter it later. However, letting go of the belief that the tree makes a sound does not seem to me to be very useful. What am I missing?

I understand that many beliefs are held not because they have predictive power, but because they generalize experiences (or thoughts) we have had into a condensed form: a sort of "packing algorithm" for the mind when we detect something common. When we understand this commonality well enough, we reach the point where we can make predictions; until then, we can't, though we may be able to later. There is no belief or thought we can hold that we couldn't trace back to experiences; beliefs are not anticipatory, but formed from hindsight. They organize past experience. Can you predict which of these beliefs is not going to be helpful in organizing future experiences? How?

Comment author: allenpaltrow 03 June 2011 05:43:50PM *  0 points [-]

I think that this is really a discussion of explanatory power, of which scientific causation is one example. All theories attempt to explain a set of examples. Scientific theories attempt to explain causation in natural phenomena, thus their "explanatory power" is proportional to their predictive power. A unified theory of forces at the planetary and subatomic levels would explain more examples than any do now, thus it would have great explanatory power.

Yet causation isn't the only type of explanatory relationship. Causation implies time and events, whereas these are only one type of explanation. For example, the Pythagorean theorem explains why physical right triangles in reality have the lengths that they do. It doesn't "cause" them to have the properties they do. It would be foolish to say that any property of physical triangles "explains" or "proves" the Pythagorean theorem, because mathematical truths exist independent of practicalities. Plato's dialogue The Euthyphro beautifully explains why, even if the set of things which are x and the set of things which are y are equivalent (in that case, the set of pious actions and the set of god-loved actions), they are not the same quality if one (god-loved) explains the other (piety) and not vice versa. Similarly, the total number of hydrogen atoms in a glass of water is always even, but it is the quality of evenness (any number which is a multiple of two must be even) that explains this, not any quality of hydrogen. The one "explains" (but does not "cause") the other.

Thus, I think some parts of this post would be better understood as being stated as thus: any theory which provides no additional explanatory power should be ignored.

So, looking at the case of Phlogiston, the OP is not saying it is "wrong," but that it lacks the explanatory power that justifies it as a useful theory. If I take the Neils Bohr model of the atom, and say that there are extra invisible subatomic particles, and that these particles are "god," you would be hard pressed to prove me wrong. But this theory does not predict any new phenomena, nor is it falsifiable, nor, most importantly, does it have an explanatory relationship with any other known truth about atoms: none of them explain this theory, and it explains none of them. It exists completely independent from any other aspect of atomic theory, thus it lacks any explanatory power as a theory.

Yet there are theories which have great explanatory power but not empirical predictive power. Let's say I'm a simplistic deontologist who says that killing is wrong because human life is good. Along comes a utilitarian who says: I have a theory which explains, in all the cases where you're right, why you are right, and in those cases where you aren't, why you aren't, according to your own first principle. In terms of my very simplistic ethical theory, the utilitarian would absolutely be "less wrong" than me, for he has provided a theory which better explains the hard cases my theory failed to (justified killings, kill 1 to save 2, etc.).

In the case of the post-utopian author, I think that we again are getting wrapped up in "prediction" when we should concern ourselves with explanation.

What is a plumber? Is it a man who comes to your house, sits on your couch, eats your food, watches your TV, and flirts with your wife? Even if this is true of all plumbers, it is not the definition of a plumber. Definitions should be prescriptive, such that they give you the means to determine what counts as an x, and what a good x is. If a plumber fixes pipes, anyone who fixes pipes is a plumber, a good plumber fixes them well, and no one who doesn't fix pipes is a plumber.

Thus, hold literary labels to the same standard. Don't ask, "is this label true?" Because as we saw earlier with the god-particle example, many theories cannot be proven false but still have greater or lesser explanatory power (see economics, ethical theories, etc.). The better standard is explanatory power. Is there a definition of the quality "post-utopian" such that any book with quality x is post-utopian, x explains why it counts as post-utopian, and the more x it is, the more post-utopian it is? Saying post-utopian is a, b, c, d, e, f, g, h, but failing to provide a single explanation of the aforementioned form is like calling the plumber a man who eats your food and flirts with your wife: it is a descriptive definition, not a prescriptive definition. It may be true of every plumber, but it is not the thing that makes plumbers count as plumbers.

I think the OP meant to say that literary labels like post-utopianism fail to meet this standard. Sure, you can come up with descriptive statements of the terms which may be true (post-utopian books do not portray utopian societies as possible), but this is not a definition, because it is not this quality that a. makes post-utopian books count as post-utopian, b. without which a book cannot be post-utopian, and c. designates a clear set of books which either are, or are not, post-utopian. Textual analysis perhaps can be more wrong and "less wrong," but literary theories are just not the sorts of truth-bearing statements that mathematical, scientific, or philosophical theories are.

Compare "post-utopian" to "even". Even numbers are a set of specific numbers, but there is a single quality they have (being multiples of 2) which explains why they are in the set. Without that quality, they would, "by definition", not be even. This is the standard we should be looking for in definitions and theories. Not just that they are "true" (plumbers do steal your food, watch your tv, and flirt with your wife) but that they have the sort of explanatory power we've isolated.

Thus, I think the larger point of the post stands. There are better theories and worse theories, and we should prefer the better ones.

Comment author: Alicorn 03 June 2011 06:00:36PM 1 point [-]

deontologist who says that killing is wrong because human life is good.

Aaaaaaaaugh.

Comment author: allenpaltrow 03 June 2011 06:44:18PM *  0 points [-]

I'm not trying to define the terms, just to posit a very, very simple theory of the form "killing is wrong because human life is good." Such a theory would be inferior, on its own premises, to a very, very simple utilitarianism, regardless of whether either theory or the premise itself is true. As such I oversimplified utilitarianism just as much, but it doesn't matter for the scope of the example.

Edit: in fact, for the purposes of the example it is better if the "deontologist" is wrong about deontology, because it better illustrates how one theory can have greater explanatory power than another only on the grounds of the former's justification without reference to external verifiability. "human life is good" is a poor first principle, but if it is true, the utilitarian's principle applies it better than the "deontologist's" did.

Comment author: Alicorn 03 June 2011 06:53:14PM 0 points [-]

Someone who believes that killing is wrong because human life is good is not a deontologist. See here.

Comment author: allenpaltrow 03 June 2011 07:32:25PM 0 points [-]

Here the deontologist is arguing for the principle 'killing is wrong regardless of the consequences' (deontic) but uses a poor justification for which consequentialism is the more reasonable conclusion. So the 'deontologist' is wrong even though his principle cannot be externally verified. I was just (unclearly, I see) using this strawman to illustrate how theories can be better and worse at explaining what they attempt to explain without being the sorts of things which can be proven. I will attempt to be clearer in future.

Comment author: potato 15 June 2011 10:50:22AM *  0 points [-]

Wonderful exposition of versificationism (I meant verificationism lol, but I won't change it cause I like the reply below). I do have a question though. You said:

It's tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly.

Well yes, we don't directly observe atoms (actually we do now, but we didn't have to). But it is still safe to say that if a belief doesn't make predictions about future sensory experiences, it is meaningless, or at least unverifiable. Those predictions may be about the shape of ink squiggles on a piece of paper after some rules are applied, or about the pattern that a monitor's many pixels will form in response to some instrument in an experiment. In either case, the hypothesis is always linked to the world by the senses - or are you claiming something different?

Comment author: gjm 20 June 2011 11:10:07AM 3 points [-]

Wonderful exposition of versificationism.

Versificationism is presumably the doctrine that the truth of a proposition should be evaluated on the basis of how easily it can be expressed in poetic form. Empirically, this seems to favour any number of probably-untrue beliefs, so I'm inclined to reject it. :-)

I have in fact seen something a little like this, in a more sophisticated form, maintained seriously. For instance, here's Dorothy L Sayers (the context is her series of radio plays "The man born to be king"). "From the purely dramatic point of view the theology is enormously advantageous, because it locks the whole structure into a massive intellectual coherence. It is scarcely possible to build up anything lop-sided, trivial or unsound on that steely and gigantic framework. [...] there is no more searching test of a theology than to submit it to dramatic handling; nothing so glaringly exposes inconsistencies in a character, a story, or a philosophy as to put it upon the stage and allow it to speak for itself. [...] As I once made a character say in another context: 'Right in art is right in practice'; and I can only affirm that at no point have I yet found artistic truth and theological truth at variance."

And, though I disagree with her entirely on the truth of the sort of theology she's writing about, I think she does actually have a point of sorts. But a professional writer of fiction like Sayers really ought to have known better than to suggest that truth can be distinguished from untruth by seeing how easily each can be formed into art.

Comment author: AspiringRationalist 19 July 2012 08:27:00PM 16 points [-]

A related epistemology that is popular in the business world is PowerPointificationism, which holds that the truth of a proposition should be evaluated by how easily it can be expressed in PowerPoint. Due to the nature of PowerPoint as a means of expression, this epistemology often produces results similar to those of Occam's sand-blaster, which holds that the simplest explanation is the correct one (note that unlike Occam's razor, Occam's sand-blaster does not require that the explanation be consistent with observation).

Comment author: TheOtherDave 19 July 2012 08:56:04PM 5 points [-]

Occam's sand-blaster, which holds that the simplest explanation is the correct one (note that unlike Occam's razor, Occam's sand-blaster does not require that the explanation be consistent with observation).

...and I just spit coffee on my keyboard.

That's marvelous... is that original with you?

Comment author: fubarobfusco 15 September 2012 05:45:14PM 0 points [-]

I take it you're familiar with Edward Tufte's "The Cognitive Style of PowerPoint"?

Comment author: bibilthaysose 30 July 2011 01:40:38PM 0 points [-]

Good article. Some thoughts:

I probably constrain my experiences in lots of ways that I don't even know about, but I don't think there's always a way to know whether a belief will constrain your experiences, even if it is based on empirical (or even scientific) observation. Isaac Newton's beliefs constrained all of our beliefs for centuries. Scholars were so unwilling to question classical mechanics that they came up with this "ether" stuff that could never be observed directly, and thus didn't further constrain their experience, but had the nice side effect of resolving inconsistencies in their previously held theories. However, even though Einstein's theory was more correct than Newton's, without Newton's theory mechanical engineering wouldn't exist, and without Einstein's, the Bomb wouldn't exist. I mean this is obviously a gross oversimplification of the development of the Bomb, but I'm just saying there's not much use for relativity outside of a classroom/particle accelerator.

Comment author: army1987 16 September 2011 11:04:46AM 6 points [-]

there's not much use for relativity outside of a classroom/particle accelerator

Global Positioning System

Comment author: Ab3 02 February 2012 10:15:56PM 1 point [-]

I understand that having beliefs that are falsifiable in principle and make predictions about experience is incredibly important. But I have always wondered if my belief in falsifiability was itself falsifiable. In any possible universe I can imagine it seems that holding the principle of falsifiability for our beliefs would be a good idea. I can't imagine a universe or an experience that would make me give this up.

How can I believe in the principle of falsifiability that is itself unfalsifiable?! I feel as though something has gone wrong in my thinking but I can't tell what. Please help!

Comment author: TheOtherDave 04 February 2012 04:25:01AM 2 points [-]

Excellent question!

Excellent, because it illustrates the problem with "believing in" the principle of falsifiability, as opposed to using it and understanding how it relates to the rest of my thinking.

Forget that the principle of falsifiability is itself incredibly important. What sorts of beliefs does the principle of falsifiability tell me to increase my confidence in? To decrease my confidence in?

What would the world have to be like for the former beliefs to be in general less likely than the latter?

Comment author: Ab3 04 February 2012 09:51:24PM 0 points [-]

Thanks for the reply Dave. Are you saying I should not look at falsifiability as a belief, but rather a tool of some sort? That distinction sounds interesting but is not 100% clear to me. Perhaps someone should do a larger post about why the principle should not be applied to itself.

I have also thought of putting the problem this way: Eliezer states that the only ideas worth having are the ones we would be willing to give up. Is he willing to give up that idea? I don't think so..., and I would be really interested to know why he doesn't believe this to be a contradiction.

Comment author: TheOtherDave 05 February 2012 01:55:24AM 2 points [-]

What I'm saying is that the important thing is what I can do with my beliefs. If the "principle of falsifiability" does some valuable thing X, then in worlds where the PoF doesn't do X, I should be willing to discard it. If the PoF doesn't do any valuable thing X, then I should be willing to discard it in this world.

Comment author: Ab3 09 February 2012 06:53:00PM 0 points [-]

It seems we have empirical and non-empirical beliefs that can both be rational, but what we mean by “rational” has a different sense in each case. We call empirical beliefs “rational” when we have good evidence for them, we call non-empirical beliefs like the PoF “rational” when we find that they have a high utility value, meaning there is a lot we can do with the principle (it excludes maps that can’t conform to any territory).

To answer my original question, it seems a consequence of this is that the PoF doesn’t apply to itself, as it is a principle that is meant for empirical beliefs only. Because the PoF is a different kind of belief from an empirical belief, it need not be falsifiable, only more useful than our current alternatives. What do you think about that?

Comment author: TheOtherDave 09 February 2012 11:28:16PM 1 point [-]

I think it depends on what the PoF actually is.

If it can be restated as "I will on average be more effective at achieving my goals if I only adopt falsifiable beliefs," for example, then it is equivalent to an empirical belief (and is, incidentally, falsifiable).

If it can be restated as "I should only adopt falsifiable beliefs, whether doing so gets me anything I want or not" then there exists no empirical belief to which it is equivalent (and is, incidentally, worth discarding).

Comment author: TimS 04 February 2012 04:50:31AM *  0 points [-]

For me the principle of falsifiability is best understood as a way of distinguishing scientific theories about the world from other theories about the world. In other words, falsifiability is one way of defining what science is and is not. A theory that does not constrain experience ("God works in mysterious ways") is not a scientific theory because it can explain any occurrence and is therefore not falsifiable.

Because falsifiability is a definition, not a theory about the world, there's no reason to think it can be falsified. The definition could be wrong by failing to accurately or usefully define scientific theory, but that's conceptually different.

Comment author: Jayson_Virissimo 04 February 2012 09:00:39AM *  0 points [-]

For me the principle of falsifiability is best understood as a way of distinguishing scientific theories about the world from other theories about the world. In other words, falsifiability is one way of defining what science is and is not. A theory that does not constrain experience ("God works in mysterious ways") is not a scientific theory because it can explain any occurrence and is therefore not falsifiable.

Because falsifiability is a definition, not a theory about the world, there's no reason to think it can be falsified. The definition could be wrong by failing to accurately or usefully define scientific theory, but that's conceptually different.

Falsifiability is a very bad way to define science (or scientific theories). If falsifiability was all it took for a theory to be scientific, then all theories known to be false would be scientific (after all, if something is known to be false, it must be falsifiable). Do we really want a definition of science that says astrology is science because it's false?

Comment author: JoachimSchipper 04 February 2012 09:49:55AM 0 points [-]

Astrology does seem to consist of scientific hypotheses.

Comment author: Jayson_Virissimo 04 February 2012 11:02:28AM 0 points [-]

Astrology does seem to consist of scientific hypotheses.

I chose astrology because it has a reverse halo effect around here (and so would serve me rhetorically). Feel free to replace it with any other known to be false set of propositions.

Comment author: TimS 04 February 2012 05:31:25PM 0 points [-]

I agree that falsifiability is not a complete definition. My point was only that falsifiability is not applicable to the principle of falsifiability, any more than it applies to mathematics.

That said, Newton's physics and geocentric theories are false. Are they not science simply for that reason?

Comment author: Jayson_Virissimo 05 February 2012 06:21:48AM *  0 points [-]

I agree that falsifiability is not a complete definition. My point was only that falsifiability is not applicable to the principle of falsifiability, any more than it applies to mathematics.

Yes. Falsifiability is a poor definition of science and is self-undermining in the sense that it can't pass its own test.

That said, Newton's physics and geocentric theories are false. Are they not science simply for that reason?

Of course not. I'm not claiming a scientific theory must be true. I'm claiming that known falseness (which implies falsifiability) is not a sufficient condition for being scientific.

Comment author: TimS 06 February 2012 12:46:37AM 0 points [-]

A theory that does not constrain experience ("God works in mysterious ways") is not a scientific theory because it can explain any occurrence and is therefore not falsifiable.

That statement does not itself constrain experience. That's not a useful critique of the statement.

I'm claiming that known falseness (which implies falsifiability) is not a sufficient condition for being scientific.

Known falseness is not really the same thing as falsifiability. Known falseness is useless in deciding whether a theory is scientific. Both the Greek pantheon and geocentric theories are known to be false.

Falsifiability is simply the requirement that a scientific theory list things that can't happen under that theory. Falsifiability says scientific theories don't look for evidence in support; they look for evidence to test the theory.

The fact that no false statement has yet appeared doesn't mean a theory isn't falsifiable; a theory whose every statement has so far been true can still be falsifiable.

Comment author: gwern 06 February 2012 01:50:19AM 2 points [-]

That statement does not itself constrain experience. That's not a useful critique of the statement.

That doesn't seem true. The statement seems to perfectly constrain experience: you will not experience situations where theories which do not constrain experience will still be falsified.

And indeed, watching the world go by over the years, I see theories like 'Christianity' or 'psychoanalysis' which do not constrain experience at all have yet to be falsified - exactly as predicted.

Comment author: TimS 06 February 2012 02:32:17AM 0 points [-]

Fine, you want to be contrary. What experience would falsify the partial definition of scientific theory that I have labelled "the principle of falsifiability"? If no such experience exists, does this call into doubt the usefulness of the principle?

Comment author: Jayson_Virissimo 06 February 2012 08:56:28AM *  0 points [-]

Known falseness is not really the same thing as falsifiability. Known falseness is useless in deciding whether a theory is scientific. Both the Greek pantheon and geocentric theories are known to be false.

Falsifiability is simply the requirement that a scientific theory list things that can't happen under that theory. Falsifiability says scientific theories don't look for evidence in support; they look for evidence to test the theory.

The fact that no false statement has yet appeared doesn't mean a theory isn't falsifiable; a theory whose every statement has so far been true can still be falsifiable.

Nothing in this reply contradicts anything I have asserted. I was merely claiming that if falsifiability is a sufficient condition for a hypothesis to be "scientific", then all theories known to be false are scientific (because if we know they are false, then they must be falsifiable). I'm not being contrarian; I'm pointing out a deductive consequence of the very definition of falsifiability that you linked to. Hopefully this closes the inferential distance:

  • If a hypothesis is falsifiable, then it is scientific.
  • If a hypothesis is known to be false, then it is falsifiable.
  • Therefore, if a hypothesis is known to be false then it is scientific.

I am merely denying the first premise via reductio ad absurdum, because the conclusion is obviously false (and the second premise isn't). If you took my claim to be something other than this, then you have simply misread me.

Comment author: TimS 06 February 2012 02:59:51PM 1 point [-]

That's much clearer. I didn't intend to assert that falsifiability was a sufficient condition for a theory being scientific, only that it is a necessary condition. That's what I mean by saying it was a partial definition.

Thus, I don't intend to assert the first sentence of your syllogism. Instead, I would say, "If a hypothesis is not falsifiable, then it is not scientific." Adding the second statement yields: "If a hypothesis is known to be false, then it might be scientific." That's a true statement, but I don't claim it is very insightful.

Comment author: nshepperd 06 February 2012 10:39:35AM *  1 point [-]

*shrug*

I don't think the current line of enquiry is particularly useful.

"Astrology works" is a scientific theory to the degree that it is, in fact, acceptable science to do an experiment to see whether or not astrology has predictive power. It's rhetorically inaccurate to say that means "astrology is science" though, because of course the practice of astrology is not. But sure, it's probably a good idea to include other conditions. Excessively unlikely (or non-reductionist?) hypotheses could be classified as non-scientific, for the simple reason that even considering them in the first place would be a case of privileging the hypothesis.

None of this contradicts falsifiability being "a way of distinguishing scientific theories about the world from other theories about the world", if we have other ways of distinguishing scientific from non-scientific, such as "reductionism".

Comment author: [deleted] 05 February 2012 07:27:32AM *  2 points [-]

How can I believe in the principle of falsifiability that is itself unfalsifiable?! I feel as though something has gone wrong in my thinking but I can't tell what.

You have just refuted the contention that all warranted beliefs must be falsifiable in principle. Karl Popper, who introduced the falsifiability criterion and pushed it as far as, if not further than, it can go, never advocated that all beliefs should be falsifiable. Rather, he used falsifiability as the criterion of demarcation between science and non-science, while denying that all beliefs should be scientific. His contention that falsifiability demarcates science does imply, as he recognized, that the criterion of falsifiability is not itself a scientific hypothesis.

Rational beliefs are not necessarily scientific beliefs. Mathematics is rational without being falsifiable. The same is true of philosophical beliefs, such as the belief that scientific beliefs are falsifiable. But rational beliefs that are not scientific must be refutable, and falsifiable beliefs are a proper subset of refutable beliefs. Falsifiable beliefs are refutable in one particular way: they are refutable by observation statements, which I think are equivalent to EY's anticipations. Science is special because it is 1) empirical (unlike mathematics) and 2) has an unusual capacity to grow human knowledge systematically (unlike philosophy). But that does not imply that we can make do with scientific beliefs exclusively, one reason being the one that you mention about criteria for the acceptance of scientific theories.

The broader criterion of refutability doesn't necessarily involve refutation by observation statements. How would you refute the falsifiability criterion? It would be false if it turned out that scientists secured the advance of science by using some other criterion (such as verification).

It's a mistake to conflate the questions of whether a theory is scientific and whether it's corroborated (by attempted falsifications), or to conflate whether it's scientific with whether it's rationally believable. Theories aren't bad because they aren't science. They're bad because they're set up so they resist any form of refutation. Rational thought involves making your thinking vulnerable to potential refutation, rather than protecting it from any refutation. In science, the mode of refutation is observation, direct connection to sensory data. But it won't do (as you've realized by trying to apply falsifiability to itself) to limit one's thinking entirely to that which is falsifiable.

You later ask (in effect) whether the refutability criterion is itself even refutable. Would EY be willing, ever, to give it up? He should be, were someone to show that sheer dogmatism conduces to the growth of knowledge. That I can't conceive of a plausible argument to that end doesn't obviate the refutability of the contention.

I think that resolves your confusion, but I don't want to imply that Popper uttered the last word—there are problems with neglecting verification in favor of strict falsificationism.

Comment author: Ab3 09 February 2012 06:30:24PM 0 points [-]

Thank you for your thoughts.

What are the criteria that we use for accepting or refuting rational non-empirical beliefs? You mention that falsifiability would be refuted if some other criteria “secured the advance of science.” You also mention that we should give up the refutability criterion if “sheer dogmatism conduces to the growth of knowledge.” It sounds like our criteria for the refutability of non-empirical beliefs are mostly practical; we accept the epistemic assumptions that make things “work best.” Is there more to it than this?

Comment author: [deleted] 10 February 2012 03:57:13AM *  1 point [-]

To be pedantic and Popperian, I'd have to correct your use of "empirical beliefs." The philosophical positions at issue aren't scientific, but they are empirical. "Empirical" claims, in the sense that grounds scientific observation statements, must be expressible in low-level observation sentences that all competent scientists agree on.

The belief in question is that science's crucial distinguishing feature, the one allowing it to advance, is the subjection of its claims to empirical testing, allowing strict falsification. We can't run an experiment or otherwise record observation statements, so we resort to philosophical debate aimed at refutation. Refutation is obtained by plausible argument. For instance, in the discussion about demarcation, an example of a potentially plausible argument goes as follows: if we relied on falsification exclusively, we would never have evidence that a claim is true, only that it isn't false. But we rely on scientific theories and consider them close to the truth (or at least probably so). Therefore, falsifiability can't explain the distinctiveness of science.

This involves highly plausible claims, based on observation, about how we in fact use scientific theories. But although the result of observation, it can't be reduced to something everyone agrees on that is closely tied to direct perception, as with an observation statement.

Comment author: vinayak 15 May 2012 04:26:41AM 1 point [-]

I have read this post before and agreed with it. But I read it again just now and have new doubts.

I still agree that beliefs should pay rent in anticipated experiences. But I am not sure any more that the examples stated here demonstrate it.

Consider the example of the tree falling in a forest. Both sides of the argument do have anticipated experiences connected to their beliefs. For the first person, the test of whether a tree makes a sound or not is to place an air vibration detector in the vicinity of the tree and check it later. If it did detect some vibration, the answer is yes. For the second person, the test is to monitor every person living on earth and see if their brains did the kind of auditory processing that the falling tree would make them do. Since the first person's test has turned out to be positive and the second person's test has turned out to be negative, they say "yes" and "no" respectively as answers to the question, "Did the tree make any sound?"

So the problem here doesn't seem to be an absence of rent in anticipated experiences. There is some problem, true, because there is no single anticipated experience where the two people anticipate opposite outcomes even though one says that the tree makes a sound and the other one says it doesn't. But it seems like that's because of a different reason.

Say person A has a set of observations X, Y, and Z that he thinks are crucial for deciding whether the tree made any sound. For example, if X is positive, he concludes that the tree made a sound, otherwise that it didn't; if Y is negative, he concludes it did not make a sound; and so on. Here, X could be "caused air vibrations", for example. For all other kinds of observations, A has a don't-care protocol, i.e., the other observations do not say anything about the sound. Similarly, person B has a set X', Y', Z' of crucial observations, and all other observations lie in his set of don't-cares. The problem here is just that X, Y, Z are completely disjoint from X', Y', Z'. Thus even though A and B differ in their opinions about whether the tree made a sound, there is no single observation for which they would anticipate opposite outcomes.
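This disjoint-criteria structure can be sketched in a few lines of Python (a minimal illustration; the observation names and the `verdict` helper are hypothetical, not from the post):

```python
# A minimal sketch, assuming hypothetical observation names: each observer's
# verdict depends only on their own crucial observations, so they can give
# opposite verbal answers while agreeing on every shared observation.
observations = {"air_vibrations": True, "auditory_processing": False}

def verdict(crucial, obs):
    """Answer 'did it make a sound?' using only this observer's crucial tests."""
    return all(obs[name] for name in crucial)

a_says_sound = verdict({"air_vibrations"}, observations)       # A answers yes
b_says_sound = verdict({"auditory_processing"}, observations)  # B answers no
```

Both agents would predict the same reading for any single detector; the disagreement lives entirely in which readings each treats as criterial for the word "sound".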

Comment author: prashantsohani 02 June 2012 10:18:27PM 0 points [-]

Suppose someone, on inspecting his own beliefs to date, discovers a certain sense of underlying structure; for instance, one may observe a recurring theme of evolutionary logic. Then, while deciding on a new set of beliefs, would it not be reasonable for him to anticipate and test for similar structure, just as he would use other 'external' evidence? Here, we are not dealing with direct experience so much as the mere belief of an experience of coherence within one's thoughts, which may be an illusion, for all we know. But then again, assuming that the existing thoughts came from previous 'external' evidence, could one say that the anticipated structure is indeed well-rooted in experience already?

Comment author: abbyjh 11 July 2012 11:13:24PM 1 point [-]

I was reading those 'what good is math?' and 'what good is music' comments. You can determine what if any 'system' is good or bad based on the understanding or misunderstanding of the variables involved.

i.e., one does not have any use for math if one does not understand any of the vast number of variables associated with the concepts of math. Math cannot be any good to a person who doesn't understand it.

This principle applies to any 'system' whether it be math, music, love, life... etc.

Comment author: JohnEPaton 30 July 2012 05:34:22AM 0 points [-]

If a belief turns deadbeat, evict it.

This might be challenging because our beliefs tend to shape the world we live in thus masking their error. Does anyone have any practical tips for discovering erroneous beliefs?

Comment author: Nectanebo 30 July 2012 06:31:00AM 1 point [-]

The post you replied to is helpful advice for doing just that.

Above all, don't ask what to believe—ask what to anticipate.

When what you specifically anticipate doesn't line up with what happens, that's how you discover a possibly erroneous belief.

Comment author: Mestroyer 22 June 2013 01:40:50PM 0 points [-]

What about things I remember from long ago, which no one else remembers and for which I can find no present evidence or record of besides those memories themselves?

Comment author: christopherj 03 October 2013 06:32:34PM 1 point [-]

Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you.

What if I had the belief that a certain coin was unfair, with a 51% chance of heads and only 49% chance of tails? Certainly I could observe an absurd number of coin flips, and each bunch of them could nudge my belief -- but short of an infinite number of flips, none would "definitely" falsify it. Certainly in this case, I could come to believe with an arbitrary level of certainty in the falsehood of the belief. But I don't believe that would apply in general -- what if, to reach any arbitrary level of certainty in testing a belief, I'd need to think up and apply an indefinite number of unique tests? For example, a belief concerning the state of mind of another person -- I can't think of a definite test, nor can I repeat any test indefinitely to increase certainty.

On a related note, why abandon Bayes in this case for Popper, without any disclaimer? E.g., falsificationism is useful because it fights magic explanations and positive bias, but a belief is still predictive if observation causes you to slightly shift its probability.
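The coin case can be made concrete with a small Bayesian sketch (a hypothetical illustration, not from the comment itself): each batch of flips shifts the log-odds between the "biased 51/49" and "fair" hypotheses, nudging the belief without ever definitely falsifying it.

```python
import math

def log_odds_update(prior_log_odds, n_heads, n_tails, p_biased=0.51, p_fair=0.5):
    """Add the log-likelihood ratio of the observed flips to the prior log-odds
    (positive favors the biased-coin hypothesis, negative favors fair)."""
    ll_biased = n_heads * math.log(p_biased) + n_tails * math.log(1 - p_biased)
    ll_fair = (n_heads + n_tails) * math.log(p_fair)
    return prior_log_odds + ll_biased - ll_fair

# 10,000 flips landing exactly 50/50 only slightly favor "fair":
posterior = log_odds_update(0.0, 5000, 5000)  # roughly -2 nats, far from certainty
```

No finite batch drives the log-odds to minus infinity, which matches the observation that the 51% belief is never "definitely" falsified, only made arbitrarily improbable.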

Comment author: tylerj 03 January 2014 03:33:05PM 0 points [-]

What caused you to believe a 51% chance of heads versus a 49% chance of tails?

Comment author: MathieuRoy 14 October 2013 02:17:33AM *  0 points [-]

Another example of these types of questions: "If a man who cannot count finds a four-leaf clover, is he lucky?" (Stanisław Jerzy Lec)

Comment author: tylerj 02 January 2014 02:47:54PM *  0 points [-]

Or suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a "post-utopian".

Suppose you, an invisible man, overheard 1,000,000 distinct individual humans proclaim "I believe that Velma Valedo and Wulky Wilkinsen are post-utopians based on several thorough readings of their complete bibliographies!"

Must there be some correspondence (probably an extremely complex connection) between the writings, and, quite possibly, between some of the 1,000,000 brains that believe this? The subjectively defined "post-utopian" does not hold much evidential weight when simply mentioned by one informed English professor, but when the attribute "post-utopian" is used to describe two distinct authors by many blind and informed subjects, does this (even a little bit) allow us to anticipate any similarities between (some of) the subjects' brains or between (some of) the authors' writings?