
Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, "Yes it does, for it makes vibrations in the air." Another says, "No it does not, for there is no auditory processing in any brain."

Suppose that, after the tree falls, the two walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying "No," and the other saying "Yes," they do not anticipate any different experiences.  The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.

It's tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don't see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don't experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step.

You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?

To answer precisely, you must use beliefs like Earth's gravity is 9.8 meters per second per second, and This building is around 120 meters tall. These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional. It probably does not exaggerate much to describe these two beliefs as sentences made out of words. But these two beliefs have an inferential consequence that is a direct sensory anticipation—if the clock's second hand is on the 12 numeral when you drop the ball, you anticipate seeing it on the 1 numeral when you hear the crash five seconds later. To anticipate sensory experiences as precisely as possible, we must process beliefs that are not anticipations of sensory experience.
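The five-second figure follows from the kinematics those two beliefs imply. A minimal sketch, assuming no air resistance and the stated values of 9.8 m/s² and 120 m:

```python
import math

# Beliefs as numbers (values taken from the post):
g = 9.8        # m/s^2 -- "Earth's gravity is 9.8 meters per second per second"
height = 120.0 # m     -- "This building is around 120 meters tall"

# Free fall from rest: height = (1/2) * g * t^2, so t = sqrt(2 * height / g).
t = math.sqrt(2 * height / g)
print(f"{t:.2f} s")  # about 4.95 s -- roughly five ticks of the second hand
```

The two propositional beliefs enter only as the numbers `g` and `height`; the output is the sensory anticipation.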

It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.

The same brain that builds a network of inferred causes behind sensory experience, can also build a network of causes that is not connected to sensory experience, or poorly connected. Alchemists believed that phlogiston caused fire—we could oversimplify their minds by drawing a little node labeled "Phlogiston", and an arrow from this node to their sensory experience of a crackling campfire—but this belief yielded no advance predictions; the link from phlogiston to experience was always configured after the experience, rather than constraining the experience in advance. Or suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a "post-utopian". What does this mean you should expect from his books? Nothing. The belief, if you can call it that, doesn't connect to sensory experience at all. But you had better remember the propositional assertion that "Wulky Wilkinsen" has the "post-utopian" attribute, so you can regurgitate it on the upcoming quiz. Likewise if "post-utopians" show "colonial alienation"; if the quiz asks whether Wulky Wilkinsen shows colonial alienation, you'd better answer yes. The beliefs are connected to each other, though still not connected to any anticipated experience.

We can build up whole networks of beliefs that are connected only to each other—call these "floating" beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens's ability to build more general and flexible belief networks.

The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit.  Do you believe that phlogiston is the cause of fire?  Then what do you expect to see happen, because of that? Do you believe that Wulky Wilkinsen is a post-utopian? Then what do you expect to see because of that? No, not "colonial alienation"; what experience will happen to you? Do you believe that if a tree falls in the forest, and no one hears it, it still makes a sound? Then what experience must therefore befall you?

It is even better to ask: what experience must not happen to you?  Do you believe that elan vital explains the mysterious aliveness of living beings?  Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you.  It floats.

When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can't find the difference of anticipation, you're probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network. If you don't know what experiences are implied by Wulky Wilkinsen being a post-utopian, you can go on arguing forever. (You can also publish papers forever.)

Above all, don't ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.

 

Making Beliefs Pay Rent (in Anticipated Experiences)

Great post. As always.

I assume that most of math is being ignored for simplicity's sake?

6David_Allencourt
I think his point isn't so much that what you're saying WILL have a practical impact on your sensory experiences, just that it has the potential to do so. What you "expect" to experience as a result. In real life we can't weld a pair of trillion-pound bars of gold to each other and then see how much they weigh, but because of mathematics, we know that if we were to place them on an accurate scale we would see a weight of two trillion pounds.

What good is math if people don't know what to connect it to?

50VKS

All math pays rent.

For all mathematical theorems can be restated in the form:

If the axioms A, B, and C and the conditions X, Y and Z are satisfied, then the statement Q is also true.

Therefore, in any situations where the statements A, B, C and X, Y, Z are true, you will expect Q to also be verified.

In other words, mathematical statements automatically pay rent in terms of changing what you expect. (Which is) the very thing it was required to show. ■


In practice:

If you demonstrate Pythagoras's Theorem, and you calculate that 3^2+4^2=5^2, you will expect a certain method of getting right angles to work.

If you exhibit the aperiodic Penrose Tiling, you will expect Quasicrystals to exist.

If you demonstrate the impossibility of solving the Halting Problem, you will not expect even a hypothetical hyperintelligence to be able to solve it.

If you understand why you can't trisect an angle with an unmarked ruler and a compass (not both used at the same time), you will know immediately that certain proofs are going to be wrong.

and so on and so forth.

Yes, we might not immediately know where a given mathematical fact will come in handy when observing the world, but by their nature, mathematical facts tell us exactly when to expect them.
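VKS's first example can be made concrete. A small illustrative sketch (my own, not from the comment): the 3-4-5 triple, and the right angle the law of cosines predicts a 3-4-5 triangle will contain:

```python
import math

# The Pythagorean triple the comment mentions:
assert 3**2 + 4**2 == 5**2

# Anticipated experience: a triangle with sides 3, 4, 5 (e.g. a knotted
# rope stretched taut) will contain a right angle. The law of cosines
# gives the angle opposite the side of length 5:
a, b, c = 3.0, 4.0, 5.0
angle = math.degrees(math.acos((a**2 + b**2 - c**2) / (2 * a * b)))
print(angle)  # 90.0
```

The theorem is proved once; the anticipation (stretch the rope, measure the corner) is available forever after.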

5Daniel Clayton
Is this to say that one of the purposes of mathematics is to prove something new, even without knowing what it might be used for, with the awareness that it might be useful at a later point? Or that it might form part of a proof for something else that is also currently unknown? 
2MIN0010
Yes, there are numerous cases where a field of "pure" mathematics produced interesting theorems that mathematicians pursued for their challenging and elegant nature, and which were only later found to be practically useful, at which point we call them "applied" mathematics. Frankly, the distinction is blurred: pure mathematics is so useful (see Eugene Wigner's "The Unreasonable Effectiveness of Mathematics in the Natural Sciences") that its abstract structures find general application across many domains. For instance, Einstein's GR was based on the pure mathematics of Riemannian manifolds, an abstract structure not initially tied to reality in any way. Likewise algebraic topology is used for data mining, number theory for cryptography, linear algebra for machine learning, group theory for particle physics... and even Bayesian probability theory for LW rationality. Stephen Wolfram has great resources on rulial spaces and the nature of computation for the universe's fundamental ontology (the territory, not the map) in which these networks of theorems can correspond to our empirical reality. (psst I am a very new LW user, and I am deciding if I should do a Sequence for this idea of "rulial cover", which is how rulial deduction can be applied to Solomonoff induction and Bayesian abduction; it would be great if someone thinks this is interesting to explore so I can be motivated) To link back to Eliezer's post, "floating beliefs" in a Bayesian net can be connected by adjusting the "weights" of the edges that connect that belief, using Bayesian inference, and mathematics makes these inferences robust from axioms (deductive validity as 100% in weight and 0% in prior). Therefore, anticipation becomes certain under a set of idealized axioms.
2Bruno Vieira
'I am deciding if I should do a Sequence for this idea of "rulial cover" which is how rulial deduction can be applied to Solomonoff induction and Bayesian abduction'   I don't really know what you mean, but if it's something unseen you can expect it to be useful!
6jirkazr
Is it not the purpose of math to tell us "how" to connect things? At the bottom, there are some axioms that we accept as basis of the model, and using another formal model we can infer what to expect from anything whose behavior matches our axioms. Math makes it very hard to reason about models incorrectly. That's why it's good. Even parts of math that seem particularly outlandish and disconnected just build a higher-level framework on top of more basic concepts that have been successfully utilized over and over again. That gives us a solid framework on which we can base our reasoning about abstract ideas. Just a few decades ago most people believed the theory of probability was just a useless mathematical game, disconnected from any empirical reality. Now people like you and me use it every day to quantify uncertainty and make better decisions. The connections are not always obvious.
4A1987dM
http://abstrusegoose.com/504 :-)
0[anonymous]
That's exactly how I felt in high school. I'm glad I changed that, because it wouldn't be useful to me if I'd never learned algebra. The first part of the class is hard to use and discouraging to new students.
-4TheAncientGeek
Is pure math a set of beliefs that should be evicted?
4g_pepper
No, for reasons expressed above by VKS.
0TheAncientGeek
Note the word "pure". By definition, pure maths doesn't pay off in experience. If it did, it would be applied.
2g_pepper
IMO the distinction between pure and applied math is artificial, or at least contingent; today's pure math may be tomorrow's applied math. This point was made in VKS's comment referenced above:
0TheAncientGeek
The question is whether anyone should believe pure maths now. If you are allowed to believe things that might possibly pay off, then the criterion excludes nothing.
3lalaithion
Metabeliefs! Applied math concepts that seemed useless have, in the past, become useful. Therefore, the belief that "believing in applied math concepts pays rent in experience" pays rent in experience, so therefore you should believe it.
0g_pepper
Unlike scientific knowledge or other beliefs about the material world, a mathematical fact (e.g. that z follows from X1, X2,..., Xn), once proven, is beyond dispute; there is no chance that such a fact will be contradicted by future observations. One is allowed to believe mathematical facts (once proven) because they are indisputably true; that these facts pay rent is supported by VKS's argument.
1TheAncientGeek
Truths of pure maths don't pay rent in terms of expected experience. EY has put forward a criterion of truth, correspondence, and a criterion of believability, expected experience, and pure maths fits neither. He didn't want that to happen, and the problem remains, here and elsewhere, of how to include abstract maths and still exclude the things you don't like. This is old ground, which the logical positivists went over in the mid 20th century.
0Richard_Kennaway
Here is a truth of pure mathematics: every positive integer can be expressed as a sum of four squares. Expected experiences: there will be proofs of this theorem, proofs that I can follow through myself to check their correctness. Et voilà!
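Richard_Kennaway's theorem is Lagrange's four-square theorem, and it pays rent in a cruder way too: a brute-force search, sketched here as an illustration, should never find a counterexample:

```python
import math

def is_sum_of_four_squares(n):
    """Check whether n = a^2 + b^2 + c^2 + d^2 for non-negative integers."""
    r = math.isqrt(n)
    for a in range(r + 1):
        for b in range(a, r + 1):
            for c in range(b, r + 1):
                d2 = n - a*a - b*b - c*c
                if d2 < 0:
                    break  # c only grows from here, so no point continuing
                d = math.isqrt(d2)
                if d * d == d2:
                    return True
    return False

# Lagrange's theorem anticipates: this never fails, for any positive integer.
assert all(is_sum_of_four_squares(n) for n in range(1, 1001))
print("verified for 1..1000")
```

The proof is the stronger anticipation, of course: it tells you the loop above will succeed for every `n`, not just the ones you have patience to check.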
1TheAncientGeek
Truth of astrology: Mars in conjunction with Jupiter is dangerous for Leos. Expected experience: there will be astrology articles saying Leos are in danger when Mars is in conjunction with Jupiter.
3Richard_Kennaway
Of course astrological claims pay rent. The problem with astrology is not that it's meaningless but that it's false, and the problem with astrologers is that they don't pay the epistemological rent. Also, a proof is a different thing from a mathematician saying so. The rent that is being paid there is not merely that the theorem will be asserted but that there will be a proof.
0TheAncientGeek
Try telling Eliezer
0Richard_Kennaway
The original post does not mention astrology. If you want to spy out some place where Eliezer has said that astrological claims are meaningless, go right ahead. I am not particularly concerned with whether he has or not. Here and now, you are talking to me, and as I pointed out, the belief can pay rent, but astrologers are not making it do so. Those who have seriously looked for evidence, have, so I understand, generally found the beliefs false.
0polymathwannabe
From that belief, the expected experience should be Leo people being less fortunate during those days.
0TheAncientGeek
That was the point. It's a cheat to expect astrology truths to produce experiences of reading written materials about astrology, so it's a cheat to expect pure maths truths ...
3Richard_Kennaway
Let me complete the ellipsis with what I actually said. A mathematical assertion leads me to expect a proof. Not merely experiences of reading written materials repeating the assertion.
0TheAncientGeek
And a proof still isn't an experience in the relevant sense. It's not like predicting an eclipse.
1wizzwizz4
What's the difference between behaviours of non-sentient objects and behaviours of sentient people that makes one an experience and the other not?
1Paolo Falabella
I think this is both right and not in contradiction with the post. The belief that pays the rent here is that there is going to be a high correlation between Mars being in conjunction with Jupiter and astrology believers born around August experiencing heightened feelings of being in danger. That does not say anything on the "truth" of astrology itself. Same applies to the article's example on Wulky Wilkinsen. The belief that alienated resublimation justifies the fictional author's retropositionality does not pay rent. The belief that failing to mention retropositionality correlates with higher chances of failing a literature test on Wilkinsen does probably pay rent.
1g_pepper
I think I see where you are going with this. My initial interpretation of EY's original post is that he was explicating a scientific standard of belief that would make sense in many situations, including in reasoning about the physical world (EY's initial examples were physical phenomena - trees falling, bowling balls dropping, phlogiston, etc.). I did not really think he was proposing the only standard of belief. This is why I was baffled by your insistence that unless a mathematical fact had made successful predictions about physical, observable phenomena, it should be evicted. However, later in the original post EY used an example out of literary criticism, and here he appears to be applying the standard to mathematics. So, you may be on to something - perhaps EY did intend the standard to be universally applied. It seems to me that applying EY's standard too broadly is tantamount to scientism (which I suspect is more-less the point you were making).
0Epictetus
If you believe in applied math, what are the grounds for excluding "pure" math? Most of the time "pure" just means that the mathematician makes no explicit reference to real-world applications and that the theorems are formulated in an abstract setting. Abstraction usually just boils down to figuring out exactly which hypotheses are necessary to get the conclusion you want and then dispensing with the rest. Let's take the theory of probability as an example. There's nothing in the general theory that contradicts everyday, real-world probability applications. Most of the time the general theory does little other than make precise our intuitive notions and avoid the paradoxes that plague a naive approach. This is an artifact of our insistence on logic. A thorough, logical examination of just about any piece of mathematics will quickly lead to the domain "pure" math.
-1TheAncientGeek
I am not making the statement "exclude pure math", I am posing the question "if pure math stays, what else stays?" Maybe post utopianism is an abstract idealisation that makes certain concepts precise.
3Epictetus
There are beliefs that directly pay rent, and then there are beliefs that are logical consequences of rent-paying beliefs. The same basic principles that give you applied math will also lead to pure math. We can justify spending effort on pure math on the grounds that it may pay off in the future. However, our belief in pure math is tied to our belief in logic. If you asked whether this can be applied to something like astrology, I'd ask whether astrology was a logical consequence of beliefs that do pay rent.

In practice, most of the time people figure out what to connect it to later. More precisely, most of it probably doesn't connect to anything, but what does connect to stuff usually isn't found to do so until much later than it is invented/discovered.

Some ungrounded concepts can produce your own behavior, which in itself can be experienced, so it's difficult to draw the line just by requiring concepts to be grounded. You believe that you believe in something because you experience yourself acting in a way consistent with believing in it. Such a concept can define an intrinsic goal system, a point in mind design space, as you call it. So one can't abolish all such concepts, only resist acquiring them.

For any instrumental activity, done to achieve some other end, it makes sense to check that specific examples are in fact achieving the intended end.

Most beliefs may have as their end the refinement of personal decisions. For such beliefs it makes sense to check not only whether they affect your personal experience, but also whether they affect any decisions you might make; beliefs could affect experience without mattering for decisions.

On the other hand, some beliefs may have as their end affecting the experiences or decisions of other creatures, such as in the far future. And you may care about effects that are not experienced by any creatures.

0[anonymous]
Only if you have reason to believe your naive pattern matching of expectations to observation isn't already updating your expectations about instrumental activity. Otherwise, you're "privileging the hypothesis" that you are in fact wrong. It's kind of like smoothing in machine learning. It will have costs and benefits.

Eliezer, your post above strikes me, at least, as a restatement of verificationism: roughly, the view that the truth of a claim is the set of observations that it predicts. While this view enjoyed considerable popularity in the first part of the last century (and has notable antecedents going back into the early 18th century), it faces considerable conceptual hurdles, all of which have been extensively discussed in philosophical circles. One of the most prominent (and noteworthy in light of some of your other views) is the conflict between verificationism and scientific realism: that is, the presumption that science is more than mere data-predictive modeling, but the discovery of how the world really is. See also here and here.

1[anonymous]
Maybe I'm inferring from too little data, but I suspect that most readers at this site aren't too interested in scientific realism. Our favourite mantra ("the map is not the territory") acknowledges and then gracefully side-steps the issues that you're raising. (I just realized that Eliezer answers this below. Comment retracted. Is there some way for me to delete this?)

Rooney, as discussed in The Simple Truth I follow a correspondence theory of truth. I am also a Bayesian and a believer in Occam's Razor. If a belief has no empirical consequences then it could receive no Bayesian confirmation and could not rise to my subjective attention. In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.

9Perplexed
I, too, am nervous about having anticipated experience as the only criterion for truth and meaning. It seems to me that a statement can get its meaning either from the class of prior actions which make it true or from the class of future observations which its truth makes inevitable. We can't do quantum mechanics with kets, but no bras. We can't do Gentzen natural deduction with rules of elimination, but no rules of introduction. We can't do Bayesian updating with observations, but no priors. And I claim that you can't have a theory of meaning which deals only with consequences of statements being true but not with what actions put the universe into a state in which the statement becomes true. This position of mine comes from my interpretation of the dissertation of Noam Zeilberger of CMU (2005, I think). Zeilberger's main concern lies in Logic and Computer Science, but along the way he discusses theories of truth implicit in the work of Martin-Lof and Dummett.
0timtyler
That seems obviously correct. However, unless you pursue knowledge for its own sake, you should probably not be overly concerned with preserving past truths - unless they are going to impact on future decisions. Of course, the decisions of a future superintelligence might depend on all kinds of historical minutae that we don't regard as important. So maybe we should preserve those truths we regard as insignificant to us for it. However, today, probably relatively few are enslaved to future superintelligences - and even then, it isn't clear that this is what they would want us to do.
1Peter Pehlivanov
Perplexed, I'm not sure I understood what you meant by Or if I agree with it at all. Wouldn't statements about what actions make certain statements true simply be part of the first category? I don't see a problem with only having statements and their consequences. I see you've made this comment 12 years ago, so I don't know how you would stand on this today.
2mendel
An explicit belief that you would not allow yourself to hold under these conditions would be that the tree which falls in the forest makes a sound - because no one heard it, and because we can't sense it afterwards, whether it made sound or not had no empirical consequence. Every time I have seen this philosophical question posed on lesswrong, the two sophists that were arguing about it were in agreement that a sound would be produced (under the physical definition of the word), so I'd be really surprised if you could let go of that belief.
1Manfred
Hm, yeah. The trouble is how the doctrine handles deductive logic - for example, the belief that a falling tree makes vibrations in the air when the laws of physics say so is really a direct consequence of part of physics. The correct answer definitely appears to be that you can apply logic, and so the doctrine should be not to believe in something when there is no Bayesian evidence that differentiates it from some alternative.
0Ty-Guy9
While I fully agree with the principle of the article, something stuck out to me about your comment: What I noticed was that you were basically defining a universal prior for beliefs, as much more likely false than true. From what I've read about Bayesian analysis, a universal prior is nearly undefinable, so after thinking about it a while, I came up with this basic counterargument: You say that true beliefs are vastly outnumbered by false beliefs, but I say, how could you know of the existence of all these false beliefs, unless each one had a converse, a true belief opposing it that you first had some evidence for? For otherwise, you wouldn't know whether it was true or false. You may then say that most true beliefs don't just have a converse. They also have many related false beliefs opposing them. But I would say, those are merely the converses that spring from the connections of that true belief with its many related true beliefs. By this, I hope I've offered evidence that a fifty-fifty universal T/F prior is at least as likely as one considering most unconsidered ideas to be false. (And I would describe my further thoughts if I thought they would be useful here, but, silly me, I'm replying to a post from almost 8 years ago.)
1CBHacking
I don't think "converse" is the word you're looking for here - possibly "complement" or "negation" in the sense that (A || ~A) is true for all A - but I get what you're saying. Converse might even be the right word for that; vocabulary is not my forte. If you take the statement "most beliefs are false" as given, then "the negation of most beliefs is true" is trivially true but adds no new information. You're treating positive and negative beliefs as though they're the same, and that's absolutely not true. In the words of this post, a positive belief provides enough information to anticipate an experience. A negative belief does not (assuming there are more than two possible beliefs). If you define "anything except that one specific experience" as "an experience", then you can define a negative belief as a belief, but at that point I think you're actually falling into exactly the trap expressed here. If you replace "belief" with "statement that is mutually incompatible with all other possible statements that provide the same amount of information about its category" (which is a possibly-too-narrow alternative; unpacking words is hard sometimes) then "true statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category are vastly outnumbered by false statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category" is something the I anticipate you would find true. You and Eliezer do not anticipate a different percentage of possible "statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category" being true. As for universal priors, the existence of many incompatible possible (positive) beliefs in one space (such that only one can be true) gives a strong prior that any given such belief is false. If I have only two possible beliefs and
2gjm
If you have an arbitrary proposition -- a random sequence of symbols constrained only by the grammar of whatever language you're using -- then perhaps it's about equally likely to be true or false, since for each proposition p there's a corresponding proposition not p of similar complexity. But the "beliefs" people are mostly interested in are things like these:

* There is exactly one god, who created the universe and watches over us; he likes forgiveness, incense-burning, and choral music, and hates murder, atheism and same-sex marriage.
* Two nearby large objects, whatever they are, will exert an attractive force on one another proportional to the mass of each and inversely proportional to the square of the distance between them.

and the negations of these are much less interesting because they say so much less:

* Either there is no god or there are multiple gods, or else there is one god but it either didn't create the universe or doesn't watch over us -- or else there is one god, who created the universe and watches over us, but its preferences are not exactly the ones stated above.
* If you have two nearby objects, whatever force there may be between them is not perfectly accurately described by saying it's proportional to their masses, inversely proportional to the square of the distance, and unaffected by exactly what they're made of.

So: yeah, sure, there are ways to pick a "random" belief and be pretty sure it's correct (just say "it isn't the case that" followed by something very specific) but if what you're picking are things like scientific theories or religious doctrines or political parties then I think it's reasonable to say that the great majority of possible beliefs are wrong, because the only beliefs we're actually interested in are the quite specific ones.

It's amazing how many forms of irrationality failure to see the map-territory distinction, and the resulting reification of categories (like 'sound') that exist in the mind, causes: stupid arguments, phlogiston, the Mind Projection Fallacy, correspondence bias, and probably also monotheism, substance dualism, the illusion of the self, the use of the correspondence theory of truth in moral questions... how many more?

I think you're being too hard on the English professor, though. I suspect literary labels do have something to do with the contents of a book, no matter how much nonsense might be attached to them. But I've never experienced a college English class; perhaps my innocent fantasies will be shaken then.

Michael V, you could say that mathematical propositions are really predictions about the behavior of physical systems like adding machines and mathematicians. I don't find that view very satisfying, because math seems to so fundamentally underly everything else - mathematical truths can't be changed by changing anything physical, for instance - but it's one way to make math compatible with anticipation.

9TsviBT
I think Eliezer's point was about the student. "Wulky Wilkinsen is a 'post-utopian'" could be meaningful, if you know what a post-utopian is and is not (I don't, and don't care). The student who learns just the statement, however, has formed a floating belief. We might even initially use propositional beliefs as indicators of meaningful beliefs about the world. But if we then discuss these highly compressed beliefs without referencing their meaning, we often feel like we are reasoning when really we have ceased to speak about the world. That is, grounded beliefs can become "floaty" and spawn further "floaty" beliefs.

In my sociology class, we talk about how "Man in his natural state has liberty because everyone is equal". "Natural state", "liberty", and "equal" could conceivably be linked to descriptions of social interaction or something. However, class after class we refrain from talking about specific behaviors. Concepts float away from their referents without much resistance - it's all the same to the student, who only needs to make a few unremarkable remarks to get his B+ for class participation.

Compare:

"Man in his natural state has liberty because everyone is equal"

"Man in his natural state is equal because everyone has liberty"

"When everyone has liberty and is equal, man is in his natural state"

These statements should express very different beliefs about the world, but to the student they sound equally clever coming out of the professor's mouth.

(Edit for minor grammar and formatting)

It's amazing how many forms of irrationality result from failure to see the map-territory distinction.

I agree with those who say it's okay to figure things out later. If my music professor says a certain composer favors the Aeolian mode, I may not be able to visualize that on the spot but who cares? I can remember that statement and think about it later. Likewise with phlogiston, I have a vague concept of what it is and someday the alchemists will discover more precisely what's going on there.

Too much cognitive effort would be spent if, every time I thought about linear algebra, I had to visualize the myriad concrete instances in which it will be applied. I bet thinking in abstractions results in way more economical use of thinking time and thinking-matter.

In what way is the belief that beliefs should be grounded not a free-floating belief itself?

1adamisom
One way of answering might be to say that there is no separate "belief" that beliefs should be grounded. But I'm not sure. All I know is that the question annoys me, but I can't quite put my finger on why. It reminds me of questions like (1) the accusation that you can't justify the use of logic logically, or (2) the accusation that tolerance is actually intolerant - because it's intolerant of intolerance. There might be a level distinction that needs to be made here, as in (2) - and maybe in (1), though I think that's different.
1Danfly
(1) has come out of my mouth on a few occasions, albeit not in those exact words. It's normally after a few beers and I feel like playing the extreme skeptic a la David Hume, just to annoy everyone. I think the best way around it is to resort to the empirical argument and say that, in our experience, it is always right: Essentially the same thing Yudkowsky does with PA arithmetic here. Trying to find an argument against it which is truly "rationalist" in the continental sense has been a dead end in my experience. (2) sort of depends on the pragmatics and what "tolerance" actually means to the persons involved in a given context. If you define tolerance as simply being tolerant of other viewpoints, then you can still be tolerant of the intolerant viewpoints. However, if you define it as freedom from bigotry, then that could indeed be called "intolerant" by the standards of the first definition. I hope I'm making sense here.
1MarkusRamikin
I anticipate expressing free-floating beliefs would get me negative karma on Less Wrong.

More seriously: I do not anticipate free-floating beliefs being useful in the same sense that maps of reality are useful. A map can turn out to be accurate or inaccurate, and insofar as it is accurate it can help me navigate and manipulate reality. My belief that "a proper belief should not be free-floating" prohibits free-floating beliefs from doing any of that.

Or one might as well see it as not a belief, but as a definition. There's BeliefType1, which is grounded in reality, and BeliefType2, which is not, and we happen to call BeliefType1 a "proper belief". (Of course we still do it for a reason, because we care about our sheep, or rather, we care about our beliefs being true and thus useful.)

Not sure which approach makes more sense.
0Klevador
The ability to anticipate experiences is one of our maximands because we have goals that are optimally achieved with this ability. To believe that beliefs should allow us to anticipate experiences is grounded in the desire to achieve our goals.

Mark: Believing that beliefs should be grounded anticipates that there is absolutely no change in anticipation if one were to change these free-floating ideas. Of course this doesn't really answer your question, because it just restates the definition of "free-floating beliefs" in different words. This belief actually follows from Eliezer's belief in Occam's Razor, which predicts that when faced with unexplained events, if one creates a set of theories explaining these events, any predictions made by the simple theories are more likely to actually happen tha...

Jan: Occam's razor is not so much a rule of science but an operating guideline for doing science. It could be reduced to "test simple theories first". In the past this has been very useful in keeping scientific effort productive, the 'belief' is that it will continue to be useful in this way.

This led to a fun read of the "Occam's razor" Wikipedia entry. Hickam's dictum in particular was a great find (generalized beyond medicine, it could be that explanations for unexplained events can be as complex as they damn well please). As a practical corrective, it seems to me that probability theory suggests that the best accessible explanation to us for unexplained events is in the set of simpler theories, but is probably not one of the absolute simplest.

Eliezer once wrote: "We can build up whole networks of beliefs that are connected only to each other - call these 'floating' beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens's ability to build more general and flexible belief networks.

"The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict - or better yet, prohibit."

I can't see how nearly all of the beliefs expressed in this post predict or prohibit any experience.

"Alchemists believed that phlogiston caused fire"

How is that different than our current belief that oxygen causes fire?

-1Jack
Uhhh... oxygen exists?
1DanielLC
And so does the absence of oxygen, or, as they called it, phlogiston.
9Nick_Tarleton
The absence of oxygen isn't much like a substance whose release is fire:

* it doesn't have any consistent physical or chemical properties;
* many things not containing oxygen fail to burn in air, and none burn in vacuum;
* on the other hand, things do burn under oxidizers other than oxygen;
* oxidized substances are very poorly modeled by mixtures of the original substance and oxygen;
* things burned in open air can either gain or lose weight;

etc.
1DanielLC
"it doesn't have any consistent physical or chemical properties;" And oxides do? Or are you referring to pure phlogiston? It's not that big a deal that you can't get pure phlogiston. It's nigh impossible to purify fluorine. I think that under our current understanding of physics, it's totally impossible to isolate a single quark. It moves because it's attracted to some things more than others. It's still attracted to everything more than itself. "many things not containing oxygen fail to burn in air" Hurts both theories equally. Presumably, it's strongly bonded to the phlogiston/it doesn't strongly bond to oxygen. "...and none burn in vacuum;" As I said, you can't get pure phlogiston. "on the other hand, things do burn under oxidizers other than oxygen;" Hurts both theories equally. The only way to solve it to my knowledge is that there are things that cause fire other than phlogiston/oxygen. "things burned in open air can either gain or lose weight;" Hurts both theories equally. Presumably, some of the matter escapes into the air sometimes. Everything you listed either is only a very minor problem or is exactly as bad for the idea of oxygen.
[-]Jack120

You're giving phlogiston qualities no one who held that theory gave it. If you want to call the absence of oxygen phlogiston, okay, but you aren't talking about the same phlogiston everyone else is talking about. Moreover, thinking about fire this way is clumsy and incompatible with the rest of our knowledge about physics and chemistry.

We already had a conception of matter when phlogiston was invented... and phlogiston was understood as a kind of matter. To say that phlogiston is really this other kind of thing, which isn't matter but a particular kind of absence of matter, is both unhelpful and a distortion of phlogiston theory. The whole point of the phlogiston theory was that they thought there was a kind of matter responsible for fire! But there isn't matter like that.

Now by defining phlogiston as the absence of oxygen you might be able to model combustion in a narrow set of circumstances-- but you couldn't fit that model with any of your other knowledge about physics and chemistry.

In short neither the original kind nor your kind of phlogiston exist.

1DanielLC
It was at one point theorized to have negative mass. If it's matter, and you make everything else weigh more, it works out the same. I fail to see why you think it can't fit with other knowledge of physics and chemistry. You can think of electricity as positively charged particles moving around with virtually zero loss of predictive power.
4Jack
For example, you can't use phlogiston in any model that also includes oxygen. Nor can you do any work at the molecular or sub-molecular level. Similarly, thinking of electricity in terms of positively charged particles would be incompatible with atomic theory.
-6DanielLC
4Sniffnoy
Because one of these allows you to make predictions, and the other doesn't. Saying "fire has a cause, and I'm going to call it 'phlogiston'!" doesn't tell you anything about fire, it's just a relabeling. Now, if you make enough observations, maybe you'll eventually conclude that "phlogiston is the absence of oxygen" (even though this isn't really correct), but at that point you can throw out the label "phlogiston". Contrariwise, if you say "oxidization causes fire", where "oxygen" is a previously known thing with known properties, then this allows you to actually make predictions about fire. E.g., the fact that a candle in a sufficiently small closed space will go out before it melts, but not necessarily if there's a plant in there too. One pays rent, the other doesn't.
3DanielLC
You can make exactly the same predictions with phlogiston. If you burn coal next to iron, it will refine it. You could predict this with oxygen (oxygen is moving from the iron to the coal) or with phlogiston (phlogiston is moving from the coal to the iron). It's like with electric charge. If you think of it as positive charge moving around, it has almost exactly the same predictive power as thinking of it as electrons moving around.
0[anonymous]
In this specific example and at that level of precision, yes; but only one of these models can be (easily) refined to make precise, correct quantitative predictions. Even at that qualitative level, though, they make different predictions about burning things in vacuum or in non-oxygen atmospheres.
3Sniffnoy
But you can only predict it if you already know that a gain of phlogiston refines iron; if you don't, you can only observe it afterward and write it down as a property of phlogiston. If you don't know anything about oxygen or phlogiston beforehand, then, sure, they're pretty much equally predictive, i.e., not very much. But if "oxygen" is not in fact just an arbitrary label as "phlogiston" is, but in fact something you're already working with in other ways, then they're not symmetric. Also as Nick Tarleton points out below there are other asymmetries, though those are not so much in the predictive power.
0DanielLC
"But you can only predict it if you already know that a gain of phlogiston refines iron" Same goes for oxygen.
-1DanielLC
Okay, I admit that that's not really a prediction, but until then, they couldn't even explain it. If you're going to do it like this, what's one thing oxygen predicted? By the way, I'm responding to the fact that I lost two karma points on that, not any actual post.
3Sniffnoy
That's what I just said.
3DanielLC
Sorry. Too used to defending my position to realize you're not attacking it.

Because one of these allows you to make predictions, and the other doesn't. Saying "fire has a cause, and I'm going to call it 'phlogiston'!" doesn't tell you anything about fire, it's just a relabeling.

The hypothesis went a little deeper than that. "Flammable things contain a substance, and its release is fire" lets you make many predictions — e.g., that things will burn in vacuum, or that things burned in open air will always lose mass (this is how it was falsified).

0Sniffnoy
Ah, true.
-1DanielLC
Always gain mass, once they realized it was negative mass. The idea that it doesn't always gain mass doesn't falsify phlogiston any more than it falsifies oxygen for the same reason. Also, people didn't find the change in weight particularly useful, so this wasn't that big a problem. Again, the vacuum thing isn't much of a problem either. It's not necessarily possible to purify phlogiston.
4bigjeff5
I'm not sure I follow; oxidation doesn't predict gaining or losing mass (on any scale like phlogiston would, that is), it predicts an interaction of materials forming a new composite substance. Oxidation doesn't prevent material from being lost or changed in other ways, which could cause an overall greater or lesser mass than the original object. What it does predict, however, is that the total mass of all molecules in the equation, once accounted for, will be the same. This is consistent with observation.

If phlogiston has a negative mass, then anything that can burn must gain mass. I don't see any way around it. The theory states that it is a release of negative material, and there is no way to account for it once released.

One thing you would expect to find with phlogiston is an object that was primarily made up of phlogiston, giving it a negative mass. Explosives, for example, clearly have so much phlogiston that it literally rips the object (and anything nearby) apart when released. You would therefore expect all explosives to be relatively light in spite of the original weight of their components.

You could test this with black powder: saltpeter, charcoal, and sulfur each release a certain amount of phlogiston when burned. Combine them and significantly more phlogiston is clearly released. You would therefore expect more phlogiston to have flowed into the material during the combination of the three components during the making of gunpowder. However, the weights actually stay quite the same. The observation doesn't bear out the prediction, so the prediction is clearly wrong.

If the prediction is wrong, the theory that made it is either wrong outright, or flawed in some way. Since the only prediction phlogiston can make is wrong, the theory is at the very least flawed in some crippling way, and needs to be completely re-worked. Its lack of ability to predict expectations is what killed it. You can predict what will happen when you add oxygen to a reac
2thomblake
Just because I haven't seen the link in this particular discussion, some more defense of phlogiston link

I loved this post, but I have to be a worthless pedant.

If you drop a ball off a 120-m tall building, you expect impact at t = sqrt(2H/g) ≈ 5 s. But that would be when the second hand is on the 1 numeral.
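As a quick check of the arithmetic (a sketch; the `fall_time` name and g = 9.81 m/s² are my own choices, and air resistance and the sound's travel time back up to the roof are ignored):

```python
import math

def fall_time(height_m: float, g: float = 9.81) -> float:
    """Free-fall time from rest, ignoring air resistance: t = sqrt(2H/g)."""
    return math.sqrt(2 * height_m / g)

print(fall_time(120.0))  # ~4.95 s, i.e. just before the fifth tick
```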

8Eliezer Yudkowsky
Heh. I got this right originally, then reread it just recently while working on the book, saw what I thought was an error (1 numeral? just one second? why?) and "fixed" it.
[-]Dpar40

What about knowledge for the sake of knowledge? For instance I don't anticipate that my belief that The Crusades took place will ever directly affect my sensory experiences in any way. Does that then mean that this belief is completely worthless and on the same level as the belief in ghosts, psychics, phlogiston, etc.?

Wouldn't taking your chain of reasoning to its logical conclusion require one to "evict" all beliefs in everything that one has not, and does not anticipate to, personally see, hear, smell, taste, or touch? After all, how much personal sensory experience do you have that confirms the existence of atoms, for example?

DP

6RobinZ
I think Eliezer's point is less strong than you think: for one thing, reading a history book is a sensory experience, and fewer history books would proclaim that The Crusades occurred in worlds where they had not than in worlds where they had.
0Dpar
I was going to write a more detailed reply, but then realized that any continued discussion will require us to debate what exactly the OP meant to say in his post, which is pointless since neither of us can read his mind. So let's just call it a day. DP
3Vladimir_Nesov
This is something of a fallacy of gray. Of course we can read his mind, through the power of human telepathy, by reading more on the same topic. We can't read minds perfectly, but perfect knowledge is never available anyway, and unless you can point out the specific uncertainty you have that decides the discussion, there is no sense in requiring more detail. You might want to stop the discussion for other reasons, but the reason you stated rings false.
2Dpar
First of all, calling speech "human telepathy" strikes me as a little pretentious, as well as inaccurate, since the word "telepathy" is generally accepted to have supernatural connotations. Speech is speech; no need to complicate the concept. Secondly, the article you linked seemed a little rambling and without a clear point. All I was able to take away from it is that the meaning of words is relative. If that's the case then I respond with "well, duh!"; if I missed a deeper point, please enlighten me. Finally, when you take it upon yourself to question another person's purely subjective reasoning, you're treading very close to completely indefensible territory. If I say that I wanted to stop the discussion because I believe that the author's intended meaning is ambiguous, it's a tall order to question that that is indeed what I believe. Unless you can come up with clear evidence of how my behavior contradicts my stated subjective opinion, you more or less have to take my word that that really is what I think. DP
3thomblake
You misunderstand. Vladimir Nesov was not claiming that you don't believe that the author's intended meaning is ambiguous. Rather, he was claiming that your belief that "the author's intended meaning is ambiguous" is false, or at least not enough to constitute a good reason for stopping the discussion. The point of calling speech 'human telepathy' in this instance is that you claimed there's no way to know what the author was thinking since we "can't read his mind". But there is a way to know what the author was thinking to some extent, so by reading your own reasoning backwards we therefore indeed can read minds.
-1Dpar
I stated that taking the OP's reasoning to its logical conclusion requires one to "evict" all beliefs in everything that one has not, and does not anticipate to, personally see, hear, smell, taste, or touch. RobinZ responded by saying that the OP's point is less strong than I think. Since two (presumably) reasonable people can disagree on what the OP meant, his point, as it is written, is by definition ambiguous. Where do we go from here other than debate what he really meant? What is the point of such debate since neither of us has any special insight into his thought process that would allow us to settle this difference of subjective interpretations? I believe that to be sufficient reason for stopping the discussion. I'm not sure what specifically Vladimir takes issue with here. As to your point of human telepathy -- comparing reading what someone wrote to reading his mind is a very big stretch. I can see how you could make that argument if you get really technical with word definitions, but I think that it is generally accepted that reading what a person wrote on a computer screen and reading his mind are two very different things. DP
5thomblake
Right, but RobinZ was not arguing against this claim (depending on what you mean by 'personally' here) but rather pointing out that your reasoning was flawed. RobinZ pointed out that your belief that the crusades took place affects your sensory experience; if you believe they happened, then you should anticipate having the sensory experience of seeing them in the appropriate place in a history book, if you were to check. If you thought that your belief that the crusades happened did not imply any such anticipated experiences, then yes, it would be worthless and on the same level as belief in an invisible dragon in your garage.
0Dpar
So reading about something in a book is a sensory experience now? I beg to differ. A sensory experience of The Crusades would be witnessing them first hand. The sensory experience of reading about them is perceiving patterns of ink on a piece of paper. DP Edit: Also, I think that RobinZ didn't state that as something that she believed, she stated it as something that she believed the OP meant. It's that subjective interpretation of his position that I didn't want to debate. If you wish to adopt that position as your own and debate its substance, we certainly can.
4Oligopsony
What's important isn't the number of degrees of removal, but that the belief's being true corresponds to different expected sensory experiences of any kind at all than its being false. The sensory experience of perceiving patterns of ink on a piece of paper counts. Now you could say: "reading about the Crusades in history books is strong evidence that 'the Crusades happened' is the current academic consensus," and you could hypothesize that the academic consensus was wrong. This further hypothesis would lead to further expected sensory data - for instance, examining the documents cited by historians and finding that they must have been forgeries, or whatever.
-5Dpar
0Vladimir_Nesov
You are disputing definitions. Reading something in a book is the sort of thing you'd change expectation about depending on your model of the world, as are any other observations. If your beliefs influence your expectation about observations, they are part of your model of reality. On the other hand, if they don't, they may still be part of your model of reality, but that's a more subtle point. And returning to your earlier concerns, consider me as having special insight into the intended meaning, and providing a counterexample to the impossibility of continuing the discussion. Reading something in a history book definitely counts as anticipated experience.
1Dpar
Very interesting read on disputing definitions. While the solution proposed there is very clever and elegant, this particular discussion is complicated by the fact that we're discussing the statements of a person who is not currently participating. Coming up with alternate words to describe our ideas of what "sensory experience" means does nothing to help us understand what he meant by it. Incidentally this is why I didn't want to get drawn into this debate to begin with. Also -- "consider me having a special insight into the intended meaning" -- on what grounds shall I consider your having such special insight?
2Cyan
At the bottom of the sidebar, you will find a list of top contributors; Vladimir Nesov is on the list.
2Vladimir_Nesov
I've closely followed Yudkowsky's work for a while, and have a pretty good model of what he believes on topics he publicly discusses.
0Dpar
Fair enough. So if, on your authority, the OP believes that reading about something is anticipated experience, does that not then cover every rumor, fairy tale, and flat-out nonsense that has ever been written? What then would be an example of a belief that CANNOT be connected to an "anticipated experience"?
3Vladimir_Nesov
See this comment on the first part of your question and this page on the second (but, again, there are valid beliefs that don't translate into anticipated experience).
1Dpar
I agree wholeheartedly that there are valid beliefs that don't translate into anticipated experience. As a matter of fact what's written there was pretty much the exact point that I was trying to make with my very first response in this topic. Does that not, however, contradict the OP's assertion that "Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it."? That's what I took issue with to begin with.
2Vladimir_Nesov
It does contradict that assertion, but not at first approximation, and not in the sense you took the issue with. You have to be very careful if a belief doesn't translate into anticipated experience. Beliefs about historical facts that don't translate into anticipated experience (or don't follow from past experience, that is observations) are usually invalid.
0Dpar
You seem to place a good deal of value on the concept of anticipated experience, but you give it a definition that's so broad that the overwhelming majority of beliefs will meet the criteria. If the belief in ghosts for instance can lead to the anticipated experience of reading about them in a book, what validity does the notion have as a means of evaluating beliefs?
2Vladimir_Nesov
When a belief (hypothesis) is about reality, it responds to new evidence, or arguments about previously known evidence. It's reasonable to expect that as a result, some beliefs will turn out incorrect, and some certainly correct. Either way it's not a problem: you do learn things about the world as a result, whatever the conclusion. You learn that there are no ghosts, but there are rainbows. The problem is the beliefs that purport to be speaking about reality, but really don't, and so you become deceived by them. Not being connected to reality through anticipated experience, they take your attention where there is no use for them, influence your decisions for no good reason, and protect themselves by ignoring any knowledge about the world you obtain. It is a great heuristic to treat any beliefs that don't translate into anticipated experience with utmost suspicion, or even to run away from them in horror.
0Dpar
How would you learn that there are no ghosts? You form the belief "there are ghosts" which leads to the anticipated experience (by your definition of such) that "I will read about ghosts in a book", you go and read about ghosts in a book. Criteria met, belief validated. Same goes for UFOs, psychics, astrology etc. What value does the concept of anticipated experience have if it fails to filter out even the most common fallacious beliefs?
4Vladimir_Nesov
That there are books about ghosts is evidence for ghosts existing (but also for lots of other things). There are also arguments against this hypothesis, both a priori and observational. A good model/theory also explains why you'd read about ghosts even though there is no such thing.
0Dpar
You're not addressing my core point though. If the criterion of anticipated experience as you define it is as likely to be satisfied by fallacious beliefs as by valid ones, what purpose does it serve?
2Vladimir_Nesov
I addressed that question in this comment; if something is unclear, ask away. The difference is between a belief that is incorrect, and a belief that is not even wrong.
2Dpar
Alright, I think I see what you're getting at, but I still can't help but think that your definition of sensory experience is too broad to be really useful. I mean, the only type of belief that it seems to filter out is absolute nonsense like "I have a third leg that I can never see or feel". Did I get that about right?
2Vladimir_Nesov
Yes. It happens all the time. It's one way nonsense protects itself, to persist for a long time in minds of individual people and cultures. (More generally, see anti-epistemology.)
2Dpar
So essentially what you and Eliezer are referring to as "anticipated experience" is just basic falsifiability then?
6Vladimir_Nesov
With a bayesian twist: things don't actually get falsified, don't become wrong with absolute certainty, rather observations can adjust your level of belief.
3SilasBarta
Slightly OT, but this relates to something that really bugs me. People often bring up the importance of statistical analysis and the possibility of flukes/lab error, in order to prove that, "Popper was totally wrong, we get to completely ignore him and this out-dated, long-refuted notion of falsifiability." But the way I see it, this doesn't refute Popper, or the notion of falsifiability: it just means we've generalized the notion to probabilistic cases, instead of just the binary categorization of "unfalsified" vs. "falsified". This seems like an extension of Popper/falsifiability rather than a refutation of it. Go fig.
4Vladimir_Nesov
I reached much clearer understanding once I've peeled away the structure of probability measure and got down to mathematically crisp events on sample spaces (classes of possible worlds). From this perspective, there are falsifiable concepts, but they usually don't constitute useful statements, so we work with the ones that can't be completely falsified, even though parts of them (some of the possible worlds included in them) do get falsified all the time, when you observe something.
0[anonymous]
Isn't that like saying we've generalized the theory that "all is fire" to cases where the universe is only part fire? If falsification is absolute then Popper's insight that "all is falsification" is just plain wrong; if falsification is probabilistic then surely the relevant ideas existed before Popper as probability theory. It's not like Popper invented the notion that if a hypothesis is falsified we shouldn't believe it.
7Dpar
Ok, I understand what you mean now. Now that you've clarified what Eliezer meant by anticipated experience my original objection to it is no longer applicable. Thank you for an interesting and thought provoking discussion.
1jimrandomh
Falsifiability can be quantified, in bits. If the only test you have for whether something's true or not is something lame like whether it appears in stories or not, then you have a tiny amount of falsifiability. If there is a large supply of experiments you can do, each of which provides good evidence, then it has lots of falsifiability. (This really deserves to be formalized, in terms of something along the lines of expected bits of net evidence, but I'm not sure how to do so, exactly. Expected bits of evidence does not work, because of scenarios where there is a small chance of lots of evidence being available, but a large chance of no evidence being available.)
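The naive "expected bits of evidence" mentioned above (which, as noted, is imperfect) can be sketched as the expected log-likelihood ratio of an experiment's outcomes, i.e. a base-2 KL divergence. The outcome distributions below are illustrative assumptions, not from the thread:

```python
import math

def expected_bits(p_h, p_not_h):
    """Expected bits of evidence for H when H is true:
    KL(P(outcome|H) || P(outcome|not-H)) in base 2."""
    return sum(p * math.log2(p / q) for p, q in zip(p_h, p_not_h) if p > 0)

sharp = expected_bits([0.9, 0.1], [0.1, 0.9])       # a discriminating experiment
lame = expected_bits([0.55, 0.45], [0.45, 0.55])    # a lame "appears in stories" test
print(sharp, lame)  # the sharp experiment yields far more expected evidence
```

This also shows the stated worry: a test with a small chance of decisive evidence and a large chance of none can still have a middling expectation, which the plain expected value doesn't distinguish from a uniformly mediocre test.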
4SilasBarta
Just a note about terminology: "expected bits of evidence" also goes by the name of entropy, and is a good thing to maximize in designing an experiment. (My previous comment on the issue.) And if I understand you correctly, you're saying that the problem with entropy as a measure of falsifiability is that someone can come up with a crank theory that gives the same predictions in every single case, except one that is near impossible to observe, but which, if it happened, would completely vindicate them? If so, the problem with such theories is that they have to provide a lot of bits to specify that improbable event, which would be penalized under the MML formalism because it lengthens the hypothesis significantly. That may be what you want to work into a measure of falsifiability. But then, at that point, I'm not sure if you're measuring falsifiability per se, or just general "epistemic goodness". It's okay to have those characteristics you want as a separate desideratum from falsifiability.
2Dpar
Isn't it an essential criterion of falsifiability to be able to design an experiment that can DEFINITIVELY prove the theory false?
7RobinZ
That is the criterion which the Bayesian idea of evidence lets you relax. Instead of saying that "you need to be able to define experiments where at least one result would be completely impossible by the theory", a Bayesian will tell you that "you need to be able to define experiments where the probability of one result under the theory is significantly different from the probability of another result". Look at, say, the theory that a coin is weighted towards heads. If you want to be pedantic, no result can "definitely prove" that it is not (unusual events can happen), but an even split of heads and tails (or a weighting towards tails) is much more unusual given that theory than a weighting towards heads. Edit PS: I am totally stealing the meme that "Bayes is a generalization of Popper" from SilasBarta.
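The weighted-coin example above can be made concrete with a likelihood comparison (a sketch; the 0.8 weighting, the 10-flip count, and the `binomial_pmf` name are illustrative assumptions):

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k heads in n flips with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

fair = binomial_pmf(5, 10, 0.5)      # even split under a fair coin
weighted = binomial_pmf(5, 10, 0.8)  # even split under a heads-weighted coin
print(fair / weighted)  # the even split favors the fair coin roughly 9 to 1
```

Neither result definitively proves anything, but the likelihood ratio is exactly the graded evidence the Bayesian relaxation supplies in place of a binary falsification.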
1SilasBarta
Steal the meme, and spread it as far and as wide as you possibly can! The sooner it beats out "Popper is so 70 years ago", the better. (Kind of ironic that Bayes long predated Popper, though the formalization of [what we now call] Bayesian inference did not.) Example of my academically-respected arch-nemesis arguing the exact anti-falsificationist view I was criticizing.
3thomblake
I'm pretty sure that was handily discussed in An Intuitive Explanation of Bayes's Theorem and A Technical Explanation of Technical Explanation.
0RobinZ
Ehhcks-cellent!
7SilasBarta
Fair point, and it was EY's essay that showed me the connection. But keep in mind, the point of the essay is, "Bayesian inference is right, look how Popper is a crippled version of it." My point in saying "my" meme is different: "Popper and falsificationism are on the right track -- don't shy away from the concepts entirely just because they're not sufficiently general." It's a warning against taking the failures of Popper to mean that any version of falsificationism is severely flawed.
5JoshuaZ
As Robin's explained below, Bayesianism doesn't do that. You should also see the works of Lakatos and Quine, where they discuss the idea that falsification is flawed because all claims have auxiliary hypotheses, and one can't falsify any hypothesis in isolation even if you are trying to construct a neo-Popperian framework.
4SilasBarta
Yes, but that still doesn't show falsificationism to be wrong, as opposed to "narrow" or "insufficiently generalized". Lakatos and Quine have also failed to show how it's a problem that you can't rigidly falsify a hypothesis in isolation: just as you can generalize Popper's binary "falsified vs. unfalsified" to probabilistic cases, you can construct a Bayes net that shows how your various beliefs (including the auxiliary hypotheses) imply particular observations. The relative likelihoods they place on the observations allow you to know the relative amount by which those various beliefs are attenuated or amplified by any particular observation. This method gives you the functional equivalent of testing hypotheses in isolation, since some of them will be attenuated the most.
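The "attenuated the most" point above can be illustrated with a two-node toy model. The numbers and the instrument-as-auxiliary framing are mine, chosen only for illustration: a main hypothesis H at prior 0.5, a well-trusted auxiliary A ("the instrument works") at prior 0.95, and a prediction that succeeds with probability 0.9 only if both hold:

```python
from itertools import product

p_h, p_a = 0.5, 0.95               # independent priors for H and auxiliary A

def p_obs(h, a):
    """P(predicted observation | H, A): the prediction needs both to hold."""
    return 0.9 if (h and a) else 0.1

def posterior(target):
    """P(target hypothesis | the predicted observation FAILED)."""
    num = den = 0.0
    for h, a in product([True, False], repeat=2):
        joint = (p_h if h else 1 - p_h) * (p_a if a else 1 - p_a)
        joint *= 1 - p_obs(h, a)   # likelihood of the failed prediction
        den += joint
        if (h if target == "H" else a):
            num += joint
    return num / den

print(posterior("H"))  # the main hypothesis absorbs most of the blow
print(posterior("A"))  # the trusted auxiliary is barely attenuated
```

Even though the failed prediction logically impugns the conjunction of H and A, the update lands almost entirely on the less-trusted belief, which is the functional equivalent of testing H in isolation.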
2JoshuaZ
Right, I was speaking in a non-Bayesian context.
1satt
If I remember rightly, that's where poor old Popper came unstuck: having thought of the falsifiability criterion, he couldn't work out how to rigorously make it flexible. And as no experiment's exactly 100% uppercase-D Definitive, that led to some philosophers piling on the idea of falsifiability, as JoshuaZ said. But more recent work in philosophy of science suggests a more sophisticated way to talk about how falsifiability can work in the real world. The key idea is "severe testing", where a "severe test" is a test likely to expose a specific error in a model, if such an error is present. Those models that pass more, and more severe, tests can be regarded as more useful than those that don't. This approach also disarms the "auxiliary hypotheses" objection JoshuaZ paraphrased; one can just submit those hypotheses to severe testing too. (I wouldn't be surprised to find out that's roughly equivalent to the Bayes net approach SilasBarta mentioned.)
2anon895
I was expecting the link to be Mundane Magic.
0Vladimir_Nesov
The point is not that the ability is "magical", but that it's real: we do have an ability to read minds, in exactly the sense whose impossibility Dpar appealed to.
3RobinZ
Belatedly: Welcome to Less Wrong! Please feel free to introduce yourself.
0Dpar
A belated thanks! :) DP
2MarsColony_in10years
The LessWrong FAQ says that there is value in replying to old content, so I'm commenting in hopes that it is useful to someone in the future, and just for the sake of organizing my thoughts.

I would have phrased this differently than Yudkowsky, but I think I understand the concept he was getting at when he gave this example: his point is that this is just semantics. It makes no difference to the world whether we label something "post-utopian" or "aegffsdfa eereraksrfa" or anything else. The words you read in the book will be the same. The reason I don’t like this example is that, if I actually knew some literary jargon, I might get some real verifiable information that does actually mean I should expect a specific kind of sensory experience. It’s just that the classification scheme is arbitrary, and so is my belief that one classification scheme is "correct". The label is just a label, so arguing about classification schemes is just semantics.

Using this definition, your belief that the crusades took place would affect what sorts of things you would expect to read, and what sorts of archeological finds you would expect to find if you went looking for them. However, if you believe that the crusades marked the beginning of the high middle ages, that would just be semantics. We could say that the middle ages started at the sacking of Rome, or we could make a label like "dark ages" to describe the intermediary period. What we call it and how we classify it makes no difference in the actual reality of history. It's just semantics.
4TheAncientGeek
Semantic labels are part of the structure of an explicit model. For instance, the Chinese use the same word for both "rat" and "mouse". A model with a ratmouse vertex will behave differently from a model with separate rat and mouse vertices. The structure and function of a model affect what it predicts, what its users can notice, and how they behave. Agents do not passively receive a stream of predetermined experiences; they interact with the world, and the experiences they can expect depend on the structure and function of their models... ...and more besides. Models contain evaluative weightings as well as neutral structure. For instance, in the English-speaking world, mice have the connotation of being cute, rats of being vermin. The professor might not be failing to specify an empirically confirmable concept when describing the writer as a post-utopian: she might rather be succeeding in tweaking her students' evaluative model. She might be aiming at making a social or political point. There is a long history of the political influence of language, ranging from Greek rhetoricians to Orwell's essays. A STEM type might consider it pointless to focus on such issues, rather than on what can be proved objectively. A humanities type might also consider it pointless to focus on objective, empirical claims with no social or political upshot. Neither complaint is really about meaningfulness or semantics, in the sense of the meaningfulness of the words; rather, they are both about the subjectively evaluated pointfulness of an activity. By a convoluted meta-level irony, the way the term "semantics" is often used is itself a way of funneling the reader towards a conclusion. We have seen that there are circumstances where a semantic change would make a difference: where it makes a structural/functional change, and where it makes an evaluative/connotational difference. Since these circumstances don't always apply, there are circumstances where a semantic change really is tri
0MarsColony_in10years
Thanks for broaching that topic. I considered pointing out that my "aegffsdfa eereraksrfa" example might be more difficult to pronounce than "post-utopian", and so actually would have an impact on the world in general. On reflection, I decided to make the assertion that it "makes no difference", since that would spare a lot of confusion. It's a good first order approximation. When introducing a topic, it's important to take the Bohr model view of the world before trying to explain quarks and leptons. The entanglement of semantic language with our interpretation of reality clouds things. Scientific language is precise, but often dry and hard to understand. However, by de-coupling the two worlds, we study the underlying reality without those (or perhaps with only minimal) distorting effects from our language. That's what we are doing when we talk about Map and Territory here on LW. We get a better map from this, but if we also compare the collective maps of societies to the best maps of reality, we can look for systematic differences. Some of these are cognitive biases, which we tend to concentrate on here on LW. However, there are also many other interesting or useful things that we can learn about ourselves as mapmakers. For example, the Bouba/kiki effect might help us choose more intuitive vocabulary as we build a more and more extensive set of jargon. Just studying the way languages evolve can be informative, whether it's rigorously using Computational Linguistics or informally by an author or artist. The mere existence of a formal scientific understanding of reality allows a poet or philosopher, if they are familiar only with the answers but not the underlying explanations, to look at some facet of human nature and ask "isn't it odd when people...". A great deal of social commentary is built from that one question.

You write, “suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a ‘post-utopian’. What does this mean you should expect from his books? Nothing.”

I’m sympathetic to your general argument in this article, but this particular jibe is overstating your case.

There may be nothing particularly profound in the idea of ‘post-utopianism’, but it’s not meaningless. Let me see if I can persuade you.

Utopianism is the belief that an ideal society (or at least one that's much better than ours) can be constructed, for example by the application of a particular political ideology. It’s an idea that has been considered and criticized here on LessWrong. Utopian fiction explores this belief, often by portraying such an ideal society, or the process that leads to one. In utopian fiction one expects to see characters who are perfectible, conflicts resolved successfully or peacefully, and some kind of argument in favour of utopianism. Post-utopian fiction is written in reaction to this, from a skeptical or critical viewpoint about the perfectibility of people and the possibility of improving society. One expects to see irretrievably flawed characters, i... (read more)

Would you consider Le Guin's The Dispossessed to be post-utopian? I think she intends her Anarres to be a good place on the whole, and a decent partial attempt at achieving a utopia, but still to have plausible problems.

3tog
Not to go off on a tangent, but I'd say it's more utopian than critical of utopia - I don't think we can require utopias to be perfect to deserve the name, and Anarres is pretty (perhaps unrealistically) good, with radical (though not complete) changes in human nature for the better.
2Jack
Brave New World is definitely dystopian, not post-utopian. Nancy's suggestion for post-utopian is exactly right. I definitely agree that we can meaningfully classify cultural production, though.

I think it's both. "Brave New World" portrays a dystopia (Huxley called it a "negative utopia") but it's also post-utopian because it displays skepticism towards utopian ideals (Huxley wrote it in reaction to H. G. Wells' "Men Like Gods").

I don't claim any expertise on this subject: in fact, I hadn't heard of post-utopianism at all until I read the word in this article. It just seemed to me to be overstating the case to claim that a term like this is meaningless. Vague, certainly. Not very profound, yes. But meaningless, no.

The meaning is easily deducible: in the history of ideas "post-" is often used to mean "after; in consequence of; in reaction to" (and "utopian" is straightforward). I checked my understanding by searching Google Scholar and Books: there seems to be only one book on the subject (The post-utopian imagination: American culture in the long 1950s by M. Keith Booker) but from reading the preview it seems to be using the word in the way that I described above.

The fact that the literature on the subject is small makes post-utopianism an easier target for this kind of attack: few people are likely to be familiar with the idea, or motivated to defend it, and it's harder to establish what the consensus on the subject is. By contrast, imagine trying to claim that "hard science fiction" was a meaningless term.

Indeed. Some rationalists have a fondness for using straw postmodernists to illustrate irrationality. (Note that Alan Sokal deliberately chose a very poor journal, not even peer-reviewed, to send his fake paper to.) It's really not all incomprehensible Frenchmen. While there may be a small number of postmodernists who literally do not believe objective reality exists, and some more who try to deconstruct actual science and not just the scientists doing it, it remains the case that the human cultural realm is inherently squishy and much more relative than people commonly assume, and postmodernism is a useful critical technique to get through the layers of obfuscation motivating many human cultural activities. Any writer of fiction who is any good, for instance, needs to know postmodernist techniques, whether they call them that or not.

6TheOtherDave
Yes. That said, it's not too surprising that postmodernists are often the straw opponent of choice. The idea that the categories we experience as "in the world" are actually in our heads is something postmodernists share with cognitive scientists; many of the topics discussed here (especially those explicitly concerned with cognitive bias) are part of that same enterprise. I suspect this leads to a kind of uncanny valley effect, where something similar-but-different creates more revulsion than something genuinely opposed would. Of course, knowing that does not make me any less frustrated with the sort of soi-disant postmodernist for whom category deconstruction is just a verbal formula, rather than the end result of actual thought. I also weakly suspect that postmodernists get a particularly bad rap simply because of the oxymoronic name.
4David_Gerard
Oh yeah. While it's far from a worthless field, and straw postmodernists are a sign of lazy thinking, it is also the case that postmodernism contains staggering quantities of complete BS. Thankfully, these are also susceptible to postmodernist analysis, if not by those who wish to keep their status ...
-1BarbaraB
I played a mental game trying to make predictions based on the information that Wulky Wilkinsen is post-utopian and shows colonial alienation - never heard of any of that before :-). Wulky Wilkinsen is post-utopian ... I expect to find a bunch of critically acclaimed authors, who wrote their most famous books before Wulky wrote his most famous books (5 - 15 years ahead?), lived in the same general area as Wulky, and portrayed people who were more altruistic and prone to serve the general good than we normally see in real life. It does not say too much about the actual writing style of Wulky - he could have written either in a similar way to "the bunch" (utopians), or just the opposite - he could have been simply fed up with the utopians' style and portrayed people more evil than we normally see in everyday life. So my prediction does not tell what Wulky's books feel like, but it is still a prediction, right? Colonial alienation - the book contains characters that have lived in a colony (e.g. India) for a long time (although they might have just arrived in the "maternal" colonial country, e.g. Britain). These characters are confronted with other characters that have lived in the "maternal" colonial country for a long time (although they might have just arrived in the colony :-) ). There are conflicts between these two groups of people, based on their background. They have different preferences when they are making decisions, probably involving other people. Thus they are alienated. Do not tell me this was not the point of Eliezer's post, let me just have some fun!
-10Leafy

How is this not just a simple argument about semantics (on which I believe a vast majority of arguments are based)?

They both accept that the tree causes vibrations in the air as it falls, and they both accept that no human ear will ever hear it. The argument appears to be based solely on the definition, and surrounding implications, of the word "sound" (or "noise" as it becomes in the article) - and is therefore no argument at all.

3bigjeff5
I think that may have been the point: You can define a thing based on any criteria you like. It simply has to allow your expectations to agree with reality in order for it to be true. One says "it is sound because it vibrates regardless of whether anyone hears it." This person believes that sound is the vibrations. The other says "it is not sound because it is never processed in a mind." This person does not deny that the vibrations exist, he simply believes it isn't sound until someone hears it. These two have different definitions of "sound", but within their definitions both allow expectations that are completely consistent with reality. The point is to make sure your beliefs "pay rent" - that they allow you to have expectations that match up with reality. If the second person had the same belief of what sound was as the first (i.e. vibrations in the air), yet also believed that vibrations in the air do not occur when there is nobody to hear them, that belief would not pay rent. When they recorded the sound with nobody around he would expect there to be nothing at all on the tape, yet there would be something on the tape. The only way to resolve this is to adjust your belief after the fact, which means your belief couldn't pay its rent.
2Rain
This video has sound problems which immediately turned me off wanting to try and parse what he's saying. I suggest using a microphone and properly syncing the sound if they intend to do many more of these.

"Or suppose your postmodern English professor teaches you that the famous Wulky Wilkinsen is actually a "post-utopian". What does this mean you should expect from his book? Nothing."

When I first read this I thought, "Huh? Surely it tells you something, because I already have beliefs about what 'utopian' probably means, and what the 'post' part of it probably means, and what context these types of terms are usually used in... That sounds like a whole bag of reasons to expect certain things/themes/ideas in his book!"

But I think ... (read more)

3ata
Free-floating beliefs have to at least feel like beliefs. You can't even think you have a belief about whether Wulky Wilkinsen is a barnbeanbaggle unless you think you have some idea of what "barnbeanbaggle" is being used to mean. The thing about using a made-up word is that it's too easy to notice that you don't know what to anticipate from it. The thing about "post-utopian" is that, even if you have some idea of what "post-utopian" is supposed to mean, being told (by someone you perceive as sufficiently authoritative) that a certain author is "post-utopian" is quite likely to just make you selectively interpret that author's works to fit that schema. Similar to how you can make professional wine tasters describe a white wine the way they usually describe red wines by dyeing it red.
1alexvermeer
The made-up word being too easy to notice is a good point. 1. "I believe Wulky is a post-utopian." 2. "The professor says Wulky is a post-utopian, and I expect to figure out what the term means and confirm or disconfirm this claim by reading his book." When I first read this post I thought (2), and if I understand it right, the post is attacking (1). I may be getting too tied-up with the labels being used...
0Will_Sawin
You originally misunderstood Eliezer's point, and now understand it. If many people will similarly misunderstand it, that is a reason for Eliezer to change it on lesswrong or if/when it appears in his book. If you are relatively unusual, it is only a weak reason. Reasons not to change it would be a lack of viable alternatives. Can we think of an alternative better than "post-utopian" or "barnbeanbaggle"? For example, a less meaningful term from literary theory or another field?
1BarbaraB
My boyfriend just suggested "metaspontaneity" !
0BarbaraB
The Mighty Handful ?

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

If some average Joe believes he’s smart and beautiful, and that gives him utility, is that necessarily a bad thing? Joe approaches a girl in a bar, dips his sweaty fingers in her iced drink, cracks a piece of ice in his teeth, pulls it out of his mouth, shoves it in her face for demonstration, and says, “Now that I’ve broken the ice—”

She thinks: “What a butt-ugly idiot!” and gets the hell away from him.

Joe goes on happily believing that he’s smart and beautifu... (read more)

6Spurlock
It's sort of taken for granted here that it is in general better to have correct beliefs (though there have been some discussions as to why this is the case). It may be that there are specific (perhaps contrived) situations where this is not the case, but in general, so far as we can tell, having the map that matches the territory is a big win in the utility department. In Joe's case, it may be that he is happier thinking he's beautiful than he is thinking he is ugly. And it may be that, for you, correct beliefs are not themselves terminal values (ends in themselves). But in both cases, having correct beliefs can still produce utility. Joe for example might make a better effort to improve his appearance, might be more likely to approach girls who are in his league and at his intellectual level, thereby actually finding some sort of romantic fulfillment instead of just scaring away disinterested ladies. He might also not put all his eggs in the "underwear model" and "astrophysicist" baskets career-wise. You can further twist the example to remove these advantages, but then we're just getting further and further from reality. Overall, the consensus seems to be that wrong beliefs can often be locally optimal (meaning that giving them up might result in a temporary utility loss, or that you can lose utility by not shifting them far enough towards truth), but a maximally rational outlook will pay off in the long run.
4Manfred
The trouble is that this rationale leads directly to wireheading at the first chance you get - choosing to become a brain in a vat with your reward centers constantly stimulated. Many people don't want that, so those people should make their beliefs only a means to an end. However, there are some people who would be fine with wireheading themselves, and those people will be totally unswayed by this sort of argument. If Joe is one of them... yeah, sure, a sufficiently pleasant belief is better than facing reality. In this particular case, I might still recommend that Joe face the facts, since admitting that you have a problem is the first step. If he shapes up enough, he might even get married and live happily ever after.
1TheOtherDave
Well, he might. Or, rather, there might be available ways of becoming smarter or prettier for which jettisoning his false beliefs is a necessary precondition. But, admittedly, he might not. Anyway, sure, if Joe "terminally" values his beliefs about the world, then he gets just as much utility out of operating within a VR simulation of his beliefs as out of operating in the world. Or more, if his beliefs turn out to be inconsistent with the world. That said, I don't actually know anyone for whom this is true.
0MoreOn
I don't know too many theist janitors, either. Doesn't mean they don't exist. From my perspective, it sucks to be them. But once you're them, all you can do is minimize your misery by finding some local utility maximum and staying there.
9jimrandomh
They can. They just do so very rarely, and since accepting some inaccurate beliefs makes it harder to determine which beliefs are and aren't beneficial, in practice we get the highest utility from favoring accuracy. It's very hard to keep the negative effects of a false belief contained; they tend to have subtle downsides. In the example you gave, Joe's belief that he's already smart and beautiful might be stopping him from pursuing self-improvements. But there definitely are cases where accurate beliefs are definitely detrimental; Nick Bostrom's Information Hazards has a partial taxonomy of them.
0HonoreDB
I don't think it's possible for a reflectively consistent decision-maker to gain utility from self-deception, at least if you're using an updateless decision theory. Hiding an unpleasant fact F from yourself is equivalent to deciding never to know whether F is true or false, which means fixing your belief in F at your prior probability for it. But a consistent decision-maker who loses 10 utilons from believing F with probability ~1 must lose p*10 utilons for believing F with probability p.
5jimrandomh
No, this is not true. Many of the reasons why true beliefs can be bad for you are because information about your beliefs can leak out to other agents in ways other than through your actions, and there is no particular reason for this effect to be linear. For example, blocking communications from a potential blackmailer is good because knowing with probability 1.0 that you're being blackmailed is more than 5 times worse than knowing with probability 0.2 that you will be blackmailed in the future if you don't.
0HonoreDB
Oh, sure. By "gain utility" I meant "gain utility directly," as in the average Joe story.
2jimrandomh
I don't think it's linear in the average Joe story, either; if there's one threshold level of belief which changes his behavior, then utility is constant for levels of belief on either side of that threshold and discontinuous in between.
1HonoreDB
A rational agent can have its behavior depend on a threshold crossing of belief, but if there's some belief that grants it utility in itself (e.g. Joe likes to believe he is attractive), the utility it gains from that belief has to be linear with the level of belief. Otherwise, Joe can get dutch-booked by a Monte Carlo plastic surgeon.
2jimrandomh
This doesn't sound right. Could you describe the Dutch-booking procedure explicitly? Assume that believing P with probability p gives me utility U(p)=p^2+C.
0HonoreDB
An additive constant seems meaningless here: if Joe gets C utilons no matter what p is, then those utilons are unrelated to p or to P--Joe's behavior should be identical if U(p)=p^2, so for simplicity I'll ignore the C. Now, suppose Joe currently believes he is not attractive. A surgery has a .5 chance of making him attractive and a .5 chance of doing nothing. This surgery is worth U(.5)-U(0)=.25 utilons to Joe; he'll pay up to that amount for it. Suppose instead the surgeon promises to try again, once, if the first surgery fails. Then Joe's overall chance of becoming attractive is .75, so he'll pay U(.75)-U(0)=.75^2=0.5625 for the deal. Suppose Joe has taken the first deal, and the surgeon offers to upgrade it to the second. Joe is willing to pay up to the difference in prices for the upgrade, so he'll pay .5625-.25=.3125 for the upgrade. Joe buys the upgrade. The surgeon performs the first surgery. Joe wakes up and learns that the surgery failed. Joe is entitled to a second surgery, thanks to that .3125-utility purchase of the upgrade. But the second surgery is now worth only .25 utility to him! The surgeon offers to buy that second surgery back from him at a cost of .26 utility. Joe accepts. Joe has spent a net of .0525 utility on an upgrade that gave him no benefit. As a sanity check, let's look at how it would go if Joe's U(p)=p. The single surgery is worth .5. The double surgery is worth .75. Joe will pay up to .25 utility for the upgrade. After the first surgery fails, the upgrade is worth .5 utility. Joe does not regret his purchase.
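The arithmetic in the Dutch book above checks out; a minimal sketch verifying it mechanically with U(p) = p**2 (variable names are mine):

```python
# Joe's utility-of-belief function from the comment above.
U = lambda p: p**2

single = U(0.5) - U(0.0)       # one surgery, 0.5 success chance: 0.25
double = U(0.75) - U(0.0)      # retry-on-failure deal, 0.75 overall: 0.5625
upgrade = double - single      # what Joe pays to upgrade: 0.3125

# The first surgery fails; the guaranteed second try is now worth only:
remaining = U(0.5) - U(0.0)    # 0.25

# The surgeon buys the second surgery back for 0.26, slightly more
# than Joe now values it, so Joe accepts and loses on net:
net_loss = upgrade - 0.26      # 0.0525

print(single, double, upgrade, remaining, net_loss)
```

With the linear U(p) = p, the same computation gives an upgrade price of 0.25 against a post-failure value of 0.5, so linear Joe never regrets the purchase, matching the sanity check at the end of the comment.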
3jimrandomh
You're missing the fact that how much Joe values the surgery depends on whether or not he expects to be told whether it worked afterward. If Joe expects to have the surgery but to never find out whether or not it worked, then its value is U(0.5)-U(0)=0.25. On the other hand, if he expects to be told whether it worked or not, then he ends up with a belief-score or either 0 or 1, not 0.5, so its value is (0.5*U(1.0) + 0.5*U(0)) - U(0) = 0.5. Suppose Joe is uncertain whether he's attractive or not - he assigns it a probability of 1/3. Someone offers to tell him the true answer. If Joe's utility-of-belief function is U(p)=p^2, then being told the answer is worth ((1/3)*U(1) + (2/3)*U(0)) - U(1/3) = ((1/3)*1 + (2/3)*0) - (1/9) = 2/9, so he takes the offer. If on the other hand his utility-of-belief function were U(p)=sqrt(p), then being told the information would be worth ((1/3)*sqrt(1) + (2/3)*sqrt(0)) - sqrt(1/3) = -0.244, so he plugs his ears.
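The value-of-information computation above can be checked directly; a minimal sketch (the function name is mine, not from the thread):

```python
from math import sqrt

def value_of_answer(p, U):
    """Expected change in utility-of-belief from learning the true answer,
    starting from belief probability p: the belief collapses to 1 with
    probability p and to 0 with probability (1 - p)."""
    return p * U(1.0) + (1 - p) * U(0.0) - U(p)

# Convex U(p) = p^2: learning the answer is worth 2/9, so Joe takes the offer.
print(value_of_answer(1/3, lambda x: x**2))

# Concave U(p) = sqrt(p): the answer has negative value, so Joe plugs his ears.
print(value_of_answer(1/3, sqrt))
```

For linear U the value is exactly zero at every p, which is why only nonlinear utility-of-belief functions make an agent actively seek or avoid information about its own beliefs.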
1HonoreDB
Good point. I agree here. But I still suspect that if your U(p) is anything other than linear on p, you can get Dutch-booked. I'll try to come back with a proof, or at least an argument.
4HonoreDB
Okay, here we go. I've possibly reinvented the wheel here, but maybe I've come up with a simple, original result. That'd be cool. Or I'm interestingly wrong. ---------------------------------------- We wish to show that superlinear utility-of-belief functions, or equivalently ones that would cause an agent to prefer ignorance, lead to inconsistency. Suppose Joe equally wants to believe each of two propositions, P and Q, to be true, with U(x) > x*U(1) for all probabilities x, and U(x) strictly increasing with x. Without loss of generality, we set U(0) to 0 and U(1) to 1. Both propositions concern events that will invisibly occur at some known future time. Joe anticipates that he will eventually be given the following choice, which will completely determine P and Q: Option 1: P xor Q. Joe won't know which one is true, so he believes each of them is true with probability 1/2. So he has U(1/2)+U(1/2)=2*U(1/2) utility. By assumption this is greater than 1. So let 2*U(1/2) - 1 = k. Option 2: One proposition will become definitely true. The other will become true with probability p, where p is chosen to be greater than 0 but less than U-inverse(k). Joe will know which proposition is which. Joe's utility would be less than U(1) + U(U-inverse(k)), or less than 1 + 2*U(1/2) - 1, or less than 2*U(1/2). Joe prefers Option 1. Therefore he anticipates that he will choose Option 1. Therefore, his current utility is 2*U(1/2). But what if he anticipated that he would choose Option 2? Then his current utility would be 2*U(1/2+p/2). So he wishes his k were smaller than U-inverse(k), meaning he wishes his U(x) were closer to x*U(1). If he were to modify his utility function such that U'(x) = x*U(1) for all x, the new Joe would not regret this decision since it strictly increases his expected utility under the new function. Thus we can say that all superlinear utility functions are inherently unstable, in that an agent with U(x) > x*U(1) for all probabilities x, and U(x) strictl
1HonoreDB
Apologies; I realize this is both not very clearly written, and full of holes when considered as a formal proof. I have a decent excuse in that I had to rush out the door to go to the HPMOR meetup right after writing it. Rereading it now, it still looks like a sketch of a compelling proof, so if neither jimrandomh nor any lurkers see any obvious problems, I'll write it up as a longer paper, with more rigorous math and better explanations.
0tog
Did you ever end up writing it up? I think I'd follow more easily if you went a little slower and gave some concrete examples.
1nshepperd
That's interesting. The one problem that I have is it's rather unclear when a belief is evaluated for the purposes of utility. Which is to say, does Joe care about his belief at time t=now, or t=now+delta, or over all time? It seems obvious that most utility functions that care only about the present moment would have to be dynamically inconsistent, whether or not they mention belief.
0HonoreDB
Thanks, that's a good point. In fact, it's possible we can reduce the whole thing to the observation that it matters when utility of belief function is evaluated if and only if it's nonlinear.
3jimrandomh
Thanks for taking the time to try puzzling this out, but I suspect it's just interestingly wrong. The magic seems to be happening in this paragraph: I don't see where U(1/2+p/2) comes from; should that be U(1)+U(p)? I'm also not sure it's possible for the agent to anticipate choosing option 2, given the information it has. Finally, what does it matter whether a change increases expected utility under the new function? It's only utility under the old function that matters - changing utility function to almost anything maximizes the new function, including degenerate utility functions like number of paperclips.
0HonoreDB
Joe doesn't know yet which proposition would get 1 and which would get p, so he assigns the average to both. He anticipates learning which is which, at which point it would change to 1 and p. Not sure what you mean here. It just shows the asymmetry. Joe can maximize U by changing into Joe-with-U', but Joe-with-U' can't maximize U' by changing back to U.
0NancyLebovitz
Is there a difference between utility and anticipated experiences? I can see a case that utility is probability of anticipated, desired experiences, but for purposes of this argument, I don't think that makes for an important difference.
0MoreOn
"Smart and beautiful" Joe is being Pascal's-mugged by his own beliefs. His anticipated experiences lead to exorbitantly high utility. When failure costs (relatively) little, it subtracts little utility by comparison. I suppose you could use the same argument for the lottery-playing Joe. And you would realize that people like Joe, on average, are worse off. You wouldn't want to be Joe. But once you are Joe, his irrationality looks different from the inside.
0JGWeissman
In this example, Joe's belief that he's smart and beautiful does pay rent in anticipated experience. He anticipates a favorable reaction if he approaches a girl with his gimmick and pickup line. As it happens, his inaccurate beliefs are paying rent in inaccurate anticipated experiences, and he goes wrong epistemically by not noticing that his actual experience differs from his anticipated experience and that he should update his beliefs accordingly. The virtue of making beliefs pay rent in anticipated experience protects you from forming incoherent beliefs, maps not corresponding to any territory. Joe's beliefs are coherent, correspond to a part of the territory, and are persistently wrong.
0MoreOn
If my tenants paid rent with a piece of paper that said "moneeez" on it, I wouldn't call it paying rent. In your view, don't all beliefs pay rent in some anticipated experience, no matter how bad that rent is?
0JGWeissman
No, for an example of beliefs that don't pay rent in any anticipated experience, see the first 3 paragraphs of this article:
2MoreOn
Two people have semantically different beliefs. Both beliefs lead them to anticipate the same experience. EDIT: In other words, two people might think they have different beliefs, but when it comes to anticipated experiences, they have similar enough beliefs about the properties of sound waves and the properties of falling trees and recorders and etc etc that they anticipate the same experience.
3JGWeissman
Taboo "semantically". See also the example of The Dragon in the Garage, as discussed in the followup article.
1MoreOn
Taboo'ed. See edit. Although I have a bone to pick with the whole "belief in belief" business, right now I'll concede that people actually do carry beliefs around that don't lead to anticipated experiences. Wulky Wilkinsen being a "post-utopian" (as interpreted from my current state of knowing 0 about Wulky Wilkinsen and post-utopians) is a belief that doesn't pay any rent at all, not even a paper that says "moneeez."
1Steven_Bukal
Or they pay you with forged bills. You think you'll be able to deposit them at the bank and spend them to buy stuff, but what actually happens is the bank freezes your account and the teller at the store calls the police on you.
4buybuydandavis
I think you've hit on one of the conceptual weaknesses of many Rationalists. Beliefs can pay rent in many ways, but Rationalists tend to value only the predictive utility of beliefs, and pooh-pooh the other utilities of belief. Comfort utility: it makes me feel good to believe it. Social utility: people will like me for believing it. Efficacy utility: I can be more effective if I believe it. Predictive truth is a means to value, and even if it is a value in itself, it's surely not the only value. Instead of pooh-poohing other types of utility, to convince people you need to use that predictive utility to analyze how the other utilities can best be fulfilled.
1Viktor Riabtsev
I am going to try and sidetrack this a little bit. Motivational speeches, pre-game speeches: these are real activities that serve to "get the blood flowing," as it were, pumping up enthusiasm, confidence, courage and determination. These speeches are full of cheering lines, applause lights, etc., but this doesn't detract from their efficacy or utility. Bad morale is extremely detrimental to success. I think that "Joe has utility-pumping beliefs," in that he actually believes the false fact that he is smart and beautiful, is the wrong way to think about this subject. Joe can go in front of a mirror and proceed to tell/chant to himself 3-4 times: "I am smart! I am beautiful! Mom always said so!". Is he not, in fact, simply pumping himself up? Does it matter that he isn't using any coherent or quantitative evaluation methods with respect to the terms "smart" or "beautiful"? Is he not simply trying to improve his own morale? I think the right way to describe this situation is actually: "Joe delivers self-motivational mantras/speeches to himself" and believes that this is beneficial. This belief does pay in anticipated experiences. He does feel more confident afterwards, and it does make him more effective in conveying himself and his ideas in front of others. It's a real effect, and it has little to do with a false belief that he is actually "smart and beautiful".

This post probably changed the way I regulate my own thoughts more than any other. How many arguments I have heard never would have happened if everyone involved read this...

Based on this, I would very much like to make a variant of Monopoly, with beliefs/theories in place of properties, and evidence for money. Invest a large chunk to establish a belief, with its rent determined by sophistication and usefulness of prediction, ranging from Aristotelian physics to relativity, spermatists & ovists to Darwinian evolution, and so on. Other players would have to give you some credit when they land on your theories, and admit that they give results.
This would also be a great way to teach some history of science, if well designed.
Of course, the analogy becomes interesting when you consider what corresponds to the cutthroat capitalism...

I don't understand how the examples given illustrate free-floating beliefs: they seem to have at least some predictive powers, and thus shape anticipation - (some comments by others below illustrate this better).

  • The phlogiston theory had predictive power (e.g. what kind of "air" could be expected to support combustion, and that substances would grow lighter when they burned), and it was falsifiable (and was eventually falsified). It had advantages over the theories it replaced and was replaced by another theory which represented a better under...

I think that this is really a discussion of explanatory power, of which scientific causation is one example. All theories attempt to explain a set of examples. Scientific theories attempt to explain causation in natural phenomena, thus their "explanatory power" is proportional to their predictive power. A unified theory of forces at the planetary and subatomic levels would explain more examples than any do now, thus it would have great explanatory power.

Yet causation isn't the only type of explanatory relationship. Causation implies time and eve...

2Alicorn
Aaaaaaaaugh.
0allenpaltrow
I'm not trying to define the terms, just posit a very very simple theory of the form "killing is wrong because human life is good." Such a theory would be inferior, on its own premises, to a very very simple utilitarianism, regardless of whether either theory or the premise itself is true. As such I oversimplified utilitarianism just as much, but it doesn't matter for the scope of the example. Edit: in fact, for the purposes of the example it is better if the "deontologist" is wrong about deontology, because it better illustrates how one theory can have greater explanatory power than another only on the grounds of the former's justification, without reference to external verifiability. "Human life is good" is a poor first principle, but if it is true, the utilitarian's principle applies it better than the "deontologist's" did.
0Alicorn
Someone who believes that killing is wrong because human life is good is not a deontologist. See here.
0allenpaltrow
Here the deontologist is arguing for the principle 'killing is wrong regardless of the consequences' (deontic) but uses a poor justification, for which consequentialism is a more reasonable conclusion. So the 'deontologist' is wrong even though his principle cannot be externally verified. I was just (unclearly, I see) using this strawman to illustrate how theories could be better and worse at explaining what they attempt to explain without being the sorts of things which can be proven. I will attempt to be clearer in future.

Wonderful exposition of versificationism (I meant verificationism lol, but I won't change it cause I like the reply below). I do have a question though. You said:

It's tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly.

Well yes, we don't directly observe atoms (actually we do now, but we didn't have to). But it is still safe to say that if a belief doesn't make predictions about future sensory...

3gjm
Versificationism is presumably the doctrine that the truth of a proposition should be evaluated on the basis of how easily it can be expressed in poetic form. Empirically, this seems to favour any number of probably-untrue beliefs, so I'm inclined to reject it. :-) I have in fact seen something a little like this, in a more sophisticated form, maintained seriously. For instance, here's Dorothy L Sayers (the context is her series of radio plays "The man born to be king"). "From the purely dramatic point of view the theology is enormously advantageous, because it locks the whole structure into a massive intellectual coherence. It is scarcely possible to build up anything lop-sided, trivial or unsound on that steely and gigantic framework. [...] there is no more searching test of a theology than to submit it to dramatic handling; nothing so glaringly exposes inconsistencies in a character, a story, or a philosophy as to put it upon the stage and allow it to speak for itself. [...] As I once made a character say in another context: 'Right in art is right in practice'; and I can only affirm that at no point have I yet found artistic truth and theological truth at variance." And, though I disagree with her entirely on the truth of the sort of theology she's writing about, I think she does actually have a point of sorts. But a professional writer of fiction like Sayers really ought to have known better than to suggest that truth can be distinguished from untruth by seeing how easily each can be formed into art.

A related epistemology that is popular in the business world is PowerPointificationism, which holds that the truth of a proposition should be evaluated by how easily it can be expressed in PowerPoint. Due to the nature of PowerPoint as a means of expression, this epistemology often produces results similar to those of Occam's sand-blaster, which holds that the simplest explanation is the correct one (note that unlike Occam's razor, Occam's sand-blaster does not require that the explanation be consistent with observation).

7TheOtherDave
...and I just spit coffee on my keyboard. That's marvelous... is that original with you?
2fubarobfusco
I take it you're familiar with Edward Tufte's "The Cognitive Style of PowerPoint"?

Good article. Some thoughts:

I probably constrain my experiences in lots of ways that I don't even know about, but I don't think there's always a way to know whether a belief will constrain your experiences, even if it is based on empirical (or even scientific) observation. Isaac Newton's beliefs constrained all of our beliefs for centuries. Scholars were so unwilling to question classical mechanics that they came up with this "ether" stuff that could never be observed directly, and thus didn't further constrain their experience, but had the ni...

8A1987dM
Global Positioning System
Ab3

I understand that having beliefs that are falsifiable in principle and make predictions about experience is incredibly important. But I have always wondered if my belief in falsifiability was itself falsifiable. In any possible universe I can imagine it seems that holding the principle of falsifiability for our beliefs would be a good idea. I can't imagine a universe or an experience that would make me give this up.

How can I believe in the principle of falsifiability that is itself unfalsifiable?! I feel as though something has gone wrong in my thinking but I can't tell what. Please help!

3TheOtherDave
Excellent question! Excellent, because it illustrates the problem with "believing in" the principle of falsifiability, as opposed to using it and understanding how it relates to the rest of my thinking. Forget that the principle of falsifiability is itself incredibly important. What sorts of beliefs does the principle of falsifiability tell me to increase my confidence in? To decrease my confidence in? What would the world have to be like for the former beliefs to be in general less likely than the latter?
0Ab3
Thanks for the reply, Dave. Are you saying I should not look at falsifiability as a belief, but rather as a tool of some sort? That distinction sounds interesting but is not 100% clear to me. Perhaps someone should do a larger post about why the principle should not be applied to itself. I have also thought of putting the problem this way: Eliezer states that the only ideas worth having are the ones we would be willing to give up. Is he willing to give up that idea? I don't think so, and I would be really interested to know why he doesn't believe this to be a contradiction.
4TheOtherDave
What I'm saying is that the important thing is what I can do with my beliefs. If the "principle of falsifiability" does some valuable thing X, then in worlds where the PoF doesn't do X, I should be willing to discard it. If the PoF doesn't do any valuable thing X, then I should be willing to discard it in this world.
0Ab3
It seems we have empirical and non-empirical beliefs that can both be rational, but what we mean by “rational” has a different sense in each case. We call empirical beliefs “rational” when we have good evidence for them, we call non-empirical beliefs like the PoF “rational” when we find that they have a high utility value, meaning there is a lot we can do with the principle (it excludes maps that can’t conform to any territory). To answer my original question, it seems a consequence of this is that the PoF doesn’t apply to itself, as it is a principle that is meant for empirical beliefs only. Because the PoF is a different kind of belief from an empirical belief, it need not be falsifiable, only more useful than our current alternatives. What do you think about that?
1TheOtherDave
I think it depends on what the PoF actually is. If it can be restated as "I will on average be more effective at achieving my goals if I adopt only falsifiable beliefs," for example, then it is equivalent to an empirical belief (and is, incidentally, falsifiable). If it can be restated as "I should only adopt falsifiable beliefs, whether doing so gets me anything I want or not," then there exists no empirical belief to which it is equivalent (and it is, incidentally, worth discarding).
0TimS
For me the principle of falsifiability is best understood as a way of distinguishing scientific theories about the world from other theories about the world. In other words, falsifiability is one way of defining what science is and is not. A theory that does not constrain experience ("God works in mysterious ways") is not a scientific theory because it can explain any occurrence and is therefore not falsifiable. Because falsifiability is a definition, not a theory about the world, there's no reason to think it can be falsified. The definition could be wrong by failing to accurately or usefully define scientific theory, but that's conceptually different.
0Jayson_Virissimo
Falsifiability is a very bad way to define science (or scientific theories). If falsifiability was all it took for a theory to be scientific, then all theories known to be false would be scientific (after all, if something is known to be false, it must be falsifiable). Do we really want a definition of science that says astrology is science because it's false?
0JoachimSchipper
Astrology does seem to consist of scientific hypotheses.
0Jayson_Virissimo
I chose astrology because it has a reverse halo effect around here (and so would serve me rhetorically). Feel free to replace it with any other known to be false set of propositions.
0TimS
I agree that falsifiability is not a complete definition. My point was only that falsifiability is not applicable to the principle of falsifiability, any more than it applies to mathematics. That said, Newton's physics and geocentric theories are false. Are they not science simply for that reason?
0Jayson_Virissimo
Yes. Falsifiability is a poor definition of science and is self-undermining in the sense that it can't pass its own test. Of course not. I'm not claiming a scientific theory must be true. I'm claiming that known falseness (which implies falsifiability) is not a sufficient condition for being scientific.
0TimS
That statement does not itself constrain experience. That's not a useful critique of the statement. Known falseness is not really the same thing as falsifiability. Known falseness is useless in deciding whether a theory is scientific. Both the Greek pantheon and geocentric theories are known to be false. Falsifiability is simply the requirement that a scientific theory list things that can't happen under that theory. Falsifiability says scientific theories don't look for evidence in support; they look for evidence to test the theory. The fact that no false statements appear doesn't mean that the scientific theory isn't falsifiable. The fact that every statement of a theory has been true does not mean that the theory is falsifiable.
3gwern
That doesn't seem true. The statement seems to perfectly constrain experience: you will not experience situations where theories which do not constrain experience will still be falsified. And indeed, watching the world go by over the years, I see theories like 'Christianity' or 'psychoanalysis' which do not constrain experience at all have yet to be falsified - exactly as predicted.
0TimS
Fine, you want to be contrary. What experience would falsify the partial definition of scientific theory that I have labelled "the principle of falsifiability"? If no such experience exists, does this call into doubt the usefulness of the principle?
1gwern
Are you even trying here? Here's what would falsify falsifiability: observing superior predictions being made by unfalsifiable theories, theories which have no reason to work but which do. Imagine a Christianity which came with texts loaded with prophetic symbolism which could be interpreted any way and is unfalsifiable, but which nevertheless keep turning out literally true (writes my hypothetical self, as he is tormented by Satanic wasps with the faces of humans prior to the sea turning into blood or something like that). In such a universe, falsifiability would be pretty useless.
0TimS
Isn't that essentially the best case for things like Nostradamus? Even assuming that the prophecies are accurate, they aren't useful because they are so vague. The moment that the predictions are specific enough to be useful, they could be falsified. What use is it to call that science? How could it possibly produce superior predictions in a world in which science works at all?
0gwern
Yes, that is rather the question you should be answering if you want to criticize the desirability of falsifiability as being unfalsifiable itself...
0TimS
I don't understand where we disagree, so let me clarify my position: A prophecy that is so vague that it can't be disproved is so vague that it doesn't tell you what will happen ahead of time. Calling that a prediction abuses the term to the point of incoherency. Yes, that's almost entirely a definitional point. Definitions aren't necessarily empirical statements. They are either useful or not useful in thinking carefully. Thus, the fact that they cannot be falsified is not a relevant thing to say, in the same way that it isn't useful to object that the Pythagorean theorem can't be falsified. If you intend to invoke some other critique of Popper and his use of falsifiability to distinguish science from non-science, please be more explicit, because I don't understand your argument.
0Jayson_Virissimo
Nothing in this reply contradicts anything I have asserted. I was merely claiming that if falsifiability is a sufficient condition for a hypothesis to be "scientific", then all theories known to be false are scientific (because if we know they are false, then they must be falsifiable). I'm not being contrarian; I'm pointing out a deductive consequence of the very definition of falsifiability that you linked to. Hopefully this closes the inferential distance:

* If a hypothesis is falsifiable, then it is scientific.
* If a hypothesis is known to be false, then it is falsifiable.
* Therefore, if a hypothesis is known to be false, then it is scientific.

I am merely denying the first premise via reductio ad absurdum, because the conclusion is obviously false (and the second premise isn't). If you took my claim to be something other than this, then you have simply misread me.
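The syllogism is mechanically valid, whatever one thinks of its premises; a minimal sketch in Lean (the proposition names are illustrative):

```lean
-- The three claims as abstract propositions.
variable (Falsifiable Scientific KnownFalse : Prop)

-- From "falsifiable implies scientific" and "known-false implies
-- falsifiable", it follows that "known-false implies scientific".
example
    (h1 : Falsifiable → Scientific)    -- the premise being denied
    (h2 : KnownFalse → Falsifiable) :
    KnownFalse → Scientific :=
  fun hf => h1 (h2 hf)
```

Since the conclusion is taken to be absurd and the second premise is secure, the reductio lands on the first premise, exactly as stated above.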
2TimS
That's much clearer. I didn't intend to assert that falsifiability was a sufficient condition for a theory being scientific, only that it is a necessary condition. That's what I mean by saying it was a partial definition. Thus, I don't intend to assert the first sentence of your syllogism. Instead, I would say, "If a hypothesis is not falsifiable, then it is not scientific." Adding the second statement yields: "If a hypothesis is known to be false, then it might be scientific." That's a true statement, but I don't claim it is very insightful.
2nshepperd
*shrug* I don't think the current line of enquiry is particularly useful. "Astrology works" is a scientific theory to the degree that it is, in fact, acceptable science to do an experiment to see whether or not astrology has predictive power. It's rhetorically inaccurate to say that means "astrology is science" though, because of course the practice of astrology is not. But sure, it's probably a good idea to include other conditions. Excessively unlikely (or non-reductionist?) hypotheses could be classified as non-scientific, for the simple reason that even considering them in the first place would be a case of privileging the hypothesis. None of this contradicts falsifiability being "a way of distinguishing scientific theories about the world from other theories about the world", if we have other ways of distinguishing scientific from non-scientific, such as "reductionism".
2[anonymous]
You have just refuted the contention that all warranted beliefs must be falsifiable in principle. Karl Popper, who introduced the falsifiability criterion and pushed it as far as, if not further than, it can go, never advocated that all beliefs should be falsifiable. Rather, he used falsifiability as the criterion of demarcation between science and non-science, while denying that all beliefs should be scientific. His contention that falsifiability demarcates science does imply, as he recognized, that the criterion of falsifiability is not itself a scientific hypothesis. Rational beliefs are not necessarily scientific beliefs. Mathematics is rational without being falsifiable. The same is true of philosophical beliefs, such as the belief that scientific beliefs are falsifiable. But rational beliefs that are not scientific must be refutable, and falsifiable beliefs are a proper subset of refutable beliefs. Falsifiable beliefs are refutable in one particular way: they are refutable by observation statements, which I think are equivalent to EY's anticipations. Science is special because it is 1) empirical (unlike mathematics) and 2) has an unusual capacity to grow human knowledge systematically (unlike philosophy). But that does not imply that we can make do with scientific beliefs exclusively, one reason being the one that you mention about criteria for the acceptance of scientific theories. The broader criterion of refutability doesn't necessarily involve refutation by observation statements. How would you refute the falsifiability criterion? It would be false if it were the case that scientists secured the advance of science by using some other criteria (such as verification). It's a mistake to conflate the questions of whether a theory is scientific and whether it's corroborated (by attempted falsifications). Or to conflate whether it's scientific with whether it's rationally believable. Theories aren't bad because they aren't science. They're bad because they're set up to be irrefutable.
0Ab3
Thank you for your thoughts. What are the criteria that we use for accepting or refuting rational non-empirical beliefs? You mention that falsifiability would be refuted if some other criteria “secured the advance of science.” You also mention that we should give up the refutability criterion if “sheer dogmatism conduces to the growth of knowledge.” It sounds like our criteria for the refutability of non-empirical beliefs are mostly practical; we accept the epistemic assumptions that make things “work best.” Is there more to it than this?
1[anonymous]
To be pedantic and Popperian, I'd have to correct your use of "empirical beliefs." The philosophical positions at issue aren't scientific, but they are empirical. "Empirical" claims--ones that can serve as the basis for scientific observation statements--must be expressible in low-level observation sentences that all competent scientists agree on. The belief in question is that science's crucial distinguishing feature allowing it to advance is the subjection of science's claims to empirical testing, allowing strict falsification. We can't run an experiment or otherwise record observation statements, so we resort to philosophical debate aimed at refutation. Refutation is obtained by plausible argument. For instance, in the discussion about demarcation, an example of a potentially plausible argument goes: if we relied on falsification exclusively, we would never have evidence that a claim is true, only that it isn't false. But we rely on scientific theories and consider them close to the truth (or at least probably so). Therefore, falsifiability can't explain the distinctiveness of science. This involves highly plausible claims, based on observation, about how we in fact use scientific theories. But although the result of observation, it can't be reduced to something everyone agrees on that is closely tied to direct perception, as with an observation statement.

I have read this post before and have agreed to it. But I read it again just now and have new doubts.

I still agree that beliefs should pay rent in anticipated experiences. But I am not sure any more that the examples stated here demonstrate it.

Consider the example of the tree falling in a forest. Both sides of the argument do have anticipated experiences connected to their beliefs. For the first person, the test of whether a tree makes a sound or not is to place an air vibration detector in the vicinity of the tree and check it later. If it did detect some...

Suppose someone, on inspecting his own beliefs to date, discovers a certain sense of underlying structure; for instance, one may observe a recurring theme of evolutionary logic. Then while deciding on a new set of beliefs, would it not be considered reasonable for him to anticipate and test for similar structure, just as he would use other 'external' evidence? Here, we are not dealing with direct experience so much as the mere belief of an experience of coherence within one's thoughts, which may be an illusion, for all we know. But then again, assuming that the existing thoughts came from previous 'external' evidence, could one say that the anticipated structure is indeed well-rooted in experience already?

I was reading those 'what good is math?' and 'what good is music?' comments. You can determine whether any 'system' is good or bad based on the understanding or misunderstanding of the variables involved.

i.e., one does not have any use for math if one does not understand any of the vast number of variables associated with the concepts of math. Math cannot be any good to a person who doesn't understand it.

This principle applies to any 'system' whether it be math, music, love, life... etc.

If a belief turns deadbeat, evict it.

This might be challenging because our beliefs tend to shape the world we live in, thus masking their error. Does anyone have any practical tips for discovering erroneous beliefs?

2Nectanebo
The post you replied to is helpful advice for doing just that. When what you specifically anticipate doesn't line up with what happens, that's discovering a possible erroneous belief.
-1TheAncientGeek
If a belief encapsulates a value, if it's about how you want the world to be, why shouldn't it shape the world, and why should you evict it?
0ChristianKl
Making predictions about the world based on your beliefs and seeing whether those predictions hold true.

What about things I remember from long ago, which no one else remembers and for which I can find no present evidence or record of besides those memories themselves?

Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you.

What if I had the belief that a certain coin was unfair, with a 51% chance of heads and only a 49% chance of tails? Certainly I could observe an absurd number of coin flips, and each bunch of them could nudge my belief -- but short of an infinite number of flips, none would "definitely" falsify it. Certainly in this case, I could come to believe with an...
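The never-strictly-falsified character of such a belief is easy to see in a likelihood-ratio sketch (my own illustration, comparing the 51/49 hypothesis against a fair coin):

```python
from fractions import Fraction

# Posterior odds for H1 "P(heads) = 0.51" against H0 "fair coin" after a
# run of flips. Exact rational arithmetic; note the odds can shrink toward
# zero but never reach it, so no finite run strictly falsifies H1.
def posterior_odds(heads, tails, prior_odds=Fraction(1, 1)):
    like_h1 = Fraction(51, 100) ** heads * Fraction(49, 100) ** tails
    like_h0 = Fraction(1, 2) ** (heads + tails)
    return prior_odds * like_h1 / like_h0

odds = posterior_odds(490, 510)   # a tails-heavy run is evidence against H1...
assert 0 < odds < 1               # ...but never a strict refutation
```

So the belief does constrain experience probabilistically — each batch of flips shifts the odds — even though no observation "definitely" falsifies it.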

-1tylerj
What caused you to believe a 51% chance of heads versus a 49% chance of tails?

Another example of these types of questions: "If a man who cannot count finds a four-leaf clover, is he lucky?" (Stanisław Jerzy Lec)

Or suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a "post-utopian".

Suppose you, an invisible man, overheard 1,000,000 distinct individual humans proclaim "I believe that Velma Valedo and Wulky Wilkinsen are post-utopians based on several thorough readings of their complete bibliographies!"

Must there be some correspondence (probably an extremely complex connection) between the writings, and, quite possibly, between some of the 1,000,000 brains that believe this? The subjectivel...

What evidence is there for floating beliefs being uniquely human? As far as I know, neuroscience hasn't advanced far enough to be able to tell if other species have floating beliefs or not.

Edit: Then again, the question of whether floating beliefs are uniquely human is practically a floating belief itself.

Interesting post. However, I do not completely agree with the conclusions at the end.

I am a student of mathematics, which puts me in an environment of researchers in this area. I can see that these people's work is based on beliefs that 'do not exist'; I mean, they work on abstract ideas that generally exist only in their minds. And now I wonder, do their efforts 'not pay rent'? They work with structures and objects that, in most cases, cannot be found in 'real life', and so, according to the article's conclusion, ... (read more)

3LawrenceC
You're definitely right that there are some areas where it's easier to make beliefs pay rent than others! I think there are two replies to your concern: 1) First, many theories from math DO pay rent (the ones I'm most aware of are statistics and computer-science related ones). For example, better algorithms in theory (say Strassen's algorithm for multiplying matrices) often correspond to better results in practice. Even more abstract stuff like number theory or recursion theory does yield testable predictions. 2) Even things that can't pay rent directly can be logical implications of other things that pay rent. Eliezer wrote about this kind of reasoning here.
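The Strassen point above can be cashed out as an anticipated experience. A minimal sketch, using the standard 2x2 Strassen identities (the matrices chosen are arbitrary examples): the theory predicts that seven multiplications suffice where the naive method uses eight, and that prediction is directly testable.

```python
# Strassen's algorithm for 2x2 matrices: 7 scalar multiplications (m1..m7)
# instead of the naive 8, yet it predicts -- and produces -- the same product.

def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    # Standard definition: 8 multiplications.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
print(strassen_2x2(A, B) == naive_2x2(A, B))  # True
```

The belief "Strassen's identities compute matrix products" constrains anticipation: it forbids the two functions from ever disagreeing.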

If we extend the concept of making beliefs pay rent to structures in computer memory, then AIs could better choose which structures are more valuable than they cost when many objects are shared in an acyclic network. Each object at the bottom could cost 1, and any objects pointing at x equally share the cost of x plus 1 for itself. If beliefs are stored in these memory structures, then a belief would be evicted when its cost exceeds some measure of its value, and total value would be in units of memory available. When some beliefs are evicted, ... (read more)
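The cost-sharing scheme described above might look something like this. A hypothetical toy, assuming a DAG in which `graph` maps each object to the objects it points at (all node names here are invented for illustration): leaves cost 1, and an object's cost is 1 plus an equal share of each dependency's cost, split among everything that points at it.

```python
from functools import lru_cache

# Hypothetical belief network: each object maps to the objects it points at.
graph = {
    "leaf_a": [],
    "leaf_b": [],
    "shared": ["leaf_a", "leaf_b"],
    "belief1": ["shared"],
    "belief2": ["shared"],
}

# Count how many objects point at each node, so its cost can be shared.
pointers = {node: 0 for node in graph}
for deps in graph.values():
    for d in deps:
        pointers[d] += 1

@lru_cache(maxsize=None)
def cost(node):
    """1 for the node itself, plus an equal share of each dependency's cost."""
    return 1 + sum(cost(d) / max(pointers[d], 1) for d in graph[node])

# "shared" costs 3 (itself plus two leaves); belief1 and belief2 each
# bear half of that, plus 1 for themselves.
print(cost("belief1"))  # 1 + 3/2 = 2.5
```

An eviction rule could then compare `cost(node)` against some value measure and drop the belief when cost exceeds value, as the comment suggests.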

This is enlightening.

Wulky Wilkinsen is a “post-utopian.” What does this mean you should expect from his books? Nothing. The belief, if you can call it that, doesn’t connect to sensory experience at all.

I don't believe this is a good example. That information actually can change your anticipation.

Knowing that information, you can expect the book to be set in a post-utopian world. Anticipating that, you can perhaps take better notice of the setting and of how exactly the world is post-utopian.

But a great article nevertheless.

I don't get it. Any belief could be said to "pay rent" if you can conceive of a situation where it will be useful later on.

Here is a general situation that I made up:

Given any belief X that at least 2 people believe, I always have utility in believing X (I think it should be "knowing") as it helps me predict the actions of the other 2 people who believe X.

Even in the example where the student regurgitates it onto the upcoming quiz, the belief had utility for him, as he could use it to improve his grades (constraining reality in a way he wants it to be).

I... (read more)

3jeronimo196
Just so. And a belief that leads to correct predictions will (generally) be more useful than a belief that doesn't. I think I see a confusion with the term "eviction" here. There is a difference between believing X exists (knowing about X) and believing X is true (believing X). So, "evicting X" should be understood as "no longer believing X", rather than "erasing all knowledge of X" (which happens involuntarily anyway). I hope this was helpful, as this is my first comment, too. Anyway, I've lurked a while and I don't think anyone here would begrudge you raising an honest question. P.S. Welcome to Less Wrong :) !!! Edit: formatting.

Yes! And another way to think about the arguments about beliefs that aren’t predicting anything is that they are really about definitions. When I listen to people talk and argue, I often find myself thinking “well, this depends on how you define X”. For example, is sound something that a living creature perceives, or is it vibrations in the air?

Why is 'constraining anticipation' the only acceptable form of rent?

What if a belief doesn't modify the predictions generated by the map, but it does reduce the computational complexity of moving around the map in our imaginations? It hasn't constrained anticipation in theory, but in practice it allows us to more cheaply collapse anticipation fields, because it lowers the cost of reasoning about what to anticipate in a given scenario. I find concepts like the multiverse very useful here - you don't 'need' them to reduce your anticipation as... (read more)

Then what is the difference between belief and assumption in our mental maps?

What about imagination? Is that a belief, an assumption, or an incongruent map of reality?

Can imagination be part of mental processing without making us wrong about reality?

For instance, if I imagine that all buses in my city are blue, though they are red, can I then walk around with this model of reality in my head without holding a false belief? After all, it's just imagination.

Or is this model going to corrupt my thinking as I walk about with it, knowing full well it's not true?

Furthe... (read more)

A couple of important limitations to the concept:

The concept assumes that beliefs should be tied to observable, testable phenomena. However, there are many important aspects of life and human experience (like emotions, subjective experiences, and certain philosophical or religious beliefs) that aren't easily observable or testable. The concept can be less applicable or useful in these areas. 

It also doesn't address truth value: The concept encourages beliefs to be tied to specific anticipations, but it doesn't necessarily address the truth value of th... (read more)

2Raemon
The post isn't meant to be an explanation for why beliefs exist, it's meant to highlight that by default, people have a bundle of things-that-feel-like beliefs that all seem to be a similar shape. But, if your goal is to figure out what's true and make good plans, it's very important to separate out which of your 'beliefs' are about predicting reality, and which are there for other reasons.

It’s tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don’t see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don’t experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back

... (read more)

There is a floor beneath your feet, but you don't experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light

But indeed, I experience the floor directly; the experience of the floor is not limited to visual perception but also involves direct sensory inputs. The sensation caused by gravitational pull and the counter-pressure from the floor are experienced directly. Additionally, the sound produced when stepping on the floor and the anticipation of the fl... (read more)

continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.

I guess that "paying rent" was not only a metaphor xD. But it's really good advice to cut back on everything that lacks practical use when you're on a tight schedule/budget.