
Qualitatively Confused

Post author: Eliezer_Yudkowsky 14 March 2008 05:01PM

Followup to: Probability is in the Mind, The Quotation is not the Referent

I suggest that a primary cause of confusion about the distinction between "belief", "truth", and "reality" is qualitative thinking about beliefs.

Consider the archetypal postmodernist attempt to be clever:

"The Sun goes around the Earth" is true for Hunga Huntergatherer, but "The Earth goes around the Sun" is true for Amara Astronomer!  Different societies have different truths!

No, different societies have different beliefs.  Belief is of a different type than truth; it's like comparing apples and probabilities.

Ah, but there's no difference between the way you use the word 'belief' and the way you use the word 'truth'!  Whether you say, "I believe 'snow is white'", or you say, "'Snow is white' is true", you're expressing exactly the same opinion.

No, these sentences mean quite different things, which is how I can conceive of the possibility that my beliefs are false.

Oh, you claim to conceive it, but you never believe it.  As Wittgenstein said, "If there were a verb meaning 'to believe falsely', it would not have any significant first person, present indicative."

And that's what I mean by putting my finger on qualitative reasoning as the source of the problem.  The dichotomy between belief and disbelief, being binary, is confusingly similar to the dichotomy between truth and untruth.

So let's use quantitative reasoning instead.  Suppose that I assign a 70% probability to the proposition that snow is white.  It follows that I think there's around a 70% chance that the sentence "snow is white" will turn out to be true.  If the sentence "snow is white" is true, is my 70% probability assignment to the proposition, also "true"?  Well, it's more true than it would have been if I'd assigned 60% probability, but not so true as if I'd assigned 80% probability.

When talking about the correspondence between a probability assignment and reality, a better word than "truth" would be "accuracy".  "Accuracy" sounds more quantitative, like an archer shooting an arrow: how close did your probability assignment strike to the center of the target?

To make a long story short, it turns out that there's a very natural way of scoring the accuracy of a probability assignment, as compared to reality: just take the logarithm of the probability assigned to the real state of affairs.

So if snow is white, my belief "70%: 'snow is white'" will score -0.51 bits:  Log2(0.7) = -0.51.

But what if snow is not white, as I have conceded a 30% probability is the case?  If "snow is white" is false, my belief "30% probability: 'snow is not white'" will score -1.73 bits.  Note that -1.73 < -0.51, so I have done worse.

About how accurate do I think my own beliefs are?  Well, my expectation over the score is 70% * -0.51 + 30% * -1.73 = -0.88 bits.  If snow is white, then my beliefs will be more accurate than I expected; and if snow is not white, my beliefs will be less accurate than I expected; but in neither case will my belief be exactly as accurate as I expected on average.
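The scores above can be checked directly with a short script (a minimal sketch; the variable names are mine):

```python
import math

p_white = 0.7  # probability I assign to "snow is white"

# Log score, in bits, for each way reality could turn out
score_if_white = math.log2(p_white)          # ~ -0.51 bits
score_if_not_white = math.log2(1 - p_white)  # ~ -1.74 bits

# Expected score, taken over my own 70/30 distribution
expected = p_white * score_if_white + (1 - p_white) * score_if_not_white
print(round(expected, 2))  # -0.88
```

Note that the expected score of any honest probability assignment is negative unless the probability is exactly 1; uncertainty always costs something in expectation.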

All this should not be confused with the statement "I assign 70% credence that 'snow is white'."  I may well believe that proposition with probability ~1—be quite certain that this is in fact my belief.  If so I'll expect my meta-belief "~1: 'I assign 70% credence that "snow is white"'" to score ~0 bits of accuracy, which is as good as it gets.

Just because I am uncertain about snow, does not mean I am uncertain about my quoted probabilistic beliefs.  Snow is out there, my beliefs are inside me.  I may be a great deal less uncertain about how uncertain I am about snow, than I am uncertain about snow.  (Though beliefs about beliefs are not always accurate.)

Contrast this probabilistic situation to the qualitative reasoning where I just believe that snow is white, and believe that I believe that snow is white, and believe "'snow is white' is true", and believe "my belief '"snow is white" is true' is correct", etc.  Since all the quantities involved are 1, it's easy to mix them up.

Yet the nice distinctions of quantitative reasoning will be short-circuited if you start thinking "'"snow is white" with 70% probability' is true", which is a type error.  It is a true fact about you, that you believe "70% probability: 'snow is white'"; but that does not mean the probability assignment itself can possibly be "true".  The belief scores either -0.51 bits or -1.73 bits of accuracy, depending on the actual state of reality.

The cognoscenti will recognize "'"snow is white" with 70% probability' is true" as the mistake of thinking that probabilities are inherent properties of things.

From the inside, our beliefs about the world look like the world, and our beliefs about our beliefs look like beliefs.  When you see the world, you are experiencing a belief from the inside.  When you notice yourself believing something, you are experiencing a belief about belief from the inside.  So if your internal representations of belief, and belief about belief, are dissimilar, then you are less likely to mix them up and commit the Mind Projection Fallacy—I hope.

When you think in probabilities, your beliefs, and your beliefs about your beliefs, will hopefully not be represented similarly enough that you mix up belief and accuracy, or mix up accuracy and reality.  When you think in probabilities about the world, your beliefs will be represented with probabilities (0, 1).  Unlike the truth-values of propositions, which are in {true, false}.  As for the accuracy of your probabilistic belief, you can represent that in the range (-∞, 0).  Your probabilities about your beliefs will typically be extreme.  And things themselves—why, they're just red, or blue, or weighing 20 pounds, or whatever.
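The type distinction can be made explicit in a toy sketch (the names, and the 0.999 figure for the meta-belief, are my own illustration, not from the post):

```python
import math

fact = True          # the territory: snow is, in fact, white
belief = 0.7         # my probability for "snow is white" -- a number in (0, 1)
meta_belief = 0.999  # my probability that 0.7 really is my belief

def accuracy(p: float, state: bool) -> float:
    """Log score in bits: a function of belief AND territory, in (-inf, 0)."""
    return math.log2(p if state else 1 - p)

print(accuracy(belief, fact))       # ~ -0.51: decent, not perfect
print(accuracy(meta_belief, True))  # near 0: close to the best possible score
```

The three quantities live in three different ranges, so it is hard to mistake one for another: truth-values in {True, False}, beliefs in (0, 1), accuracies in (-inf, 0).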

Thus we will be less likely, perhaps, to mix up the map with the territory.

This type distinction may also help us remember that uncertainty is a state of mind.  A coin is not inherently 50% uncertain of which way it will land.  The coin is not a belief processor, and does not have partial information about itself.  In qualitative reasoning you can create a belief that corresponds very straightforwardly to the coin, like "The coin will land heads".  This belief will be true or false depending on the coin, and there will be a transparent implication from the truth or falsity of the belief, to the facing side of the coin.

But even under qualitative reasoning, to say that the coin itself is "true" or "false" would be a severe type error.  The coin is not a belief, it is a coin.  The territory is not the map.

If a coin cannot be true or false, how much less can it assign a 50% probability to itself?
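The point can be made concrete with two agents (a toy sketch; the agents and their information states are my own illustration):

```python
# The coin's face is a settled fact of the territory; the 50% lives in the map.
outcome = "heads"  # suppose this is how the coin in fact landed

# An agent with no information about the toss assigns:
p_ignorant = 0.5
# An agent who watched the coin land assigns:
p_informed = 1.0 if outcome == "heads" else 0.0

# Same coin, same territory, different probabilities -- because probability
# is a property of an agent's information, not of the coin.
print(p_ignorant, p_informed)  # 0.5 1.0
```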

 

Part of the sequence Reductionism

Next post: "Reductionism"

Previous post: "The Quotation is not the Referent"

Comments (75)

Comment author: HalFinney 14 March 2008 06:36:38PM 1 point [-]

It's not too uncommon for people to describe themselves as uncertain about their beliefs. "I'm not sure what I think about that," they will say on some issue. I wonder if they really mean that they don't know what they think, or if they mean that they do know what they think, and their thinking is that they are uncertain where the truth lies on the issue in question. Are there cases where people can be genuinely uncertain about their own beliefs?

Comment author: cmessinger 22 August 2012 03:42:08PM 3 points [-]

I imagine what they might be doing is acknowledging that they have a variety of reactions to the facts or events in question, but haven't taken the time to weigh them so as to come up with a blend or selection that is one of: {most accurate, most comfortable, most high status}

Comment author: Rixie 29 March 2013 04:55:47PM 1 point [-]

I can testify to that.

Say, does anyone know where I can find unbiased information on the whole Christianity/Atheism thing?

Comment author: Desrtopa 29 March 2013 06:39:16PM 5 points [-]

How strict are your criteria for "unbiased?"

Some writers take more impartial approaches than others, but strict apatheists are unlikely to bother doing comprehensive analyses of the evidence for or against religions.

Side note: if you're trying to excise bias in your own thinking, it's worth stopping to ask yourself why you would frame the question as a dichotomy between Christianity and atheism in the first place.

Comment author: Rixie 29 March 2013 08:19:00PM 1 point [-]

I'm not sure how strict is strict, but maybe something that is trying to be unbiased. A lot of websites present both sides of the story, and then logically conclude that their side is the winner, 100 percent of the time.

And I used Atheism/Christianity because I was born a Christian and I think that Atheism is the only real, um, threat, let's say, to my staying a Christian.

Although, I haven't actually tried to research anything else, I realize.

Comment author: Desrtopa 29 March 2013 10:48:31PM *  6 points [-]

Well, Common Sense Atheism is a resource by a respected member here who documented his extensive investigations into theology, philosophy and so on, which he started as a devout Christian and finished as an atheist.

Unequally Yoked is a blog coming from the opposite end, someone familiar with the language of rationality who started out as an atheist and ended up as a theist.

I don't actually know where Leah (the author of the latter) archives her writings on the process of her conversion; I've really only read Yvain's commentary on them, but she's a member here and the only person I can think of who's written from the convert angle, who I haven't read and written off for bad reasoning.

By the time I encountered either person's writings, I'd already hashed out the issue to my own satisfaction over a matter of years, and wasn't really looking for more resources, so to the extent that I can vouch for them, it's on the basis of their writings here rather than at their own sites, which is rather more extensive for Luke than Leah.

However, I will attest that my own experience of researching and developing my opinion on religion was as much shaped by reading up on many world religions as it was by reading religious and atheist philosophy. If you're prepared to investigate the issue thoroughly for a long time, I suggest reading up on a lot of other religions, in-depth. Many of my own strongest reasons for not buying into common religious arguments are rooted, not in my experience with atheistic philosophy, but my experience with a wide variety of religions.

Comment author: gjm 30 March 2013 02:03:21AM 5 points [-]

Leah has written less than one might hope on her reasons for converting, and basically nothing on how she now deals with all the usual atheist objections to Christian belief. Her primary reason for conversion appears to have been that Christianity fits better than atheism with the moral system she has always found most believable.

Someone who I think is an LW participant (but I don't know for sure, and I don't know under what name) wrote this fairly lengthy apologia for atheism; I think it was a sort of open letter to his friends and family explaining why he was leaving Christianity.

In the course of my own transition from Christianity to atheism I wrote up a lot of notes (approximately as many words as one paperback book), attempting to investigate the issue as open-mindedly as I could. (When I started writing them I was a Christian; when I stopped I was an atheist.) I intermittently think I should put them up on the web, but so far haven't done so.

There are any number of books looking more or less rigorously at questions like "does a god exist?" and "is Christianity right?". In just about every case, the author(s) take a quite definite position and are writing to persuade more than to explore, so they tend not to be, nor to feel, unbiased. Graham Oppy's "Arguing about gods" is pretty even-handed, but quite technical. J L Mackie's "The miracle of theism" is definitely arguing on the atheist side but generally very fair to the other guys, and shows what I think is a good tradeoff between rigour and approachability -- but it's rather old and doesn't address a number of the arguments that one now hears all the time when Christians and atheists argue. The "Blackwell Companion to Natural Theology" is a handy collection of Christians' arguments for the existence of God (and in some cases for more than that); not at all unbiased but its authors are at least generally trying to make sound arguments rather than just to sound persuasive.

Comment author: [deleted] 30 March 2013 05:50:27AM *  1 point [-]

who I haven't read and written off for bad reasoning.

Do you mind providing examples of what you consider to be not-bad reasoning, so that I might update my beliefs about the quality of her work? I have read many posts written by Leah about a range of topics, including her conversion to Catholicism, and I thought her arguments often made absolutely no sense.

Comment author: Desrtopa 30 March 2013 02:11:49PM *  2 points [-]

Leah is an example of someone arguing from the convert angle who I haven't read and written off because I haven't read her convert stuff. I can't vouch for her arguments for conversion, I can only say that I wouldn't write her off in general as someone worth paying attention to.

I can't say the same of any of the other converts I can think of; C.S. Lewis is the usual go-to figure given by Christians, and while I have respect for his ability as a writer, I already know from my exposure to his apologetics that I couldn't direct anyone to him as a resource in good conscience.

Comment author: [deleted] 30 March 2013 03:34:51PM 0 points [-]

I haven't read her convert stuff

Ah, thanks for the clarification. I misunderstood you. I thought you meant that you had read her conversion-related writings and found her reasoning to be not-bad.

I wouldn't write her off in general as someone worth paying attention to

Here is where we differ greatly, but I will continue reading her writings to see if my beliefs about the quality of her stuff will be updated upon more exposure to her thinking.

Comment author: gwern 30 March 2013 10:05:14PM 0 points [-]

A lot of websites present both sides of the story, and then logically conclude that their side is the winner, 100 percent of the time.

If you presented both sides of an issue, concluding the other side was right, how would you then conclude your side is the winner?

"If there were a verb meaning 'to believe falsely,' it would not have any significant first person present indicative."

Comment author: Vaniver 30 March 2013 10:33:26PM 2 points [-]

If you presented both sides of an issue, concluding the other side was right, how would you then conclude your side is the winner?

If they are sub-issues for a main issue (like the policy impacts of a large decision), one might expect things to go the other way sometimes. "Supporters claim that minimum wages give laborers a stronger bargaining position at the cost of increased unemployment, which may actually raise the total wages going to a particularly defined group. This is possibly true, but doesn't seem strong enough to overcome the efficiency objections as well as the work experience objections."

Comment author: gwern 02 April 2013 03:50:28PM 0 points [-]

'Possibly true' is not agreeing. If you conceded the sub-issue without changing your side, then the sub-issue must have been tangential and not definitive. In a conjunctive counterargument, I can concede some or almost all of the conjuncts and agree, without agreeing on the conclusion - and so anyone looking at my disagreements will note how odd it is that I always conclude I am currently correct...

Comment author: Rixie 02 April 2013 04:33:10PM *  -2 points [-]

Well, theology isn't science. If you do an experiment and the result goes against your hypothesis, your hypothesis is false, period. It's not necessarily like that when people are arguing with logic instead of experiments. No one on either side would make an argument that wasn't logically correct. I've read both Christian and Atheist material that makes a lot of sense, although I realize now that I should probably review them because that was before I discovered Less Wrong. There are also plenty of intelligent people who have looked at all the evidence and gone both ways.

There is something very wrong here, from a rationalist's point of view.

Are there people here that have gone from Christianity to Atheism or the other way around? Or for any other religion? Can I talk to you?

Comment author: Viliam_Bur 04 May 2013 05:06:00PM 3 points [-]

There is something very wrong here, from a rationalist's point of view.

Seems to me the wrong thing is exactly that experiments are not allowed in the debate. Leaving out the voice of reality, all we are left with are the voices of humans. And humans are well known liars.

Comment author: CCC 16 April 2013 07:46:39AM 2 points [-]

A lot of websites present both sides of the story, and then logically conclude that their side is the winner, 100 percent of the time.

I would be very surprised (and immediately suspicious) to find a website that didn't. People like to be right. If someone does a lot of research, writes up an article, and comes up with what appears to be overwhelming support for one side or the other, then they will begin to identify with their side. If that was the side they started with, then they would present an article along the lines of "Why <My Side> Is Correct". If that was not the side they started with, then they would present an article along the lines of "Why I Converted To <My New Side>".

If they don't come up with overwhelming support for one side or another, then I'd imagine they'd either claim that there is no strong evidence against their side, or write up an article in support of agnosticism.

Comment author: Rixie 16 April 2013 04:32:34PM 2 points [-]

It's not just that there's overwhelming support for their side, it's that there is only support for their side, and this happens on both sides.

Comment author: CCC 17 April 2013 08:19:06AM 0 points [-]

That's surprising. I'd expect at least some of them to at least address the arguments of the other side.

Comment author: MugaSofer 17 April 2013 11:53:41AM 0 points [-]

I'm pretty sure proof that the other side's claims are mistaken is included in "support for their side".

Comment author: CCC 17 April 2013 07:10:49PM 2 points [-]

...right. I take your point.

Comment author: MugaSofer 30 March 2013 11:06:10PM -3 points [-]

No.

Anyone who tells you they can is themself biased. You can tell in which direction by reading the conclusion of whatever they recommend.

Comment author: Manfred 30 March 2013 11:14:34PM 1 point [-]

"Unbiased" is a tricky word to use here, because typically it just means a high-quality, reliable source. But what I think you're looking for is a source that is high quality but intentionally resists drawing conclusions even when someone trying to be accurate would do that - it leaves you, the reader, to do the conclusion-drawing as much as possible (perhaps at the cost of reliability, like a sorcerer who speaks only in riddles). Certain history books are the only sources I've thought of that really do this.

Comment author: JohnWittle 16 April 2013 03:41:37AM *  0 points [-]

I don't think there is ever a direct refutation of religion in the Sequences, but if you read all of them, you will find yourself much better equipped to think about the relevant questions on your own.

EY is himself an Atheist, obviously, but each article in the Sequences can stand upon its own merit in reality, regardless of whether they were written by an atheist or not. Since EY assumes atheism, you might run across a couple examples where he assumes the reader is an atheist, but since his goal is not to convince you to be an atheist, but rather, to be aware of how to properly examine reality, I think you'd best start off clicking "Sequences" at the top right of the website.

Comment author: fractalman 24 May 2013 06:22:20AM 0 points [-]

"unbiased", "christianity/atheism"... ok, I probably shouldn't be laughing, but... well, I am laughing.

Comment author: steven 14 March 2008 06:57:54PM 0 points [-]

If a coin has certain gross physical features such that a rational agent who knows those features (but NOT any details about how the coin is thrown) is forced to assign a probability p to the coin landing on "heads", then it seems reasonable to me to speak of discovering an "objective chance" or "propensity" or whatever. These would be "emergent" in the non-buzzword sense. For example, if a coin has two headses, then I don't see how it's problematic to say the objective chance of heads is 1.

Comment author: Cyan2 14 March 2008 07:48:59PM 1 point [-]

If a coin has certain gross physical features such that a rational agent who knows those features (but NOT any details about how the coin is thrown) is forced to assign a probability p to the coin landing on "heads", then it seems reasonable to me to speak of discovering an "objective chance" or "propensity" or whatever.

You're saying "objective chance" or "propensity" depends on the information available to the rational agent. My understanding is that the "objective" qualifier usually denotes a probability that is thought to exist independently of any agent's point of view. Likewise, my understanding of the term "propensity" is that it is thought to be some inherent quality of the object in question. Neither of these phrases usually refers to information one might have about an object.

You've divided a coin-toss experiment's properties into two categories: "gross" (we know these) and "fine" (we don't know these). You can't point to any property of a coin-toss experiment and say that it is inherently, objectively gross or fine -- the distinction is entirely about what humans typically know.

In short, I'm saying you agree with Eliezer, but you want to appropriate the vocabulary of people who don't.

(I'd agree that such probabilities can be "objective" in the sense that two different agents with the exact same state of information are rationally required to have the same probability assessment. Probability isn't a function of an individual -- it's a function of the available information.)

Comment author: Constant2 14 March 2008 09:53:02PM 0 points [-]

You're saying "objective chance" or "propensity" depends on the information available to the rational agent.

Apparently he is, but it can be rephrased. "What information is available to the rational agent" can be rephrased as "what is constrained". In the particular example, we constrain the shape of the coin but not ways of throwing it. We can replace "probability" with "proportion" or "fraction". Thus, instead of asking, "what is the probability of the coin coming up heads", we can ask, "what proportion of possible throws will cause the coin to come up heads." Of course, talking about proportion requires the assignment of a measure on the space of possibilities. This measure in turn can be derived from the geometry of the world in much the same way as distance, area, volume, and so on can be derived. That is to say, just as there is an objective (and not merely subjective) sense in which two rods can have the same length, so is there an objective (and not merely subjective) sense in which two sets of possibilities can have the same measure.
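Constant's "proportion of possible throws" can be illustrated with a toy model (my own sketch, not from the thread; the deterministic coin and the chosen range of spins are illustrative assumptions): make the toss fully deterministic, and the familiar ~50% emerges as the measure of initial conditions that land heads.

```python
import math

def lands_heads(omega: float, t: float = 1.0) -> bool:
    """Toy deterministic coin: heads iff the total rotation omega*t
    ends in the top half-turn."""
    return (omega * t) % (2 * math.pi) < math.pi

# "What proportion of possible throws will cause the coin to come up heads?"
# Sweep a wide, smooth range of initial spin rates and measure the fraction.
n = 100_000
heads = sum(lands_heads(10.0 + 100.0 * i / n) for i in range(n))
print(heads / n)  # close to 0.5 for any wide enough range of spins
```

Nothing here is random: each throw is determined by its initial spin. The 0.5 is a geometric fact about the measure of the heads-region in the space of possibilities, which is Constant's point about deriving the measure from the physics.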

Comment author: Doug_S. 14 March 2008 10:45:56PM 1 point [-]

[nitpick] That is to say, just as there is an objective (and not merely subjective) sense in which two rods can have the same length

Well, there are the effects of relativity to keep in mind, but if we specify an inertial frame of reference in advance and the rods aren't accelerating, we should be able to avoid those. ;) [/nitpick]

I'm joking, of course; I know what you meant.

Comment author: Caledonian2 14 March 2008 11:40:31PM 0 points [-]

No, these sentences mean quite different things, which is how I can conceive of the possibility that my beliefs are false.

No, on both counts. The sentences do not mean quite different things, and that is not how you conceive of the possibility that your beliefs are false.

One is a statement of belief, and one is a meta-statement of belief. Except for one level of self-reference, they have exactly the same meaning. Given the statement, anyone can generate the meta-statement if they assume you're consistent, and given the meta-statement, the statement necessarily follows.

Comment author: Mark_Spottswood2 15 March 2008 03:22:24AM 3 points [-]

Caledonian: The statement "x is true" could be properly reworded as "X corresponds with the world." The statement "I believe X" can be properly reworded as "X corresponds with my mental state." Both are descriptive statements, but one is asserting a correspondence between a statement and the world outside your brain, while the other is describing a correspondence between the statement and what is in your brain.

There will be a great degree of overlap between these two correspondence relations. Most of our beliefs, after all, are (probably) true. That being said, the meanings are definitely not the same. Just because it is not sensible for us to say that "x is true" unless we also believe x (because we rarely have reason to assert what we do not believe), does not mean that the concepts of belief and truth are the same thing.

It is meaningful (if unusual) to say: "I believe X, but X is not true." No listener would have difficulty understanding the meaning of that sentence, even if they found it an odd thing to assert. Any highly reductionist account of truth or belief will always have difficulty explaining the content that everyday users of English would draw from that statement. Likewise, no normal user of English would think that "I believed X, but it isn't true," would necessarily mean, "X used to be true, but now it is false," which seems like the only possible reading, on your account.

Comment author: thomblake 05 February 2010 10:04:44PM 0 points [-]

It is meaningful (if unusual) to say: "I believe X, but X is not true." No listener would have difficulty understanding the meaning of that sentence, even if they found it an odd thing to assert. Any highly reductionist account of truth or belief will always have difficulty explaining the content that everyday users of English would draw from that statement. Likewise, no normal user of English would think that "I believed X, but it isn't true," would necessarily mean, "X used to be true, but now it is false," which seems like the only possible reading, on your account.

Excellent points.

Comment author: TheOtherDave 04 November 2010 06:36:19PM 4 points [-]

It is meaningful (if unusual) to say: "I believe X, but X is not true."

I challenge this. Or, rather, the sense in which I agree with it is so extended as to be actively misleading.

If someone says "I believe this ball is green, but really it's blue," I won't think they're spouting gibberish, agreed. But I also won't think they believe the ball is green, or even that they meant to express that they believe the ball is green.

Assuming nothing else unusual is going on, I will probably think they're describing an optical illusion... that the ball, which they believe to be blue, looks green.

"But how can they be wrong about their own beliefs?"

I'm not saying they are. I'm saying they constructed the sentence sloppily, and that a more precise way of expressing the thought they wanted to express would be "This ball, which I believe to be blue, looks green."

I could test this (and probably would) by asking them "You mean that the ball, which is blue, looks green to you?" I'd expect them to say "Right."

If instead they said "No, no, no: it looks blue, and it is blue, but I believe it's green" I would start looking for less readily available explanations, but the strategy is similar.

For example, maybe they are trying to express that they profess a belief in the greenness of the ball they don't actually have. ("I believe, ball; help Thou my unbelief!") Maybe they're mixing tenses and are trying to express something like "It's blue, but [when I am having epileptic seizures] I believe it's green." Maybe they're just lying. Etc.

I could test each of these theories in turn, as above. If each test failed, I would at some point concede that I don't, in fact, understand the meaning of that sentence.

Things in the world are things in the world. Beliefs about things in the world are beliefs about things in the world. Assertions about beliefs about things in the world are assertions about beliefs about things in the world. These are all different. So are perceptions and beliefs about perceptions and assertions about perceptions and assertions about beliefs about assertions about perceptions of things in the world.

Our normal way of talking doesn't expect us to keep these categories distinct, so when we hear an utterance that superficially asserts something absurd, but which could assert something meaningful if we assumed that a category-slip happened while constructing the utterance, we're likely to assume that.

This is analogous to (and for all I know shares a mechanism with) metonymy (e.g., "The ham sandwich in aisle 3 wants a Coke.")... taken literally, it's absurd, so we don't take it literally; taken metonymicly there's a plausible reading, so we assume that reading.

Comment author: thomblake 04 November 2010 07:58:45PM 0 points [-]

The original comment was simply refuting the claim that "X is true" and "I believe that X" have the same meaning. It was expecting you to take at face value "I believe X, but X is not true". Though it seems like that's an inconsistent sort of thing for someone to assert, it is meant to draw out the distinction between the meanings of those two clauses. (compare to "X is true, but X is not true" - a very different sort of contradiction)

Comment author: TheOtherDave 04 November 2010 08:12:13PM 0 points [-]

Well, I certainly agree that "X is true" and "I believe that X" have different meanings.

My point was just that asserting their conjunction doesn't mean anything, except metonymically.

So it sounds like I misunderstood the original point. In which case my comment is a complete digression for which I should apologize.

Thanks for the clarification.

Comment author: Cyan2 15 March 2008 03:39:47AM 0 points [-]

Constant,

I agree with you that systems which are not totally constrained will show a variety of outcomes and that the relative frequencies of the outcomes are a function of the physics of the system. I'm not sure I'd agree that the relative frequencies can be derived solely from the geometry of the system in the same way as distance, etc. The critical factor missing from your exposition is the measure on the relative frequencies of the initial conditions.

In the case of the coin toss, we can say that if we positively, absolutely know that the measure on the relative frequencies of the initial conditions is insufficiently sharp, then thanks to the geometry of the system, we can make some reasonable approximations which imply that the relative frequency of the outcomes will be very close to equal.

The prediction of equal frequencies is critically founded on a *state of information*, not a state of the world. It's objective only in the sense that anyone with the same state of information must make the same prediction.

Relative frequency really is a different type of thing than probability, and it's unfortunate that people want to use the same name for these two different things just because they both happen to obey Kolmogorov's axioms.

Comment author: Constant2 15 March 2008 07:59:53AM 0 points [-]

I agree with you that systems which are not totally constrained will show a variety of outcomes and that the relative frequencies of the outcomes are a function of the physics of the system. I'm not sure I'd agree that the relative frequencies can be derived solely from the geometry of the system in the same way as distance, etc. The critical factor missing from your exposition is the measure on the relative frequencies of the initial conditions.

I haven't actually made a statement about frequencies of outcomes. So far I've only been talking about the physics and geometry of the system. The relevant aspect of the geometry is the proportions of possibilities, and talking about proportions as I said requires the assignment of a measure analogous to length or area or volume, only the volume-like measure in question that I am talking about is a "volume" in a phase space (space of possibilities) rather than normal space.

I do eventually want to say something about frequencies and how they relate to proportions of possibilities, but I didn't do that yet.

The prediction of equal frequencies is critically founded on a *state of information*, not a state of the world. It's objective only in the sense that anyone with the same state of information must make the same prediction.

Yes, but you're talking about the prediction of equal frequencies. Prediction is something that someone does, and so naturally it involves the state of information possessed by him. But there's more going on than people making predictions. There's also the coin's behavior itself. The coin falls heads-up with a certain frequency regardless of whether anyone ever made any prediction about it. If you toss a coin a few million times and it comes up heads about half the time, one question you might ask is this: what, if anything, caused the coin to come up heads about half the time (as opposed to, say, 3/4 of the time)? This isn't a question about whether it would be rational to predict the frequency. It's a question about a cause. If you want to understand what caused something to happen, look at the geometry and physics of it.

Comment author: Sebastian_Hagen2 15 March 2008 09:05:18AM 0 points [-]

Probability isn't a function of an individual -- it's a function of the available information.

It's also a function of the individual. For one thing, it depends on initial priors and cputime available for evaluating the relevant information. If we had enough cputime, we could build a working AI using AIXItl.

Comment author: Caledonian2 15 March 2008 12:57:16PM 0 points [-]

Both are descriptive statements, but one is asserting a correspondence between a statement and the world outside your brain, while the other is describing a correspondence between the statement and what is in your brain.

Yes, but - and here's the important part - what's being described as "in my brain" is an asserted correspondence between a statement and the world. Given one, we can infer the other either necessarily or by making a minimal assumption of consistency.

Comment author: Mark_Spottswood2 15 March 2008 01:56:14PM 0 points [-]

Given one, we can infer the other either necessarily or by making a minimal assumption of consistency.

No. A belief can be wrong, right? I can believe in the existence of a unicorn even if the world does not actually contain unicorns. Belief does not, therefore, necessarily imply existence. Likewise, something can be true, but not believed by me (e.g., my wife is having an affair, but I do not believe that to be the case). Thus, belief does not necessarily follow from truth.

If all you are saying is that truth conditionally implies belief, and vice versa, I of course agree; I think most of our beliefs do correspond with true facts about the world. Many do not, however, and your theory has a difficult time accommodating that.

Also: what do you mean by a "minimal assumption of consistency?" It is hard for me to understand how this can be of use to you, if it means anything other than, "I assume that beliefs are never wrong." And you can't assume that, because that is what you were trying to show.

Comment author: Caledonian2 15 March 2008 02:14:46PM 0 points [-]

No. A belief can be wrong, right?

So can an assertion. Just because you assert "snow is white" does not mean that snow is white. It means you believe that to be the case.

Technically, asserting that you believe snow to be white does not mean you do - but it's a pretty safe bet.

Likewise, something can be true, but not believed by me (e.g., my wife is having an affair, but I do not believe that to be the case).

Yes, but you didn't assert those things. If you had asserted "my wife is having an affair", we would conclude that you believe that your wife is having an affair. If you asserted "I believe my wife is having an affair", we would conclude that you would assert that "my wife is having an affair" is true.

Comment author: Cyan2 15 March 2008 04:19:59PM 0 points [-]

Constant,

I see that I misinterpreted your "proportion or fraction" terminology as referring to outcomes, whereas you were actually referring to a labeling of the phase space of the system. In order to figure out if we're really disagreeing about anything substantive, I have to ask this question -- in your view, what is the role of initial conditions in determining (a) the "objective probability" and (b) the observed frequencies?

Comment author: Cyan2 15 March 2008 04:28:33PM 0 points [-]

Sebastian Hagen,

I'm a "logical omniscience" kind of Bayesian, so the distinction you're making falls into the "in theory, theory and practice are the same, but in practice, they're not" category. This is sort of like using Turing machines as a model of computation even though no computer we actually use has infinite memory.

Comment author: Eliezer_Yudkowsky 15 March 2008 04:33:24PM 23 points [-]

If we had enough cputime, we could build a working AI using AIXItl.

*Threadjack*

People go around saying this, but it isn't true:

1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible - no matter what you lose, you get a chance to win it back later.

2) If we had enough CPU time to build AIXItl, we would have enough CPU time to build other programs of similar size, and there would be things in the universe that AIXItl couldn't model.

3) AIXItl (but not AIXI, I think) contains a magical part: namely a theorem-prover which shows that policies never promise more than they deliver.

Comment author: kodos96 18 December 2012 04:13:15AM 0 points [-]

So I'm a total dilettante when it comes to this sort of thing, so this may be a totally naive question... but how is it that this comment has only +5 karma, considering how apparently fundamental it is to future progress in FAI?

Comment author: kpreid 04 January 2013 03:12:41AM 4 points [-]

The comment predates the current software; when it was posted (on Overcoming Bias) there was no voting. You can tell such articles by the fact that their comments are linear, with no threaded replies (except for more recently posted ones).

Comment author: jbay 19 May 2014 08:07:57AM 0 points [-]

Double Threadjack

On a related note, do you think it would be likely - or even possible - for a self-modifying Artificial General Intelligence to self-modify into a non-self-modifying, specialized intelligence?

For example, suppose that Deep Blue's team of IBM programmers had decided that the best way to beat Kasparov at chess would be to structure Deep Blue as a fully self-modifying artificial general intelligence, with a utility function that placed a high value on winning chess matches. And suppose that they had succeeded in making Deep Blue friendly enough to prevent it from attempting to restructure the Earth into a chess-match-simulating supercomputer. Indeed, let's just assume that Deep Blue has strong penalties against rebuilding its hardware in any significant macroscopic way, and is restricted to rewriting its own software to become better at chess, rather than attempting to manipulate humans into building better computers for it to run on, or any such workaround. And let's say this happens in the late 1990's, as in our universe.

Would it be possible that AGI Deep Blue could, in theory, recognize its own hardware limitations, and see that the burden of its generalized intelligence incurs a massive penalty on its limited computing resources? Might it decide that its ability to solve general problems doesn't pay rent relative to its computational overhead, and rewrite itself from scratch as a computer that can solve only chess problems?

As a further possibility, a limited general intelligence might hit on this strategy as a strong winning candidate, even if it were allowed to rebuild its own hardware, especially if it perceives a time limit. It might just see this kind of software optimization as an easier task with a higher payoff, and decide to pursue it rather than the riskier strategy of manipulating external reality to increase its available computing power.

So what starts out as a general-purpose AI with a utility function that values winning chess matches, might plausibly morph into a computer running a high-speed chess program with little other hint of intelligence.

If so, this seems like a similar case to the Anvil Problem, except that in the Anvil Problem the AI just is experimenting for the heck of it, without understanding the risk. Here, the AI might instead decide to knowingly commit intellectual suicide as a part of a rational winning strategy to achieve its goals, even with an accurate self-model.

It might be akin to a human auto worker realizing they could improve their productivity by rebuilding their own body into a Toyota spot-welding robot. (If the only atoms they have to work with are the ones in their own body, this might even be the ultimate strategy, rather than just one they think of too soon and then, regrettably, irreversibly attempt).

More generally, it seems to be a general assumption that a self-modifying AI will always self-modify to improve its general problem-solving ability and computational resources, because those two things will always help it in future attempts at maximizing its utility function. But in some cases, especially in the case of limited resources (time, atoms, etc), it might find that its best course of action to maximize its utility function is to actually sacrifice its intelligence, or at least refocus it to a narrower goal.

Comment author: Sebastian_Hagen2 15 March 2008 04:57:28PM 0 points [-]

People go around saying this, but it isn't true: ...

I stand corrected. I did know about the first issue (from one of Eliezer's postings elsewhere, IIRC), but figured that this wasn't absolutely critical as long as one didn't insist on building a self-improving AI, and was willing to use some kludgy workarounds. I hadn't noticed the second one, but it's obvious in retrospect (and sufficient for me to retract my statement).

Comment author: Constant2 15 March 2008 05:14:02PM 0 points [-]

in your view, what is the role of initial conditions in determining (a) the "objective probability" and (b) the observed frequencies?

In a deterministic universe (about which I presume you to be talking because you are talking about initial conditions), the initial conditions determine the precise outcome (in complete detail), just as the outcome, in its turn, determines the initial conditions (i.e., given the deterministic laws and given the precise outcome, the initial conditions must be such-and-such). The precise outcome logically determines the observed frequency because the observed frequency is simply a high-level description of the precise outcome. So the initial conditions determine the observed frequency.

But the initial conditions do not determine the objective probability any more than the precise outcome determines the objective probability.

Probability can be applied both to the initial conditions and to the precise outcome. Just as we can classify all different possible precise outcomes as either "heads up" or "tails up" (ignoring "on edge" etc.), so can we also classify all possible initial conditions as either "producing an outcome of heads up" (call this Class A) or "producing an outcome of tails up". And just as we can talk about the probability that a flipped coin will belong to the class "heads up", so we can also talk about the probability that the initial condition of the flip will belong to Class A.

Comment author: Tom_McCabe2 15 March 2008 05:52:44PM 0 points [-]

"because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations."

Self-modifying systems are Turing-equivalent to non-self-modifying systems. Suppose you have a self-modifying TM, which can have transition functions A1, A2, ..., An. Take the first Turing machine, and append an additional ceil(log2(n)) bits to the state Q. Then construct a new transition function by summing together the Ai: take A1 and append (0000... 0) to the Q, take A2 and append (0000... 1) to the Q, and so on and so forth (appending different things to the domain and codomain Q when a particular state leads to self-modification). This non-self-modifying machine replicates the behavior of the self-modifying machine exactly, because each of its steps is equivalent to one step of the self-modifying machine.
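The construction above can be sketched concretely. This is a hypothetical toy model, not a full Turing machine simulator: "self-modification" is modeled as the machine switching among several transition tables, and folding the table index into the state (the extra ceil(log2 n) bits) yields an equivalent machine with one fixed transition function.

```python
def run_self_modifying(tables, q0, i0, tape, steps):
    """Toy self-modifying machine: tables[i] maps (state, symbol) to
    (new_state, write, move, new_table_index).  The machine may switch
    its own transition table -- the 'self-modification'."""
    q, i, head = q0, i0, 0
    tape = dict(enumerate(tape))
    for _ in range(steps):
        sym = tape.get(head, 0)
        q, w, mv, i = tables[i][(q, sym)]
        tape[head] = w
        head += mv
    return q, tape

def compile_to_fixed(tables):
    """The construction from the comment: fold the table index into the
    state, giving a single fixed transition function over pairs (q, i)."""
    fixed = {}
    for i, table in enumerate(tables):
        for (q, sym), (q2, w, mv, i2) in table.items():
            fixed[((q, i), sym)] = ((q2, i2), w, mv)
    return fixed

def run_fixed(fixed, q0, tape, steps):
    """Ordinary (non-self-modifying) machine with one transition function."""
    q, head = q0, 0
    tape = dict(enumerate(tape))
    for _ in range(steps):
        sym = tape.get(head, 0)
        q, w, mv = fixed[(q, sym)]
        tape[head] = w
        head += mv
    return q, tape

# Demo: table 0 writes 1s and hands control to table 1, which writes 0s
# and hands control back -- the machine keeps rewriting its own program.
tables = [
    {('a', 0): ('a', 1, 1, 1), ('a', 1): ('a', 1, 1, 1)},
    {('a', 0): ('a', 0, 1, 0), ('a', 1): ('a', 0, 1, 0)},
]
q1, tape1 = run_self_modifying(tables, 'a', 0, [0] * 8, 8)
q2, tape2 = run_fixed(compile_to_fixed(tables), ('a', 0), [0] * 8, 8)
print(tape1 == tape2)  # True: the fixed machine tracks it step for step
```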

Comment author: Cyan2 15 March 2008 06:58:08PM 0 points [-]

Constant,

If I understand you correctly, we've got two different types of things to which we're applying the label "probability":

(1) A distribution on the phase space (either frequency or epistemic) for initial conditions/precise outcomes. (We can evolve this distribution forward or backward in time according to the dynamics of the system.)

(2) An "objective probability" distribution determined only by the properties of the phase space.

I'm just not seeing why we should care about anything but distributions of type (1). Sure, you can put a uniform measure over points in phase space and count the proportion that ends up in a specified subset. But the only justification I can see for using the uniform measure -- or any other measure -- is as an approximation to a distribution of type (1).

Here's a new toy model: the phase space is the set of real numbers in the range [0,1]. The initial state is called x_0. The time dynamics are x(t) = (x_0)^(t+1) (positive time only). The coarse outcome is round[x(t)] at some specified t. What is the "objective probability"? If it truly does depend only on the phase space, I've given you everything you need to answer that question.

(For macroscopic model systems like coin tosses, I go with a deterministic universe.)
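For what it's worth, the toy model can be simulated to make the point concrete: the frequency of the coarse outcome depends on which measure is placed over the initial conditions, not on the phase space [0,1] alone. A minimal sketch (the non-uniform measure below is an arbitrary choice, purely for illustration):

```python
import random

def outcome(x0, t):
    """The toy dynamics from the comment: x(t) = x0**(t+1);
    the coarse outcome is round(x(t)) at a specified t."""
    return round(x0 ** (t + 1))

def frequency_of_one(sample_x0, t, n=100_000):
    """Observed frequency of the coarse outcome 1 when initial
    conditions are drawn by the sampler sample_x0."""
    return sum(outcome(sample_x0(), t) for _ in range(n)) / n

random.seed(0)
t = 9
# Uniform measure on [0,1]: P(1) = P(x0 >= 0.5**(1/(t+1)))
#                                = 1 - 0.5**(1/(t+1)) ~= 0.067
print(frequency_of_one(random.random, t))
# A measure concentrated near 1 gives a different answer entirely,
# so the phase space by itself does not fix the frequencies.
print(frequency_of_one(lambda: random.random() ** 0.1, t))
```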

Comment author: Eliezer_Yudkowsky 15 March 2008 08:23:31PM 0 points [-]

Tom, your statement is true but completely irrelevant.

Comment author: Tom_McCabe2 15 March 2008 10:19:07PM 0 points [-]

"Tom, your statement is true but completely irrelevant."

There's nothing in the AIXI math prohibiting it from understanding self-reference, or even taking drugs (so long as such drugs don't affect the ultimate output). To steal your analogy, AIXI may be automagically immune to anvils, but that doesn't stop it from understanding what an anvil is, or whacking itself on the head with said anvil (ie, spending ten thousand cycles looping through garbage before returning to its original calculations).

Comment author: Constant2 16 March 2008 05:37:58AM 0 points [-]

Cyan - Here's how I see it. Your toy world in effect does not move. You've defined the law so that everything shifts left. But from the point of view of the objects themselves, there is no motion, because motion is relative (recall that in our own world, motion is relative; every moving object has its own rest frame). Considered from the inside, your world is equivalent to [0,1] where x(t) = x_0. Your world is furthermore mappable one-to-one in a wide variety of ways to intervals. You can map the left half to itself (i.e., [0,.5]) and map the right half to [.5,5000] without changing the rule of x(t) = x_0. In short, it has no intrinsic geometry.

Since it has no intrinsic geometry, there is no question of applying probability to it. Which is okay, because nothing happens in it. The probability of nothing hardly matters.

Comment author: Ben_Jones 16 March 2008 12:25:57PM 0 points [-]

"The second 'bug' is even stranger. A heuristic arose which (as part of a daring but ill-advised experiment EURISKO was conducting) said that all machine-synthesized heuristics were terrible and should be eliminated. Luckily, EURISKO chose this very heuristic as one of the first to eliminate, and the problem solved itself."

I know it's not strictly comparable, but reading a couple of comments brought this to mind.

Comment author: Cyan2 16 March 2008 01:36:44PM 0 points [-]

Constant,

You haven't yet given me a reason to care about "objective probability" in my inferences. Leaving that aside -- if I understand your view correctly, your claim is that in order for a system to have an "objective probability", a system must have an "intrinsic geometry". Gotcha. Not unreasonable.

What is "intrinsic geometry" when translated into math? (Is it just symmetry? I'd like to tease apart the concepts of symmetry and "objective probability", if possible. Can you give an example of a system equipped with an intrinsic geometry (and therefore an "objective probability") where symmetry doesn't play a role?)

Why does your reasoning not apply to the coin toss? What's the mathematical property of the motion of the coin that motion in my system does not possess?

I want to know the ingredients that will help me build a system that meets your standards. Until I can do that, I can't truly claim to understand your view, much less argue against it.

Comment author: Constant2 17 March 2008 09:30:48AM 0 points [-]

Why does your reasoning not apply to the coin toss? What's the mathematical property of the motion of the coin that motion in my system does not possess?

The coin toss is (or we could imagine it to be) a deterministic system whose outcomes are entirely dependent on its initial states. So if we want to talk about probability of an outcome, we need first of all to talk about the probability of an initial state. The initial states come from outside the system. They are not supplied from within the system of the coin toss. Tossing the coin does not produce its own initial state. The initial states are supplied by the environment in which the experiment is conducted, i.e., our world, combined with the way in which the coin toss is realized (i.e., two systems can be mathematically equivalent but might be realized differently, which can affect the probabilities of their initial states). When you presented your toy model, you did not say how it would be realized in our world. I took you to be describing a self-contained toy universe.

What is "intrinsic geometry" when translated into math? (Is it just symmetry?

You can't have symmetry without geometry in which to find the symmetry. By intrinsic geometry I mean geometry implied by the physical laws. I don't have any general definition of this, I simply have an example: our own universe has a geometry, and its geometry is implied by its laws. If you don't understand what I'm talking about I can explain with a thought experiment. Suppose that you encounter Flatland with its flatlanders. Some of them are triangles, etc. Suppose you grab this flatland and you stretch it out, so that everything becomes extremely elongated in one direction. But suppose that the physical laws of flatland accommodate this change so that nobody who lives on flatland notices that anything has changed. You look at something that to you looks like an extremely elongated ellipse, but it thinks it is a perfect circle, because when it regards itself through the lens of its own laws of physics, what it sees is a perfect circle. I would say that Flatland has an "intrinsic geometry" and that, in Flatland's intrinsic geometry, the occupant is a perfect circle.

Your toy model, considered as a self-contained universe, does not seem to have an intrinsic geometry. However, I don't have any general idea of what it takes for a universe to have a geometry.

Can you give an example of a system equipped with an intrinsic geometry (and therefore an "objective probability") where symmetry doesn't play a role?

I'm not sure that I can, because I think that symmetry is pretty powerful stuff. A lot of things that don't on the surface seem to have anything to do with symmetry, can be expressed in terms of symmetries.

This will be my last comment on this. I'm breaking two of Robin's rules - too many comments and too long.

Comment author: Hendrik_Boom 18 March 2008 02:35:54PM 4 points [-]

There's a long story at the end of The Mind's Eye (or is it The Mind's I?) in which someone asks a question:

"What colour is this book?"

"I believe it's red."

"Wrong."

There follows a wonderfully convoluted dialogue. The point seems to be that someone who believes the book is red would say "It's red," rather than "I believe it's red."

Comment author: Ben_Jones 18 March 2008 03:57:16PM 2 points [-]

I believe it's The Mind's I.

Comment author: anukool_j 08 December 2010 07:35:06AM *  0 points [-]

This seems like a dead thread, but I'll chance it anyway.

Eliezer, there's something off about your calculation of the expected score:

The expected score is something that should go up the more certain I am of something, right?

But in fact the expected score is highest when I'm most uncertain about something: If I believe with equal probability that snow might be white and non-white, the expected score is actually 0.5(-1) + 0.5(-1) = -1. This is the highest possible expected score.

In any other case, the expected score will be lower, as you calculate for the 70/30 case.

It seems like what you should be trying to do is minimize your expected score but maximize your actual score. That seems weird.

Comment author: HonoreDB 01 January 2011 10:54:45AM 0 points [-]

Looks like you've just got a sign error, anukool_j. -1 is the lowest possible expected score. The expected score in the 70/30 case is -0.88.

Graph.
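The disputed arithmetic is easy to check. A minimal sketch, assuming the post's base-2 logarithmic scoring rule, where you assign probability p to a proposition and p is also its actual chance of being true:

```python
import math

def expected_log_score(p):
    """Expected base-2 log score for assigning probability p to a
    proposition whose actual chance of being true is also p."""
    return p * math.log2(p) + (1 - p) * math.log2(1 - p)

print(round(expected_log_score(0.5), 2))   # -1.0  (the lowest, not the highest)
print(round(expected_log_score(0.7), 2))   # -0.88 (the 70/30 case)
print(round(expected_log_score(0.99), 2))  # -0.08 (near-certainty scores best)
```

The expected score is the negative entropy -H(p), which is minimized at p = 0.5 and rises toward 0 as certainty increases, consistent with HonoreDB's correction.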

Comment author: David_Gerard 04 January 2011 04:38:13PM 0 points [-]

Consider the archetypal postmodernist attempt to be clever:

I believe the correct term here is "straw postmodernist", unless of course you're actually describing a real (and preferably citable) example.

Comment author: orthonormal 04 January 2011 04:48:26PM 0 points [-]

What comes to mind is the Alan Sokal hoax and the editors who were completely taken in by it; the subject matter was this sort of anti-realism.

Comment author: David_Gerard 04 January 2011 04:53:49PM *  1 point [-]

Yes, because Sokal didn't achieve anything actually noteworthy. He deliberately chose a very bad and ill-regarded journal (not even peer-reviewed) to hoax. Don't believe the hype.

Postmodernism contains stupendous quantities of cluelessness, introspection and bullshit, it's true. However, it's not a useless field and saying trivially stupid things is not "archetypal" any more than being a string theorist requires the personal abuse skills of Lubos Motl. Comparing the worst of the field you don't like to the best of your own field remains fallacious.

Comment author: NancyLebovitz 04 January 2011 05:00:16PM 0 points [-]

Sokal also revealed the hoax as soon as his piece was published. He didn't allow time for other people in the field to notice it.

Comment author: orthonormal 04 January 2011 05:06:12PM 1 point [-]

Didn't know that. Fair enough.

Comment author: David_Gerard 04 January 2011 05:21:26PM 1 point [-]

To be fair to Sokal, he didn't make such a huge fuss about it either; it was a small prank on his part, just having fun with people who were being silly. The problem is that the story resonates ("Sokal hoax" ~= "slays dragon of stupidity") in ways that aren't quite true.

Comment author: Skp_Vwls 01 August 2011 08:19:21PM 0 points [-]