It's not too uncommon for people to describe themselves as uncertain about their beliefs. "I'm not sure what I think about that," they will say on some issue. I wonder if they really mean that they don't know what they think, or if they mean that they do know what they think, and their thinking is that they are uncertain where the truth lies on the issue in question. Are there cases where people can be genuinely uncertain about their own beliefs?
Well, Common Sense Atheism is a resource by a respected member here who documented his extensive investigations into theology, philosophy and so on, which he started as a devout Christian and finished as an atheist.
Unequally Yoked is a blog coming from the opposite end, someone familiar with the language of rationality who started out as an atheist and ended up as a theist.
I don't actually know where Leah (the author of the latter) archives her writings on the process of her conversion; I've really only read Yvain's commentary on them. But she's a member here, and she's the only person I can think of writing from the convert angle whom I haven't read and written off for bad reasoning.
By the time I encountered either person's writings, I'd already hashed out the issue to my own satisfaction over a matter of years, and wasn't really looking for more resources. So to the extent that I can vouch for them, it's on the basis of their writings here rather than at their own sites, and that record is rather more extensive for Luke than for Leah.
However, I will attest that my own experience of researching and developing my opinion on religion was as much shaped by reading up on many world religions as it...
If a coin has certain gross physical features such that a rational agent who knows those features (but NOT any details about how the coin is thrown) is forced to assign a probability p to the coin landing on "heads", then it seems reasonable to me to speak of discovering an "objective chance" or "propensity" or whatever. These would be "emergent" in the non-buzzword sense. For example, if a coin has two heads, then I don't see how it's problematic to say the objective chance of heads is 1.
If a coin has certain gross physical features such that a rational agent who knows those features (but NOT any details about how the coin is thrown) is forced to assign a probability p to the coin landing on "heads", then it seems reasonable to me to speak of discovering an "objective chance" or "propensity" or whatever.
You're saying "objective chance" or "propensity" depends on the information available to the rational agent. My understanding is that the "objective" qualifier usually denotes a...
You're saying "objective chance" or "propensity" depends on the information available to the rational agent.
Apparently he is, but it can be rephrased. "What information is available to the rational agent" can be rephrased as "what is constrained". In the particular example, we constrain the shape of the coin but not ways of throwing it. We can replace "probability" with "proportion" or "fraction". Thus, instead of asking, "what is the probability of the coin coming up heads", w...
[nitpick] That is to say, just as there is an objective (and not merely subjective) sense in which two rods can have the same length
Well, there are the effects of relativity to keep in mind, but if we specify an inertial frame of reference in advance and the rods aren't accelerating, we should be able to avoid those. ;) [/nitpick]
I'm joking, of course; I know what you meant.
No, these sentences mean quite different things, which is how I can conceive of the possibility that my beliefs are false.
No, on both counts. The sentences do not mean quite different things, and that is not how you conceive of the possibility that your beliefs are false.
One is a statement of belief, and one is a meta-statement of belief. Except for one level of self-reference, they have exactly the same meaning. Given the statement, anyone can generate the meta-statement if they assume you're consistent, and given the meta-statement, the statement necessarily follows.
Caledonian: The statement "X is true" could be properly reworded as "X corresponds with the world." The statement "I believe X" can be properly reworded as "X corresponds with my mental state." Both are descriptive statements, but one is asserting a correspondence between a statement and the world outside your brain, while the other is describing a correspondence between the statement and what is in your brain.
There will be a great degree of overlap between these two correspondence relations. Most of our beliefs, ...
Constant,
I agree with you that systems which are not totally constrained will show a variety of outcomes and that the relative frequencies of the outcomes are a function of the physics of the system. I'm not sure I'd agree that the relative frequencies can be derived solely from the geometry of the system in the same way as distance, etc. The critical factor missing from your exposition is the measure on the relative frequencies of the initial conditions.
In the case of the coin toss, we can say that if we positively, absolutely know that the measure on the...
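The comment is cut off here, but the point it raises can be checked with a small simulation: with fully deterministic dynamics, the observed frequency of heads is fixed only once a measure over initial conditions is also fixed. This is a minimal sketch in Python; the toy outcome rule and both distributions over initial conditions are invented purely for illustration.

```python
import numpy as np

# Toy deterministic "coin toss": the outcome is completely determined by
# the initial condition (a single spin parameter s). Heads iff the whole
# number of half-turns is even. The rule is invented purely for illustration.
def outcome(s):
    return "heads" if int(s) % 2 == 0 else "tails"

rng = np.random.default_rng(0)

# Two different measures over the initial conditions.
spins_uniform = rng.uniform(0, 1000, size=100_000)   # broad, symmetric measure
spins_skewed = rng.exponential(2.0, size=100_000)    # measure concentrated near zero

for name, spins in [("uniform", spins_uniform), ("skewed", spins_skewed)]:
    freq = np.mean([outcome(s) == "heads" for s in spins])
    print(f"{name:8s} measure -> observed frequency of heads: {freq:.3f}")
```

The outcome function (the "physics") is identical in both runs; only the measure over initial conditions differs, and the observed frequency of heads changes accordingly.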
I agree with you that systems which are not totally constrained will show a variety of outcomes and that the relative frequencies of the outcomes are a function of the physics of the system. I'm not sure I'd agree that the relative frequencies can be derived solely from the geometry of the system in the same way as distance, etc. The critical factor missing from your exposition is the measure on the relative frequencies of the initial conditions.
I haven't actually made a statement about frequencies of outcomes. So far I've only been talking about the physi...
Probability isn't a function of an individual -- it's a function of the available information.
It's also a function of the individual. For one thing, it depends on initial priors and cputime available for evaluating the relevant information. If we had enough cputime, we could build a working AI using AIXItl.
Both are descriptive statements, but one is asserting a correspondence between a statement and the world outside your brain, while the other is describing a correspondence between the statement and what is in your brain.
Yes, but - and here's the important part - what's being described as "in my brain" is an asserted correspondence between a statement and the world. Given one, we can infer the other either necessarily or by making a minimal assumption of consistency.
Given one, we can infer the other either necessarily or by making a minimal assumption of consistency.
No. A belief can be wrong, right? I can believe in the existence of a unicorn even if the world does not actually contain unicorns. Belief does not, therefore, necessarily imply existence. Likewise, something can be true, but not believed by me (e.g., my wife is having an affair, but I do not believe that to be the case). Thus, belief does not necessarily follow from truth.
If all you are saying is that truth conditionally implies belief, and vice vers...
No. A belief can be wrong, right?
So can an assertion. Just because you assert "snow is white" does not mean that snow is white. It means you believe that to be the case. Technically, asserting that you believe snow to be white does not mean you do - but it's a pretty safe bet.
Likewise, something can be true, but not believed by me (e.g., my wife is having an affair, but I do not believe that to be the case).
Yes, but you didn't assert those things. If you had asserted "my wife is having an affair", we would conclude that you b...
Constant,
I see that I misinterpreted your "proportion or fraction" terminology as referring to outcomes, whereas you were actually referring to a labeling of the phase space of the system. In order to figure out if we're really disagreeing about anything substantive, I have to ask this question -- in your view, what is the role of initial conditions in determining (a) the "objective probability" and (b) the observed frequencies?
Sebastian Hagen,
I'm a "logical omniscience" kind of Bayesian, so the distinction you're making falls into the "in theory, theory and and practice are the same, but in practice, they're not" category. This is sort of like using Turing machines as a model of computation even though no computer we actually use has infinite memory.
If we had enough cputime, we could build a working AI using AIXItl.
Threadjack
People go around saying this, but it isn't true:
1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible - no matter what you lose, you get a chance to win it back later.
2) If we had enough CPU time to build AIXItl, we would have enough CPU time to build other programs of similar size, and there would be things in the universe that AIXItl couldn't model.
3) AIXItl (but not AIXI, I think) contains a magical part: namely a theorem-prover which shows that policies never promise more than they deliver.
People go around saying this, but it isn't true: ...
I stand corrected. I did know about the first issue (from one of Eliezer's postings elsewhere, IIRC), but figured that this wasn't absolutely critical as long as one didn't insist on building a self-improving AI, and was willing to use some kludgy workarounds. I hadn't noticed the second one, but it's obvious in retrospect (and sufficient for me to retract my statement).
in your view, what is the role of initial conditions in determining (a) the "objective probability" and (b) the observed frequencies?
In a deterministic universe (about which I presume you to be talking because you are talking about initial conditions), the initial conditions determine the precise outcome (in complete detail), just as the outcome, in its turn, determines the initial conditions (i.e., given the deterministic laws and given the precise outcome, the initial conditions must be such-and-such). The precise outcome logically determines t...
"because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations."
Self-modifying systems are Turing-equivalent to non-self-modifying systems. Suppose you have a self-modifying TM, which can have transition functions A1,A2,...An. Take the first Turing machine, and append an additional ceil(log(n)) bits to the state Q. Then construct a new transition function by summing together the Ai: take A1 and append (0000... 0) to the Q, take A2 and append (0000... 1) ...
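For concreteness, here is a minimal sketch of that construction (the rules, names, and tape below are invented, not taken from the comment): the "self-modifying" machine's ability to switch among transition tables is folded into an ordinary machine whose state carries an extra ceil(log2(n))-bit index selecting the active table.

```python
from math import ceil, log2

# Toy illustration: a "self-modifying" machine that can switch among
# transition tables A[0..n-1] is simulated by an ordinary machine whose
# state carries an extra index -- ceil(log2(n)) additional bits -- saying
# which table is currently active.
#
# Each rule maps (state, symbol) -> (new_state, new_symbol, head_move, next_table).
# "Self-modification" is just the next_table field changing the index.
A = [
    {("q0", 0): ("q0", 1, +1, 1),     # table 0: write 1, move right, switch to table 1
     ("q0", 1): ("halt", 1, 0, 0)},
    {("q0", 0): ("q0", 0, +1, 0),     # table 1: leave 0, move right, switch back to table 0
     ("q0", 1): ("halt", 1, 0, 1)},
]

print("extra state bits needed to index the tables:", ceil(log2(len(A))))

def run(tape, max_steps=100):
    state, table, head = "q0", 0, 0   # combined state = (original state, table index)
    for _ in range(max_steps):
        if state == "halt" or not (0 <= head < len(tape)):
            break
        state, tape[head], move, table = A[table][(state, tape[head])]
        head += move
    return tape

print(run([0, 0, 0, 0, 1]))   # alternating writes until the 1 is reached
```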
Constant,
If I understand you correctly, we've got two different types of things to which we're applying the label "probability":
(1) A distribution on the phase space (either frequency or epistemic) for initial conditions/precise outcomes. (We can evolve this distribution forward or backward in time according to the dynamics of the system.)
(2) An "objective probability" distribution determined only by the properties of the phase space.
I'm just not seeing why we should care about anything but distributions of type (1). Sure, you can put a u...
"Tom, your statement is true but completely irrelevant."
There's nothing in the AIXI math prohibiting it from understanding self-reference, or even taking drugs (so long as such drugs don't affect the ultimate output). To steal your analogy, AIXI may be automagically immune to anvils, but that doesn't stop it from understanding what an anvil is, or whacking itself on the head with said anvil (i.e., spending ten thousand cycles looping through garbage before returning to its original calculations).
Cyan - Here's how I see it. Your toy world in effect does not move. You've defined the law so that everything shifts left. But from the point of view of the objects themselves, there is no motion, because motion is relative (recall that in our own world, motion is relative; every moving object has its own rest frame). Considered from the inside, your world is equivalent to [0,1] where x(t) = x_0. Your world is furthermore mappable one-to-one in a wide variety of ways to intervals. You can map the left half to itself (i.e., [0,.5]) and map the right half t...
"The second 'bug' is even stranger. A heuristic arose which (as part of a daring but ill-advised experiment EURISKO was conducting) said that all machine-synthesized heuristics were terrible and should be eliminated. Luckily, EURISKO chose this very heuristic as one of the first to eliminate, and the problem solved itself."
I know it's not strictly comparable, but reading a couple of comments brought this to mind.
Constant,
You haven't yet given me a reason to care about "objective probability" in my inferences. Leaving that aside -- if I understand your view correctly, your claim is that in order for a system to have an "objective probability", it must have an "intrinsic geometry". Gotcha. Not unreasonable.
What is "intrinsic geometry" when translated into math? (Is it just symmetry? I'd like to tease apart the concepts of symmetry and "objective probability", if possible. Can you give an example of a system equ...
Why does your reasoning not apply to the coin toss? What's the mathematical property of the motion of the coin that motion in my system does not possess?
The coin toss is (or we could imagine it to be) a deterministic system whose outcomes are entirely dependent on its initial states. So if we want to talk about probability of an outcome, we need first of all to talk about the probability of an initial state. The initial states come from outside the system. They are not supplied from within the system of the coin toss. Tossing the coin does not produce its ...
There's a long story at the end of The Mind's Eye (or is it The Mind's I?) in which someone asks a question:
"What colour is this book?"
"I believe it's red."
"Wrong"
There follows a wonderfully convoluted dialogue. The point seems to be that someone who believes the book is red would say "It's red," rather than "I believe it's red."
This seems like a dead thread, but I'll chance it anyway.
Eliezer, there's something off about your calculation of the expected score:
The expected score is something that should go up the more certain I am of something, right?
But in fact the expected score is highest when I'm most uncertain about something: If I believe with equal probability that snow might be white and non-white, the expected score is actually 0.5(-1) + 0.5(-1) = -1. This is the highest possible expected score.
In any other case, the expected score will be lower, as you calculate for the 70/30 case.
It seems like what you should be trying to do is minimize your expected score but maximize your actual score. That seems weird.
Consider the archetypal postmodernist attempt to be clever:
I believe the correct term here is "straw postmodernist", unless of course you're actually describing a real (and preferably citable) example.
Truth is one of the two possible values of the connection between an assertion and reality. "Knowing" that X (a statement) is true is the realization (which may be right or wrong; the agent has no direct access to that information) that the connection exists, reached by gathering enough information to do so. "Truth" here is the value (of the connection) such that the assertion completely describes the objective reality, so far as is relevant to us.
"Believing" is making another artificial connection between the assertion (not the reality) an...
To be charitable to the postmodernists, they are overextending a perfectly legitimate defense against the Mind Projection Fallacy. If you take a joke, and tell it to two different audiences, in many cases one audience laughs at the joke and the other doesn't. Postmodernists correctly say that different audiences have different truths for "this joke is funny", and this state of affairs is perfectly normal. Unfortunately, they proceed to run away with this, and extend it to statements where the "audience" would be reality. Or very ...
I suggest that a primary cause of confusion about the distinction between "belief", "truth", and "reality" is qualitative thinking about beliefs.
Consider the archetypal postmodernist attempt to be clever:
"The Sun goes around the Earth" is true for Hunga Huntergatherer, but "The Earth goes around the Sun" is true for Amara Astronomer!
No, different societies have different beliefs. Belief is of a different type than truth; it's like comparing apples and probabilities.
No, these sentences mean quite different things, which is how I can conceive of the possibility that my beliefs are false.
And that's what I mean by putting my finger on qualitative reasoning as the source of the problem. The dichotomy between belief and disbelief, being binary, is confusingly similar to the dichotomy between truth and untruth.
So let's use quantitative reasoning instead. Suppose that I assign a 70% probability to the proposition that snow is white. It follows that I think there's around a 70% chance that the sentence "snow is white" will turn out to be true. If the sentence "snow is white" is true, is my 70% probability assignment to the proposition, also "true"? Well, it's more true than it would have been if I'd assigned 60% probability, but not so true as if I'd assigned 80% probability.
When talking about the correspondence between a probability assignment and reality, a better word than "truth" would be "accuracy". "Accuracy" sounds more quantitative, like an archer shooting an arrow: how close did your probability assignment strike to the center of the target?
To make a long story short, it turns out that there's a very natural way of scoring the accuracy of a probability assignment, as compared to reality: just take the logarithm of the probability assigned to the real state of affairs.
So if snow is white, my belief "70%: 'snow is white'" will score -0.51 bits: log2(0.7) ≈ -0.51.
But what if snow is not white, as I have conceded a 30% probability is the case? If "snow is white" is false, my belief "30% probability: 'snow is not white'" will score -1.73 bits. Note that -1.73 < -0.51, so I have done worse.
About how accurate do I think my own beliefs are? Well, my expectation over the score is 70% * -0.51 + 30% * -1.73 = -0.88 bits. If snow is white, then my beliefs will be more accurate than I expected; and if snow is not white, my beliefs will be less accurate than I expected; but in neither case will my belief be exactly as accurate as I expected on average.
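A quick way to reproduce these numbers (a minimal sketch in Python, not part of the original post):

```python
from math import log2

p_white = 0.7  # probability I assign to "snow is white"

score_if_white = log2(p_white)            # about -0.51 bits
score_if_not_white = log2(1 - p_white)    # about -1.74 bits (rounded to -1.73 above)
expected_score = (p_white * score_if_white
                  + (1 - p_white) * score_if_not_white)   # about -0.88 bits

print(f"score if snow is white:     {score_if_white:.2f} bits")
print(f"score if snow is not white: {score_if_not_white:.2f} bits")
print(f"expected score:             {expected_score:.2f} bits")
```

Re-running with a different p_white shows how both possible realized scores and the expected score shift as the probability assignment becomes more or less confident.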
All this should not be confused with the statement "I assign 70% credence that 'snow is white'." I may well believe that proposition with probability ~1—be quite certain that this is in fact my belief. If so I'll expect my meta-belief "~1: 'I assign 70% credence that "snow is white"'" to score ~0 bits of accuracy, which is as good as it gets.
Just because I am uncertain about snow, does not mean I am uncertain about my quoted probabilistic beliefs. Snow is out there, my beliefs are inside me. I may be a great deal less uncertain about how uncertain I am about snow, than I am uncertain about snow. (Though beliefs about beliefs are not always accurate.)
Contrast this probabilistic situation to the qualitative reasoning where I just believe that snow is white, and believe that I believe that snow is white, and believe "'snow is white' is true", and believe "my belief '"snow is white" is true' is correct", etc. Since all the quantities involved are 1, it's easy to mix them up.
Yet the nice distinctions of quantitative reasoning will be short-circuited if you start thinking "'"snow is white" with 70% probability' is true", which is a type error. It is a true fact about you, that you believe "70% probability: 'snow is white'"; but that does not mean the probability assignment itself can possibly be "true". The belief scores either -0.51 bits or -1.73 bits of accuracy, depending on the actual state of reality.
The cognoscenti will recognize "'"snow is white" with 70% probability' is true" as the mistake of thinking that probabilities are inherent properties of things.
From the inside, our beliefs about the world look like the world, and our beliefs about our beliefs look like beliefs. When you see the world, you are experiencing a belief from the inside. When you notice yourself believing something, you are experiencing a belief about belief from the inside. So if your internal representations of belief, and belief about belief, are dissimilar, then you are less likely to mix them up and commit the Mind Projection Fallacy—I hope.
When you think in probabilities, your beliefs, and your beliefs about your beliefs, will hopefully not be represented similarly enough that you mix up belief and accuracy, or mix up accuracy and reality. When you think in probabilities about the world, your beliefs will be represented with probabilities ∈ (0, 1). Unlike the truth-values of propositions, which are in {true, false}. As for the accuracy of your probabilistic belief, you can represent that in the range (-∞, 0). Your probabilities about your beliefs will typically be extreme. And things themselves—why, they're just red, or blue, or weighing 20 pounds, or whatever.
Thus we will be less likely, perhaps, to mix up the map with the territory.
This type distinction may also help us remember that uncertainty is a state of mind. A coin is not inherently 50% uncertain of which way it will land. The coin is not a belief processor, and does not have partial information about itself. In qualitative reasoning you can create a belief that corresponds very straightforwardly to the coin, like "The coin will land heads". This belief will be true or false depending on the coin, and there will be a transparent implication from the truth or falsity of the belief, to the facing side of the coin.
But even under qualitative reasoning, to say that the coin itself is "true" or "false" would be a severe type error. The coin is not a belief, it is a coin. The territory is not the map.
If a coin cannot be true or false, how much less can it assign a 50% probability to itself?