# What Are Probabilities, Anyway?

In Probability Space & Aumann Agreement, I wrote that probabilities can be thought of as weights that we assign to possible world-histories. But what are these weights supposed to mean? Here I’ll give a few interpretations that I've considered and held at one point or another, and their problems. (Note that in the previous post, I implicitly used the first interpretation in the following list, since that seems to be the mainstream view.)

- Only one possible world is real, and probabilities represent beliefs about which one is real.
- Which world gets to be real seems arbitrary.
- Most possible worlds are lifeless, so we’d have to be really lucky to be alive.
- We have no information about the process that determines which world gets to be real, so how can we decide what the probability mass function p should be?

- All possible worlds are real, and probabilities represent beliefs about which one I’m in.
- Before I’ve observed anything, there seems to be no reason to believe that I’m more likely to be in one world than another, but we can’t let all their weights be equal.

- Not all possible worlds are equally real, and probabilities represent “how real” each world is. (This is also sometimes called the “measure” or “reality fluid” view.)
- Which worlds get to be “more real” seems arbitrary.
- Before we observe anything, we don't have any information about the process that determines the amount of “reality fluid” in each world, so how can we decide what the probability mass function p should be?

- All possible worlds are real, and probabilities represent how much I care about each world. (To make sense of this, recall that these probabilities are ultimately multiplied with utilities to form expected utilities in standard decision theories.)
- Which worlds I care more or less about seems arbitrary. But perhaps this is less of a problem because I’m “allowed” to have arbitrary values.
- Or, from another perspective, this drops another hard problem on top of the pile of problems called “values”, where it may never be solved.
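To make the multiplication concrete, here is a minimal sketch (with made-up weights and utilities) of how the weights enter a standard expected-utility calculation, whichever interpretation they are given:

```python
# Toy expected-utility calculation over weighted possible worlds.
# The weights and utilities below are purely illustrative numbers.
# Under interpretation 1 a weight reads as "probability this world is real";
# under interpretation 4 it reads as "how much I care about this world".
weights = {"sunny": 0.7, "rainy": 0.3}

# Utility of each action in each world (hypothetical values).
utility = {
    ("picnic", "sunny"): 10.0, ("picnic", "rainy"): -5.0,
    ("stay_home", "sunny"): 2.0, ("stay_home", "rainy"): 2.0,
}

def expected_utility(action):
    # The weights multiply the utilities, as in standard decision theory.
    return sum(w * utility[(action, world)] for world, w in weights.items())

best_action = max(["picnic", "stay_home"], key=expected_utility)
```

The decision procedure is identical under every interpretation; only the gloss on `weights` changes, which is part of why the choice of interpretation has no direct behavioral consequences in cases like this.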

As you can see, I think the main problem with all of these interpretations is arbitrariness. The unconditioned probability mass function is supposed to represent my beliefs before I have observed anything in the world, so it must represent a state of total ignorance. But there seems to be no way to specify such a function without introducing *some* information, which anyone could infer by looking at the function.

For example, suppose we use a universal distribution, where we believe that the world-history is the output of a universal Turing machine given a uniformly random input tape. But then the distribution contains the information of which UTM we used. Where did that information come from?
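The machine-dependence can be seen in a toy model. This is not a real UTM — just two hypothetical lookup tables standing in for machines, with prefix-free "programs" weighted by 2^−length, as if read off a uniformly random input tape:

```python
from fractions import Fraction

# Two hypothetical "machines", each mapping prefix-free binary programs to
# outputs. A program of length L gets prior weight 2^-L, as if the input
# tape were filled with uniformly random bits.
machine_A = {"0": "x", "10": "y", "11": "x"}
machine_B = {"0": "y", "10": "y", "11": "x"}

def induced_prior(machine, output):
    # Total weight of all programs that produce the given output.
    return sum(Fraction(1, 2 ** len(prog))
               for prog, out in machine.items() if out == output)

# The prior over outputs depends on which machine we picked:
# machine A gives "x" weight 1/2 + 1/4 = 3/4, machine B gives it only 1/4.
```

The information "which UTM" shows up as the difference between the two induced priors, even though both machines were fed the same "uniformly random" tape.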

One could argue that we do have some information even before we observe anything, because we're products of evolution, which would have built some useful information into our genes. But to the extent that we can trust the prior specified by our genes, it must be that evolution approximates a Bayesian updating process, and our prior distribution approximates the posterior distribution of such a process. The "prior of evolution" still has to represent a state of total ignorance.

These considerations lead me to lean toward the last interpretation, which is the most tolerant of arbitrariness. This interpretation also fits well with the idea that expected utility maximization with Bayesian updating is just an approximation of UDT that works in most situations. I and others have already motivated UDT by considering situations where Bayesian updating doesn't work, but it seems to me that even if we set those aside, there is still reason to consider a UDT-like interpretation of probability where the weights on possible worlds represent how much we care about those worlds.

## Comments (78)

In order to answer questions like "What are X, anyway?", we can (phenomenologically) turn the question into something like "What can we do with X?" or "What consequences does X have?"

For example, consider the question "What are ordered pairs, anyway?". Sometimes you see "definitions" of ordered pairs in terms of set theory. Wikipedia says that the standard definition of ordered pairs is:

(a, b) := {{a}, {a, b}}

Many mathematicians find this "definition" unsatisfactory, and view it not as a definition but as an encoding or translation. The category-theoretic notion of a product might be more satisfactory. It pins down the properties that the ordered pair already had before the "definition" was proposed, and in what sense ANY construction with those properties could be used. Lambda calculus has a couple of constructions that look superficially quite different from the set-theoretic ones, but satisfy the category-theoretic requirements.

I guess this is a response at the meta level, recommending this sort of "phenomenological" lens as the way to resolve these sort of questions.

... as does the set-theoretic one.

ETA: Now that I read more closely, you didn't imply otherwise.

Lumping probabilities in with utilities sounds pretty close to Vladimir Nesov's Representing Preference by Probability Measures.

This word "possible" carries a LOT of hidden baggage. If math tells us anything, it's that LOTS of things SEEM possible to us, because we aren't logically omniscient, but aren't really possible.

While we're at it, how about we drop "worlds" from the mix. I don't think it adds anything. If we replace it with "information flows" do things work better?

Do you mean something precise by "information flows"?

Possible world is a standard term in several related fields, such as philosophy and linguistics. Are you arguing against my particular usage, or all usage of the term in general?

[Comment deleted]

In this view, rationality doesn't play a role in choosing the initial weights on the possible universes. That job would be handed over to moral philosophy, just like choosing the right utility function already is.

No, thinking it doesn't make it so. Even in this view, the right beliefs and decisions aren't arbitrary, because they depend in a lawful way on your preferences. You still want to be rational in order to make the best decisions to satisfy your preferences.

[Comment deleted]

Presumably you don't do that because that's not your actual prior - you don't just care about one particular possible world where things happen to turn out exactly the way you want. You also care about other possible worlds and want to make decisions in ways that make those worlds better.

It would be for the same reason that you don't change your utility function to give everything an infinite utility.

[Comment deleted]

It sounds like you're assuming that people use a wishful-thinking prior by default, and have to be argued into a complexity-based prior. This seems implausible to me.

I think the phenomenon of wishful thinking doesn't come from one's prior, but from evolution being too stupid to design a rational decision process. That is, a part of my brain rewards me for increasing the anticipation of positive future experiences, even if that increase is caused by faulty reasoning instead of good decisions. This causes me to engage in wishful thinking (i.e., miscalculating the implications of my prior) in order to increase my reward.

I dispute this. Sure, some of the implications of the complexity prior are counterintuitive, but it would be surprising if none of them were. I mean, some theorems of number theory are counterintuitive, but that doesn't mean integers are aliens to the human mind.

Suppose someone gave you a water-tight argument that all possible worlds are in fact real, and you have to make decisions based on which worlds you care more about. Would you really adopt the "wishful-thinking" prior and start putting all your money into lottery tickets or something similar, or would your behavior be more or less unaffected? If it's the latter, don't you already care more about worlds that are simple?

Perhaps this is just one of the ways an algorithm that cares about each world in proportion to its inverse complexity could feel from the inside?

[Comment deleted]

You don't believe in affirmations? The self-help books about the power of positive thinking don't work for you? What do you make of the following quote?

"Personal optimism correlates strongly with self-esteem, with psychological well-being and with physical and mental health. Optimism has been shown to be correlated with better immune systems in healthy people who have been subjected to stress."

[Comment deleted]

Probability is Subjectively Objective.

Isn't that conflating instrumental rationality and epistemic rationality?

Epistemic rationality can be seen as a kind of instrumental rationality. See Scoring rule, Epistemic vs. Instrumental Rationality: Approximations.

You seem to be confusing plausibility with possibility. The existence of God seems plausible to many people, but whether or not the existence of God is truly possible is not clear. Reasonable people believe that God is impossible, others that God is possible, and others that God is necessary (i.e. God's nonexistence is impossible).

It wouldn't quite throw all of our shit into the fan. If you know you're living in a QM many-worlds universe, you still have to optimize the Born probabilities, for example.

I think we can rule out the popular religions as being impossible worlds, but simulated worlds are possible worlds, and in some subset of them, you can know this.

In the ones where you can differentiate to some degree, there are certainly actions one could take to help his 'simulated' selves at the cost of the 'non-simulated' selves, if you cared.

I guess the question is whether it's even consistent to care about being "simulated" or not, and where you draw the line (what if you have some rate of information in from the outside and some influence over it? What if it's the exact same hardware, just plugged in like in 'The Matrix'?)

My guess is that it is gonna turn out to not make any sense to care about them differently, and that there's some natural weighting which we haven't yet figured out. Maybe weight each copy by the redundancy in the processor (e.g. if each transistor is X atoms big, then that can be thought of as X copies living in the same house) or by the power they have to influence the world, or something. Both of those have problems, but I can't think of anything better.

[Comment deleted]

True...

The paper does a much more thorough job than I, but the summary is that the only consistent way to carve is into Born probabilities, so you have to weight branches accordingly. I think this has to do with the amplitude squared being conserved, so that the Ebborian equivalent would be their thickness, but I admit some confusion here.

This means there's at least some sense of probability which you don't get to 'wish away', though it's still possible to only care about worlds where "X" is true (though in general you actually *do* care about the other worlds).

[Comment deleted]

It means that if you are in one, probability does not come down to only preferences. I suppose that since you can never be absolutely sure you're in one, you still have to find out your weightings between worlds, where there might be nothing but preferences.

The other point is that I seriously doubt there's anything built into you that makes you not care about possible worlds where QM is true, so even if it does come down to 'mere preferences', you can still make mistakes.

The existence of an objective weighting scheme within one set of possible worlds gives me some hope of an objective weighting between all possible worlds, but not all that much, and it's not clear to me what that would be. Maybe the set of all possible worlds is countable, and each world is weighted equally?
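The "amplitude squared being conserved" point mentioned above can at least be illustrated numerically. This is a toy two-branch state with made-up amplitudes, and the rotation stands in for any unitary evolution:

```python
import math

# Hypothetical two-branch state: Born weights are squared magnitudes.
amps = [complex(0.6, 0.0), complex(0.0, 0.8)]
weights = [abs(a) ** 2 for a in amps]          # 0.36 and 0.64, totalling 1

# A unitary map preserves the total weight; here, a rotation by theta.
theta = 0.7
c, s = math.cos(theta), math.sin(theta)
new_amps = [c * amps[0] - s * amps[1],
            s * amps[0] + c * amps[1]]
new_weights = [abs(a) ** 2 for a in new_amps]  # different split, same total
```

Note that the sum of |amplitude|² survives the rotation while, say, the sum of plain |amplitude| does not — a hint at why the squared magnitudes are the consistent branch weights.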

[Comment deleted]

Yeah, but the confusion gets better as the worlds become more similar. How to weight between QM worlds and non-QM worlds is something I haven't even seen an attempt to explain, but how to weight within QM worlds has been explained, and how to weight in the Sleeping Beauty problem is quite straightforward.

I meant countable, but now that you mention it I think I should have said finite- I'll have to think about this some more.

We can't? Why not? Estimating the probability of two heads on two coinflips as 25% is giving existence in worlds with heads-heads, heads-tails, tails-heads, and tails-tails equal weight. The same is true of a more complicated proposition like "There is a low probability that Bigfoot exists" - giving every possible arrangement of objects/atoms/information equal weight, and then ruling out the ones that don't result in the evidence we've observed, few of these worlds contain Bigfoot.

Without an arbitrary upper bound on complexity, there are infinitely many possible arrangements.

Theoretically, it's not infinite because of the granularity of time/space, speed of light, and so on.

Practically, we can get around this because we only *care* about a tiny fraction of the possible variation in arrangements of the universe. In a coin flip, we only care about whether a coin is heads-up or tails-up, not the energy state of every subatomic particle in the coin.

This matters in the case of a biased coin - let's say biased towards heads 66%. This, I think, is what Wei meant when he said we couldn't just give equal weights to all possible universes - the ones where the coin lands on heads and the ones where it lands on tails. But I think "universes where the coin lands on heads" and "universes where the coin lands on tails" are unnatural categories.

Consider how the probability of winning the lottery isn't .5 because we choose with equal weight between the two alternatives "I win" and "I don't win". Those are unnatural categories, and instead we need to choose with equal weight between "I win", "John Q. Smith of Little Rock Arkansas wins", "Mary Brown of San Antonio, Texas, wins" and so on to millions of other people. The unnatural category "I don't win" contains millions of more natural categories.

So on the biased coin flip, the categories "the coin lands heads" and "the coin lands tails" contain a bunch of categories of lower-level events about collisions of air molecules and coin molecules and amounts of force one can use to flip a coin, and two-thirds of those events are in the "coin lands heads" category. But among *those* lower-level events, you choose with equal weight.

True, beneath these lower-level categories about collisions of air molecules, there are probably even lower things like vibrations of superstrings or bits in the world-simulation or whatever the lowest level of reality is, but as long as these behave mathematically I don't see why they prevent us from basing a theory of probability on the effects of low-level conditions.
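This counting argument can be sketched directly. The "micro-conditions" and the deterministic rule below are invented for illustration: give every low-level outcome equal weight, let a deterministic rule sort them into the macro-categories, and a 2/3 bias falls out of counting.

```python
from fractions import Fraction
from itertools import product

# 36 equally weighted micro-outcomes, standing in for configurations of
# air molecules, flip force, etc. (a made-up discretization).
micro_outcomes = list(product(range(6), repeat=2))

def lands_heads(m):
    # Hypothetical deterministic "physics": no randomness anywhere, just a
    # rule mapping each micro-outcome to a macro-category.
    return (m[0] + m[1]) % 3 != 0

heads_count = sum(lands_heads(m) for m in micro_outcomes)
p_heads = Fraction(heads_count, len(micro_outcomes))   # 24/36 = 2/3
```

The "bias" of the coin lives entirely in how many equally weighted micro-outcomes each macro-category contains, not in unequal weights on the micro-outcomes themselves.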

These initial weights are supposed to be assigned before taking into account anything you have observed. But even now (under the second interpretation in my list) you can't be sure that the world you're in is finite. So, suppose there is one possible world for each integer in the set of all integers, or one possible world for each set in the class of all sets. How could one assign equal weight to all possible worlds, and have the weights add up to 1?

I don't think that gets around the problem, because there is an infinite number of possible worlds where the energy state of nearly every subatomic particle encodes some valuable information.

By the same method we do calculus. Instead of a sum over the possible worlds, we integrate over the possible worlds (an infinite sum of infinitesimally small values). For an explicit construction of how this is done, any basic calculus book is enough.

My understanding is that it's possible to have a uniform distribution over a finite set, or an interval of the reals, but not over all integers, or all reals, which is why I said in the sentence before the one you quoted, "suppose there is one possible world for each integer in the set of all integers."
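The obstruction is elementary enough to exhibit numerically: assigning every integer the same weight w makes the partial sums either stay at 0 (if w = 0) or grow without bound (if w > 0), so they can never converge to 1, whereas a non-uniform assignment has no such problem.

```python
# Constant weight per integer: partial sums over the first n integers
# either stay at zero or grow without bound -- never converging to 1.
w = 1e-6
partial_sums = [w * n for n in (10, 10**6, 10**8)]   # 1e-5, 1.0, 100.0

# A non-uniform assignment works fine: weight 2^-(n+1) on the n-th integer.
geometric_partial = sum(2.0 ** -(n + 1) for n in range(50))   # approaches 1
```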

There is a 1:1 mapping between "the set of reals in [0,1]" and "the set of all reals". So take your uniform distribution on [0,1] and put it through such a mapping... and the result is non-uniform. Which pretty much kills the idea of "uniform <=> each element has the same probability as each other".
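A numerical check of this, using the bijection x ↦ tan(π(x − 1/2)) from (0, 1) onto the reals: the uniform mass on [1/4, 3/4] lands entirely in [−1, 1], so the image distribution piles up near zero rather than spreading "equally" over the line.

```python
import math
import random

random.seed(0)  # reproducible illustration

# Push 100,000 uniform draws on (0, 1) through a bijection onto all of R.
samples = [math.tan(math.pi * (u - 0.5))
           for u in (random.random() for _ in range(100_000))]

# Since tan(pi/4) = 1, exactly half the uniform mass maps into [-1, 1],
# so the pushforward is heavily concentrated near zero.
frac_in_unit = sum(abs(x) <= 1.0 for x in samples) / len(samples)
```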

There is no such thing as a continuous distribution on a *set* alone; it has to be on a metric space. Even if you make a metric space out of the set of all possible universes, that doesn't give you a universal prior, because you have to choose what metric it should be uniform with respect to.

(Can you have a uniform "continuous" distribution without a continuum? The rationals in [0,1]?)

As there is a 1:1 mapping between the set of all reals and the unit interval, we can just use the unit interval and define a uniform distribution there. Whatever distribution you choose, we can map it into the unit interval, as Pengvado said.

In the case of the set of all integers, I'm not completely certain. But I'd look at the set of computable reals, which we can use for much of mathematics. Normal calculus can be done with just computable reals (the set of all numbers for which there is an algorithm that provides any given decimal digit in finite time). So basically we have a mapping from the computable reals on the unit interval into the set of all integers.

Another question: is the uniform distribution the entropy-maximising distribution when we consider the set of all integers?

From a physical standpoint, why are you interested in countably infinite probability distributions? If we assume discrete physical laws, we'd have a finite number of possible worlds; on the other hand, if we assume continuous laws, we'd have an uncountably infinite number, which can be mapped into the unit interval.

Off the top of my head, I can imagine a set of discrete worlds of all sizes, which would be countably infinite. What other kinds of worlds could there be where this would be relevant?

(Nitpick: Spacetime isn't quantized AFAIK in standard physics, and then there are still continuous quantum amplitudes.)

I thought Wei was talking about single worlds (whatever those may be), not sets of worlds. Applied to sets of worlds, this seems correct.

Yvain said the finiteness well, but I think the "infinitely many possible arrangements" needs a little elaboration.

In any continuous probability distribution we have infinitely many (in fact uncountably many) possibilities, and this makes the probability of any single outcome 0. That is why, for continuous distributions, we talk about the probability of the outcome lying in a certain interval (a collection of infinitely many arrangements).

So instead of counting the individual arrangements, we calculate integrals over some set of arrangements. Infinitely many arrangements are no hindrance to applying probability theory. Actually, if we can assume a continuous distribution, it makes some things much easier.
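For instance, with the uniform density on [0, 1], the probability of an interval is just its length, and the mass assigned to a neighborhood of any single point shrinks to zero with the neighborhood (a minimal sketch):

```python
# P(lo <= X <= hi) for X uniform on [0, 1]: integrate the density 1 over
# the interval, i.e. just take its length (clipped to [0, 1]).
def prob_interval(lo, hi):
    return max(0.0, min(hi, 1.0) - max(lo, 0.0))

# Shrinking neighborhoods of the single point 0.5: the mass vanishes,
# recovering "any single outcome has probability 0" in the limit.
point_masses = [prob_interval(0.5 - e, 0.5 + e) for e in (0.1, 0.01, 0.001)]
```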

Good point. Does this work over all infinite sets, though? Integers? Rationals?

It does work. Actually, if we're using integers (there are as many integers as rationals, so we don't need to care about the latter set), we get the good old discrete probability distribution, where we have either a finite number of possibilities or at most a countable infinity of possibilities, e.g. the set of all integers.

The real numbers are a strictly larger set than the integers, so in a continuous distribution we have, in a sense, more possibilities than in a countably infinite discrete distribution.

You're getting yourself into trouble because you assume that puzzling questions must have deep answers, when usually the question itself is flawed or misleading. In this case there just doesn't seem to be a need for any explanation of the kind you offer, nor would one be of any use anyway.

These 'explanations' you offer of probability aren't really explaining anything. Certainly we *do* successfully use probability to reason about systems that behave in a deterministic classical fashion (rolling dice probably counts). No matter what sort of probability you believe in, you have to explain that application. So introducing 'objective' probability merely adds things we need to explain (possible worlds, etc.).

The correct approach is to step back and ask what it is that needs explaining. Well, probability is really nothing but a fancy way of counting up outcomes. So once we justify describing the world in a probabilistic fashion (even when it's deterministic in some sense), the application of mathematical inference to reformulate that description in more useful ways is untroubling. In other words, if it's reasonable to model rolling two six-sided dice as independent uniformly random variables on 1...6, counting up the combinations and saying there is a 1/6 chance of getting a 7 doesn't raise any new difficulties.

So the question just comes down to: is it reasonable of us to model the world using random variables? I mean, one might worry that some worlds are deeply 'tricky', in that almost always when two objects appeared to behave like independent random variables, in reality there was some hidden correlation that would eventually pop out to bite you in the ass, and then once you'd taken that correlation into account another one would bite you, and so on and so on.

But if you think about it for a while, this isn't really so much a question about the nature of the world as it is a purely mathematical question: if we keep factoring out by our best predictions, will the remaining unaccounted-for variation in outcomes appear to be random, i.e., make modeling it as random variables an accurate way to make predictions? Well, that's actually kinda complicated. I have a theorem (well, a tiny tweak of someone else's theorem plus interpretation) which I believe says that yes, indeed, it must work this way. I won't go into it here, but let me just say one thing to convince you of its plausibility.

Basically the argument is that things only fail to look random because we notice a more accurate way of predicting their behavior. The only evidence for a sequence of observations failing to be random according to the supposed distribution would be a pattern in the observations not captured by R, and so would in turn yield a more accurate distribution. So basically the claim is that we can always simply divide up any observable into the part we can predict (i.e. a distribution of outcomes) and the part we can't. Once you mod out by the part you can predict, by definition anything left is totally unpredictable to you (e.g. computable machines) and thus can't detectably fail to look random according to its distribution, since that would be a better prediction.

This isn't rigorous (it's complicated), but the point is that *randomness is nothing but our inability to make any better predictions*.

Why should probabilities *mean* anything? How would you behave differently if you decided (or learned) a given interpretation was correct?

As long as there's no difference, and your actions add up to normality under any of the interpretations, then I don't see why an interpretation is needed at all.

The different interpretations suggest different approaches to answer the question of "what is the right prior?" and also different approaches to decision theory. I mentioned that the "caring" interpretation fits well with UDT.

Can't you choose your (arational) preferences to get any behaviour (decision theory) no matter what interpretation you choose?

Preferences may be arational, but they're not completely arbitrary. In moral philosophy there are still arguments for what one's preferences should be, even if they are generally much weaker than the arguments in rationality. Different interpretations influence what kinds of arguments apply or make sense to you, and therefore influence your preferences.

How can there be arguments about what preferences *should* be? Aren't they, well, a sort of unmoved mover, a primal cause? (To use some erstwhile philosophical terms :-)

I can understand meta-arguments that say your preferences should be consistent in some sense, or that argue about subgoal preferences given some supergoals. But even under strict constraints of that kind, you have a lot of latitude, from humans to paperclip maximizers on out. Within that range, does interpreting probabilities differently really give you extra power you can't get by fine-tuning your prefs?

Edit: the reason I'd prefer editing prefs is that talking about the Meaning of Probabilities sets off my materialism sensors. It leads to things like multiple-world theories because they're easy to think about as an interpretation of QM, regardless of whether they actually exist. Then they can actually negatively affect our prefs or behavior.

Well, I don't know what many of my preferences should be. How can I find out except by looking for and listening to arguments?

No, not for humans anyway.

That implies there's some objectively definable standard for preferences which you'll be able to recognize once you see it. Also, it begs the question of what in your current preferences says "I have to go out and get some more/different preferences!" From a goal-driven intelligence's POV, asking others to modify your prefs in unspecified ways is pretty much *the* anti-rational act.

I think we need to distinguish between what a rational agent should do, and what a non-rational human should do to become more rational. Nesov's reply to you also concerns the former, I think, but I'm more interested in the latter here.

Unlike a rational agent, we don't have well-defined preferences, and the preferences that we think we have *can* be changed by arguments. What to do about this situation? Should we stop thinking up or listening to arguments, and just fill in the fuzzy parts of our preferences with randomness or indifference, in order to emulate a rational agent in the most direct manner possible? That doesn't make much sense to me.

I'm not sure what we should do exactly, but whatever it is, it seems like arguments must make up a large part of it.

Please see my reply to Nesov above, too.

I think we shouldn't try to emulate rational agents at all, in the sense that we shouldn't pretend to have rationality-style preferences and supergoals; as a matter of fact we don't have them.

Up to here we seem to agree, we just use different terminology. I just don't want to conflate rational preferences with human preferences, because the two systems behave very differently.

Just as an example, in signalling theories of behaviour, you may consciously believe that your preferences are very different from what your behaviour is actually optimizing for when no one is looking. A rational agent wouldn't normally have separate conscious/unconscious minds unless only the conscious part was subject to outside inspection. In this example, it makes sense to update signalling-preferences sometimes, because they're not your actual acting-preferences.

But if you consciously intend to act out your (conscious) preferences, and *also* intend to keep changing them in not-always-foreseeable ways, then that isn't rationality, and when there could be confusion due to context (such as on LW most of the time) I'd prefer not to use the term "preferences" about humans, or to make clear what is meant.

That arguments modify preference means that you are (denotationally) arriving at different preferences depending on arguments. This means that, from the perspective of a specific given preference (or "true" neutral preference, not biased by specific arguments), you fail to obtain an optimal rational decision algorithm, and thus to achieve a high-preference strategy. But at the same time, "absence of action" is also an action, so not exploring the arguments may as well be a worse choice, since you won't be moving forward towards a clearer understanding of your own preference, even if the preference that you are going to understand will be somewhat biased compared to the unknown original one.

Thus, there is a tradeoff: you end up with *some* preference close to the original one, which allows you to make more rational decisions, which is good for the original preference.

FWIW, my preferences have not been changed by arguments in the last 20 years. So I don't think your "we" includes me.

As an example, consider arguments in the form of proofs/disproofs of the statements that you are interested in. Information doesn't necessarily "change" or "determine arbitrarily" the things you take from it; it may help you to *compute* an object in which you are already interested, without changing that object, and at the same time be essential in moving forward. If you have an algorithm, it doesn't mean that you know what this algorithm will give you in the end, what the algorithm "means". Resist the illusion of transparency.

I don't understand what you're saying as applied to this argument. That Wei Dai has an algorithm for modifying his preferences and he doesn't know what the end output of that algorithm will be?

There will always be something about preference that you don't know, and it's not a question of modifying preference; it's a question of figuring out what the fixed, unmodifiable preference implies. Modifying preference is exactly the wrong way of going about this.

If we figure out the conceptual issues of FAI, we'd basically have the algorithm that *is* our preferences, but not in its infinite and unknowable normal "execution trace" denotational form.

Re: "How can there be arguments about what preferences should be?"

The idea that some preferences are "better" than other ones is known as "moral realism".

Wikipedia says moral realists (in general) claim that moral propositions can be true or false as objective facts, *but* their truth cannot be observed or verified. This doesn't make any sense. Sounds like religion.

Are you looking at http://en.wikipedia.org/wiki/Moral_realism ...?

Care to quote an offending section about moral truths not being observable or verifiable?

Under the section "Criticisms":

Regarding the emotivist criticism, it begs a lot of questions. Surely not *all* negative emotional reactions signal wrong moral actions. Besides, emotivism isn't aligned with moral realism.

I see - thanks.

That some *criticisms* of moral realism appear to lack coherence does not seem to me to be a point that counts against the idea.

I expect moral realists would deny that morality is any more nonmaterial than any other kind of information - and would also deny that it does not appear to be accessible to the scientific method.

It might also sound like science - don't scientists generally claim that propositions about the world can be true or false, but cannot be directly observed or verified?

Joshua Greene's thesis "The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it" might be a decent introduction to moral realism / irrealism. Overall it is an argument for irrealism.

In science, a proposition about the world can generally be proven or disproven with arbitrary probability, so you can become as sure about it as you like if you invest enough resources.

In moral realism, propositions are purely logical constructs, and can be proven true or false just like a mathematical proposition. Their truth is one with the truth of the axioms used, and the axioms can't be proven or disproven with any degree of certainty; they are simply accepted or not accepted. The morality is internally consistent, but you can't derive it from the real world, and you can't derive any fact about the real world from the morality. That sounds just like theology to me. (The difference between this and ordinary math or logic is that mathematical constructs aren't supposed to lead to *should* or *ought* statements about behavior.)

I will read Greene's thesis, but as far as I can tell it argues against moral realism (and does it well), so it won't help me understand why anyone would believe in it.

Hmmm - caring as a part of reality? Why not just flip things up and consider that emotion is also part of reality. Random by any other name. Try to exclude it and you'll find you can't, no matter how infinitely many worlds you suppose. There's also a calculus to irrationality . . .

*4 points [-] The "caring" interpretation doesn't say that caring is part of reality (except insofar as minds are implemented in reality). Rather, it says that probability *isn't* part of reality; it's part of decision theory (again except insofar as minds are implemented in reality).

cool! but can you really posit artificial intelligence (decision theory has to get enacted somewhere) and not allow mind as part of reality?

This view seems appealing to me, because 1) deciding that all possible worlds are real seems to follow from the Copernican principle, and 2) if all worlds are real from the perspective of their observers, then, as you said, it seems arbitrary to say which worlds are more real.

But on this view, what do I do with the observed frequencies of past events? Whenever I've flipped a coin, heads has come up about half the time. If I accept option 4, am I giving up on the idea that these regularities mean anything?

*0 points [-] What does *real* even mean, by the way? Interpretation 1 with *real* taken to mean 'of or pertaining to the world I'm in' (as I would) is equivalent to Interpretation 2 with *real* taken to mean 'possible' (as Tegmark would, IIUC), to Interpretation 3 with *real* taken to mean 'likely', and to Interpretation 4 with *real* taken to mean 'important to me'.

It depends. We use the term "probability" to cover a variety of different things, which can be handled by similar mathematics but are not the same.

For example, suppose that I'm playing blackjack. Given a certain disposition of cards, I can calculate a probability that asking for the next card will bust me. In this case the state of the world is fixed, and probability measures my ignorance. The fact that I don't know which card would be dealt to me doesn't change the fact that there's a specific card on the top of the deck waiting to be dealt. If I knew more about the situation (perhaps by counting cards) I might have a better idea of which cards could possibly be on top of the deck, but the same card would still be on top of the deck. In this situation, case 1 applies from the choices above.
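The ignorance-measuring calculation above can be sketched directly. This is a toy model assuming a single 52-card deck with aces counted low for simplicity; `bust_probability` is a name I made up for illustration:

```python
from collections import Counter

def bust_probability(hand_total, seen_cards):
    """Probability that the next card from a single 52-card deck busts the hand.

    hand_total: current hard total; seen_cards: card values already visible.
    Aces are counted as 1 throughout (a simplifying assumption).
    """
    deck = Counter({v: 4 for v in range(2, 10)})  # ranks 2-9, four suits each
    deck[10] = 16                                 # ten, jack, queen, king
    deck[1] = 4                                   # aces, counted low
    for c in seen_cards:
        deck[c] -= 1                              # remove cards we've seen
    remaining = sum(deck.values())
    busting = sum(n for v, n in deck.items() if hand_total + v > 21)
    return busting / remaining

# Holding a hard 16 (say a 10 and a 6), any card worth 6 or more busts us.
p = bust_probability(16, [10, 6])
```

The deck's top card is fixed the whole time; counting more cards (passing a longer `seen_cards` list) only changes the probability, i.e. our ignorance, not the card.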

Alternately, consider photons going through a double slit in the classic quantum physics experiment. If the slits are of equal size and geometry, a photon has a 50% chance of passing through each slit (the probabilities can be adjusted, for example by changing the width of one slit). One of the basic results of quantum physics is that the profile of the light through both slits is not the same as the sum of the profiles of the light through each slit separately. In general, it is not possible to say which slit a given photon went through, and attempting to make that measurement changes the answer. In this situation, case 3 of the above post seems to apply.
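The non-additivity described above is easy to exhibit with amplitudes. This is a minimal sketch assuming equal-amplitude paths and a relative phase between them, not a physical model of the slit geometry:

```python
import cmath

def intensity_both(phase):
    """Intensity with both slits open: amplitudes add, then we square."""
    a1 = 1 / 2**0.5                       # amplitude via slit 1
    a2 = cmath.exp(1j * phase) / 2**0.5   # amplitude via slit 2, phase-shifted
    return abs(a1 + a2) ** 2

def intensity_sum(phase):
    """What classical mixing would predict: probabilities add directly."""
    return abs(1 / 2**0.5) ** 2 + abs(cmath.exp(1j * phase) / 2**0.5) ** 2

# At zero relative phase the slits interfere constructively (intensity 2,
# not the classical 1); at phase pi they cancel completely (intensity 0).
```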

My point is that the post's question can't be answered for probabilities in general. It depends.

2 and 4 are much the same if you only care about worlds you are in.

*0 points [-]Could you elaborate on what it means to have a given amount of "care" about a world? For example, suppose that I assign (or ought to assign) probability 0.5 to a coin's coming up heads. How do you translate this probability assignment into language involving amounts of care for worlds?

You care equally for your selves that see heads and your selves that see tails. If you don't care what happens to you after you see heads, then you would assign probability one to tails. Of course, you'd be wrong in about half the worlds, but hey, no skin off *your* nose. *You're* the one who sees tails. Those other guys ... they don't matter.

*3 points [-] A bizarre interpretation.

For example, caring about "living until tomorrow" does not normally mean assigning a zero probability to death in the interim. If anything that would tend to make you fearless - indifferent to whether you stepped in front of a bus or not - the very opposite of what we normally mean by "caring" about some outcome.

Thanks. That makes it a lot clearer.

It seems like this "caring" could be analyzed a lot more, though. For example, suppose I were an altruist who continued to care about the "heads" worlds even after I learned that I'm not in them. Wouldn't I still assign probability ~1 to the proposition that the coin came up tails in my own world? What does that probability assignment of ~1 mean in that case?

I suppose the idea is that a probability captures not only how much I care about a world, but also how much I think that I can influence that world by acting on my values.
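On that reading, the probability-as-caring view just relabels the weights in an expected-utility sum. A minimal sketch (the function and the numbers are my own illustration, not from the thread):

```python
def expected_utility(care_weights, utilities):
    """Expected utility as a care-weighted sum over worlds (interpretation 4).

    care_weights: how much we care about each world (normalized here, so
    they behave like probabilities); utilities: an action's payoff in each world.
    """
    total = sum(care_weights)
    return sum(w * u for w, u in zip(care_weights, utilities)) / total

# Equal care for the heads-world and the tails-world reproduces the familiar
# 0.5/0.5 expectation; caring only about tails-worlds acts exactly like
# assigning probability 1 to tails, as described above.
eu_equal = expected_utility([1, 1], [10, 0])       # weights act like 0.5/0.5
eu_tails_only = expected_utility([0, 1], [10, 0])  # weights act like 0/1
```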

See http://lesswrong.com/lw/15m/towards_a_new_decision_theory/ for more details. Many of my later posts can be considered explanations/justifications for the "design choices" I made in that post.

*0 points [-] The post would be much better if a definition of "possible world" were given. It would also help to define precisely what "real" means.

More or less, I interpret "reality" as all things which can be observed. "Possible", in my language, is something which I can imagine and which doesn't contradict facts that I already know. This is a somewhat subjective definition, but possibility obviously depends on subjective knowledge. I have flipped a coin. Before I looked at the result, it was possible that it came up heads. After I looked at it, it's clear that it came up tails; heads is impossible.

Needless to say, people rarely imagine whole worlds. Rather, they use the word "possible" when speculating about unknown parts of this world. Which may be confusing, since our intuitive understanding of the word doesn't match its use.

Even if defined somehow objectively (e.g. *a possible world is any world isomorphic to a formal system with properties X*), it seems almost obvious that real world(s) and possible worlds are different categories. If not, there is no need for distinct names for them.

So before creating theories about what probability means, I suggest we unify the language. These things have been discussed here several times already, but I don't think there is a consensus on the interpretation of "possible", "real", "world", or "arbitrary". And, after all, I am not sure whether "probability" even should be interpreted using these terms. It almost feels like "probability" is a more fundamental term than "possible" or "arbitrary".

I must admit that I am biased against "possible worlds" and similar phrases, because they tend to appear mostly in theological and philosophical discussions whose rather empty conclusions are dissatisfying. I am afraid there are no guidelines strong enough to keep such thinking within the limits of rationality.