
Comment author: kmccarty 25 April 2011 07:42:05AM 6 points

I think the temptation is very strong to notice the distinction between the elemental nature of raw sensory inputs and the cognitive significance they bear. And this is so, and is useful to do, precisely to the extent that the cognitive significance varies with context and background knowledge, such as light levels and perspective, because those serve as dynamically updated calibrations of cognitive significance. But these calibrations become transparent with use, so that we see, hear and feel vividly and directly in three dimensions because we have learned that that is the cognitive significance of what we see, hear, feel and navigate through. Subjective experience comes cooked and raw in the same dish. It then takes an analytic effort of abstraction, a painter's eye, to notice that it takes an elliptical shape on a focal plane to induce the visual experience of a round coin on a tabletop. Thus ambiguities, ambivalences and confusions abound about what constitutes the contents of subjective experience.

I'm reminded of an experiment I read about quite some time ago, in a very old Scientific American I think, in which (IIRC) psychology subjects were fitted with goggles containing prisms that flipped their visual fields upside down. They wore them during all waking hours for upwards of a month. When they first put them on, they could barely walk at all without collapsing in a heap because of the severe navigational difficulties. After some time, the visuomotor circuits in their brains adapted, and some subjects were even able to re-learn how to ride a bike with the goggles on. Once they could navigate their world more or less normally, they were asked whether at any time their visual field ever "flipped over" so that things started looking "right side up" again. No, there was no change; things looked the same as when they first put the goggles on. So then things still looked "upside down"? After a while, the subjects started insisting that the question made no sense, and that they didn't know how to answer it. Nothing changed about their visual fields; they just got used to them and could successfully navigate in them; the effect became transparent.

(Until they took the goggles off after the experiment ended. And then they were again seriously disoriented for a time, though they recovered quickly.)

Comment author: AlephNeil 22 February 2011 05:53:04PM 1 point

I can't follow this. If "Tuesday exists" isn't indexical, then it's exactly as true on Monday as it is on Tuesday, and furthermore as true everywhere and for everyone as it is for anyone.

Well, in my toy model of the Doomsday Argument, there's only a 1/2 chance that Tuesday exists, and the only way that a person can know that Tuesday exists is to be alive on Tuesday. Do you still think there's a problem?

Indeed, unless you work within the confines of a finite toy model.

Even in toy models like Sleeping Beauty we have to somehow choose between SSA and SIA (which are precisely two rival methods for deriving centered from uncentered distributions).
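
Here is one way to render those two rival methods as code (a sketch of my own, using the toy model from my earlier comment; SSA splits each world's prior equally among its observers, while SIA weights every observer equally across worlds and renormalizes):

    # Toy model: heads = one observer "M" (Monday);
    # tails = observers "M" (Monday) and "T" (Tuesday).
    worlds = {"heads": (0.5, ["M"]), "tails": (0.5, ["M", "T"])}

    def ssa(worlds):
        # SSA: split each world's prior probability equally among its observers.
        return {(w, o): p / len(obs)
                for w, (p, obs) in worlds.items() for o in obs}

    def sia(worlds):
        # SIA: weight each (world, observer) pair by the world's prior and
        # renormalize, so observer-rich worlds gain probability.
        weights = {(w, o): p for w, (p, obs) in worlds.items() for o in obs}
        total = sum(weights.values())
        return {k: v / total for k, v in weights.items()}

    def p_heads_given_monday(dist):
        # Posterior that the coin is heads, given "I am the Monday observer".
        monday = {k: v for k, v in dist.items() if k[1] == "M"}
        return monday[("heads", "M")] / sum(monday.values())

    print(p_heads_given_monday(ssa(worlds)))  # 0.666...: SSA's doomsday shift
    print(p_heads_given_monday(sia(worlds)))  # 0.5: SIA leaves the coin alone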

What non-arbitrary reason is there not to start with centered worlds and try to derive a distribution over uncentered ones? In fact, isn't that the direction scientific method works in?

That's a very good, philosophically deep question! Like many LessWrongers, I'm what David Chalmers would call a "Type-A materialist", which means that I deny the existence of "subjective facts" which aren't in some way reducible to objective facts.

Therefore, I think that centered worlds can be regarded in one of two ways: (i) as nonsense, or (ii) as just a peculiar kind of uncentered world: a "centered world" really just means an uncentered world that happens to contain an ontologically basic, causally inert 'pointer' towards some being, and an ontologically basic, causally inert catalogue of its 'mental facts'. However, because a "center" is causally inert, we can never acquire any evidence that the world has a "center".

(I'd like to say more but really this needs a lot more thought and I can see I'm already starting to ramble...)

Comment author: kmccarty 25 February 2011 07:04:48AM 2 points

I'm what David Chalmers would call a "Type-A materialist", which means that I deny the existence of "subjective facts" which aren't in some way reducible to objective facts.

The concerns Chalmers wrote about focused on the nature of phenomenal experience, and the traditional dichotomy between subjective and objective in human experience. That distinction draws a dividing line way off to the side of what I'm interested in. My main concern isn't with ineffable consciousness; it's with cognitive processing of information, information defined as that which distinguishes possibilities, reduces uncertainty and can have behavioral consequences. Consequences for what/whom? Situated epistemic agents, which I take as ubiquitous constituents of the world around us, and not just sentient life-forms like ourselves. Situated agents that process information don't need to be very high on the computational hierarchy in order to be able to interact with the world as it is, use representations of the world as they take it to be, and entertain possibilities about how well their representations conform to what they are intended to represent. The old 128MB 286 I had in the corner, too underpowered to run even a current version of Linux, was powerful enough to implement an instantiation of a situated Bayesian agent. I'm completely fine with stipulating that it had about as much phenomenal or subjective experience as a chunk of pavement. But I think there are useful distinctions totally missed by Chalmers' division (which I'm sure he's aware of, but not concerned with in the paper you cite) between what you might call objective facts and what you might call "subjective facts", if by the latter you include essentially indexical and contextual information, such as de se and de dicto information, as well as de re propositions.

Therefore, I think that centered worlds can be regarded in one of two ways: (i) as nonsense, or (ii) as just a peculiar kind of uncentered world: a "centered world" really just means an uncentered world that happens to contain an ontologically basic, causally inert 'pointer' towards some being, and an ontologically basic, causally inert catalogue of its 'mental facts'. However, because a "center" is causally inert, we can never acquire any evidence that the world has a "center".

(On Lewis's account, centered worlds are generalizations of uncentered ones, which are contained in them as special cases.) From the point of view of a situated agent, centered worlds are epistemologically prior, about as patently obvious as the existence of "True", "False" and "Don't Know", and uncentered worlds are secondary: synthesized, hypothesized and inferred. The process of converting limited indexical information into objective, universally valid knowledge is where all the interesting stuff happens. That's what the very idea of "calibration" is about. Whether they (centered worlds or the other kind) are ontologically prior is something it's just too soon for me to tell, but I feel uncomfortable prejudging the issue on such strict criteria without a more detailed exploration of the territory outside the walled garden of God's Own Library of Eternal Verity. In other words, with respect to that wall, I don't see warrant flowing from inside out, I see it flowing from outside in. I suppose that's in danger of making me an idealist, but I'm trying to be a good empiricist.

Comment author: AlephNeil 21 February 2011 09:52:30PM 1 point

If that wasn't the event that entered into the Bayesian calculation, what was?

The Bayesian calculation only needs to use the event "Tuesday exists" which is non-indexical (though you're right - it is entailed by "today is Tuesday").

The problem with indexical events is that our prior is a distribution over possible worlds, and there doesn't seem to be any non-arbitrary way of deriving a distribution over centered worlds from a distribution over uncentered ones. (E.g. Are all people equally likely regardless of lifespan, brain power, state of wakefulness etc.? What if people are copied and the copies diverge from one another? Where does the first 'observer' appear in the tree of life? etc.)

Comment author: kmccarty 22 February 2011 04:53:32AM 1 point

The Bayesian calculation only needs to use the event "Tuesday exists"

I can't follow this. If "Tuesday exists" isn't indexical, then it's exactly as true on Monday as it is on Tuesday, and furthermore as true everywhere and for everyone as it is for anyone.

there doesn't seem to be any non-arbitrary way of deriving a distribution over centered worlds from a distribution over uncentered ones.

Indeed, unless you work within the confines of a finite toy model. But why go in that direction? What non-arbitrary reason is there not to start with centered worlds and try to derive a distribution over uncentered ones? In fact, isn't that the direction scientific method works in?

Comment author: AlephNeil 21 February 2011 02:33:23AM 5 points

I remember you linked me to Radford Neal's paper (pdf) on Full Non-indexical Conditioning. I think FNC is a much nicer way to think about problems like these than SSA and SIA, but I guess you disagree?

To save others from having to wade through the paper, which is rather long, I'll try to explain what FNC means:

First, let's consider a much simpler instance of the Doomsday Argument: At the beginning of time, God tosses a coin. If heads then there will only ever be one person (call them "M"), who is created, matures and dies on Monday, and then the world ends. If tails then there will be two people, one ("M") who lives and dies on Monday and another ("T") on Tuesday. As this is a Doomsday Argument, we don't require that T is a copy of M.

M learns that it's Monday but is given no (other) empirical clues about the coin. M says to herself "Well, if the coin is heads then I was certain to find myself here on Monday, but if it's tails then there was a 1/2 chance that I'd find myself experiencing a Tuesday. Applying Bayes' theorem, I deduce that there's a 2/3 chance that the coin is heads, and that the world is going to end before tomorrow."
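
We can check M's arithmetic directly (a minimal sketch; the variable names are just mine):

    # Prior 1/2 on heads; M would find herself on Monday with certainty
    # given heads, but only with probability 1/2 given tails.
    prior_heads, like_heads, like_tails = 0.5, 1.0, 0.5
    posterior_heads = prior_heads * like_heads / (
        prior_heads * like_heads + (1 - prior_heads) * like_tails)
    print(posterior_heads)  # 0.666...: the 2/3 chance of heads M computes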

Now FNC makes two observations:

  1. The event "it is Monday today" is indexical. However, an "indexical event" isn't strictly speaking an event. (Because an event picks out a set of possible worlds, whereas an indexical event picks out a set of possible "centered worlds".) Since it isn't an event, it makes no sense to treat it as 'data' in a Bayesian calculation.
  2. (But apart from that) the best way to do an update is to update on everything we know.

M takes these points to heart. Rather than updating on "it is Monday" she instead updates on "there once was a person who experienced [complete catalogue of M's mental state] and that person lived on Monday."

If we ignore the (at best) remote possibility that T has exactly the same experiences as M (prior to learning which day it is) then the event above is independent of the coin toss. Therefore M should calculate a posterior probability of 1/2 that the coin is heads.

On discovering that it's Monday, M gains no evidence that the end of the world is nigh. Notice that we've reached this conclusion independently of decision theory.
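
To spell out the cancellation (again a sketch of my own; p stands for the assumed chance that a given person's experiences exactly match M's catalogue):

    # FNC's datum is "someone with M's exact mental state lived on Monday".
    # Under heads the Monday person matches with probability p; under tails
    # the Monday person is still the only candidate, because the datum names
    # Monday, so the likelihood is the same p and cancels in Bayes' theorem.
    def fnc_posterior_heads(p, prior_heads=0.5):
        like_heads, like_tails = p, p
        return prior_heads * like_heads / (
            prior_heads * like_heads + (1 - prior_heads) * like_tails)

    print(fnc_posterior_heads(1e-12))  # 0.5, for any p: no doomsday shift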

If M is 'altruistic' towards T, valuing him as much as she values herself, then she should be prepared to part with one cube of chocolate in exchange for a guarantee that he'll get two if he exists. If M is 'selfish' then the exchange rate changes from 1:2 to 1:infinity. These exchange rates are not probabilities. It would be very wrong to say something like "the probability that M gives to T's existence only makes sense when we specify M's utility function, and in particular it changes from 1/2 to 0 if M switches from 'altruistic' to 'selfish'".
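
The asymmetry sits in the utilities, not the credences, as a quick expected-value sketch (my own framing, assuming the post-update credence of 1/2) shows:

    p_T_exists = 0.5  # M's post-update credence that the coin was tails

    # Altruistic M counts T's chocolate as her own: paying 1 cube for a
    # guaranteed 2 cubes to T (if he exists) is exactly break-even at 1:2.
    eu_altruistic = -1 + p_T_exists * 2   # = 0.0

    # Selfish M gives T's chocolate no weight, so no finite payout to T
    # compensates her 1-cube cost: the 1:infinity 'rate'. Her credence that
    # T exists is 1/2 in both cases; only the utility function changed.
    eu_selfish = -1 + p_T_exists * 0      # = -1.0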

Comment author: kmccarty 21 February 2011 05:42:09AM 0 points

I suppose I'm being obtuse about this, but please help me find my way through this argument.

  1. The event "it is Monday today" is indexical. However, an "indexical event" isn't strictly speaking an event. (Because an event picks out a set of possible worlds, whereas an indexical event picks out a set of possible "centered worlds".) Since it isn't an event, it makes no sense to treat it as 'data' in a Bayesian calculation.

Isn't this argument confounded by the observation that an indexical event "It is Tuesday today", in the process of ruling out several centered possible worlds--the ones occurring on Monday--also happens to rule out an entire uncentered world? If it's not an event, how does it make sense to treat it as data in a Bayesian calculation that rules out Heads? If that wasn't the event that entered into the Bayesian calculation, what was?

Comment author: kmccarty 23 December 2010 10:05:48PM 1 point

On further reflection, both Ancestor and each Descendant can consider the proposition P(X) = "X is a descendant & X is a lottery winner". Given the setup, Ancestor can quantify over X, and assign probability 1/N to each instance. That's how the statement {"I" will win the lottery with probability 1} is to be read, in conjunction with a particular analysis of personal identity that warrants it. This would be the same proposition each descendant considers, and also assigns probability 1/N to. On this way of looking at it, both Ancestor and each descendant are in the same epistemic state, with respect to the question of who will win the lottery.

OK, so far so good. This same way of looking at things, with its prediction about the probabilities for descendants, is one I tried to apply to the Sleeping Beauty problem some months ago; from what I can see, it is an argument for why Beauty is able to assert on Sunday evening what the credence of her future selves should be upon awakening (which is different from her own credence on Sunday evening), and therefore has no reason to change it when she later awakens on various occasions. It didn't seem to get much traction then, probably because it was mixed in with arguments about expected frequencies.

Comment author: nshepperd 22 December 2010 09:31:09AM 0 points

With respect to the descendant "changing their mind" on the probability of winning the lottery: when the descendant says "I will win the lottery", perhaps that is a different statement from the one the ancestor makes when saying "I will win the lottery". For the ancestor, "I" includes all the ancestor's descendants. For descendant X, "I" refers to only X (and their descendants, if any). Hence the sense that there is an update occurring is an illusion; the quotation is the same, the referent is not. There need be no information transferred.

Comment author: kmccarty 23 December 2010 08:06:04PM 1 point

There need be no information transferred.

I didn't quite follow this. From where to where?

But anyway, yes, it's correct that the referents of the two claims aren't the same. This could stand some further clarification as to why. In fact, Descendant's claim makes a direct reference to the individual who utters it at the moment it's uttered, but Ancestor's claim is not about himself in the same way. As you say, he's attempting to refer to all of his descendants, and on that basis to claim identity with whichever particular one of them happens to win the lottery, or not, as the case may be. (As I note above, this is not your usual equivalence relation.) This is an opaque context, and Ancestor's claim fails to refer to a particular individual (and not just because that individual exists only in the future). He can only make a conditional statement: given that X is whoever it is that will win the lottery (or not), the probability that that person will win the lottery (or not) is trivial. He lacks something that would allow him to refer to Descendant outside the scope of the quantifier. Descendant does not lack this; he has what Ancestor did not have--the wherewithal to refer to himself as a definite individual, because he is that individual at the time of the reference.

But a puzzle remains. On this account, Ancestor has no credence that Descendant will win the lottery, because he doesn't have the means to correctly formulate the proposition in which he is to assert a credence, except from inside the scope of a universal quantifier. Descendant does have the means, can formulate the proposition (a de se proposition), and can now assert a credence in it based on his understanding of his situation with respect to the facts he knows. And the puzzle is, Descendant's epistemic state is certainly different from Ancestor's, but it seems it didn't happen through Bayesian updating. Meanwhile, there is an event that Descendant witnessed that served to narrow the set of possible worlds he situates himself in (namely, that he is now numerically distinct from any of the other descendants), but, so the argument goes, this doesn't count as any kind of evidence of anything. It seems to me the basis for requiring diachronic consistency is in trouble.

Comment author: Jack 21 December 2010 08:20:09PM 1 point

So obviously your person doesn't magically transfer from the current copy of you to future copies of you. Rather, those future persons are you because they are psychologically continuous with the present you. Now when you make multiple copies of yourself it isn't right to say that just one of them will be you. You may never experience both of them, but from the perspective of each copy you are their past. So when all million copies of you wake up, all of them will feel like they are the next stage of you. All of them will be right. Given that you know there will be a future stage of you that will win the lottery, how can that copy (the copy that is the future stage of you that has won the lottery) be surprised? The copy has, in its past, a memory of being told that there would be exactly one copy psychologically continuous with his past self. Of course, the winning copy will have some kind of self-awareness, "Oh, I'm that copy", but of course it has a memory of expecting exactly that from the copy that won the lottery.

I may need to be providing a more extensive philosophical context about personal identity for this to make sense, I'm not sure.

Comment author: kmccarty 22 December 2010 08:19:23AM 3 points

I don't think personal identity is a mathematical equivalence relation. Specifically, it's not symmetric: "I'm the same person you met yesterday" actually needs to read "I was the same person you met yesterday"; "I will be the same person tomorrow" is a prediction that may fail (even assuming I survive that long). This yields failures of transitivity: "Y is the same person as X" and "Z is the same person as X" don't get you "Y is the same person as Z".
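
A toy version of the point (purely illustrative, with names of my own invention): read "same person as" as the directed relation "is a continuation of", with copies Y and Z both continuing an earlier X.

    # Y and Z are both copies (continuations) of X.
    continuations = {"Y": "X", "Z": "X"}

    def same_person_as(a, b):
        # True iff a is a continuation of b, or a and b are identical.
        return a == b or continuations.get(a) == b

    print(same_person_as("Y", "X"))  # True
    print(same_person_as("Z", "X"))  # True
    print(same_person_as("Y", "Z"))  # False: transitivity fails
    print(same_person_as("X", "Y"))  # False: symmetry fails as well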

Given that you know there will be a future stage of you that will win the lottery, how can that copy (the copy that is the future stage of you that has won the lottery) be surprised?

It's not the ancestor--he who is certain to have a descendant that wins the lottery--who wins the lottery; it's that one descendant of him who wins it, and not his other one(s). Once a descendant realizes he is just one of the many copies, he then becomes uncertain whether he is the one who will win the lottery, so he will be surprised when he learns whether he is. I think the interesting questions here are

1) Consider the epistemic state of the ancestor. He believes he is certain to win the lottery. There is an argument that he's justified in believing this.

2) Now consider the epistemic state of a descendant, immediately after discovering that he is one of several duplicates, but before he learns anything about which one. There is some sense in which his (the descendant's) uncertainty about whether he (the descendant) will win the lottery has changed from what it was in 1). Aside: in a Bayesian framework, this means having received some information, some evidence on which to update. But the only plausible candidate in sight is the knowledge that he is now just one particular one of the duplicates, not the ancestor anymore (e.g., because he has just awoken from the procedure). But of course, he knew that was going to happen with certainty before, so some deny that he learns anything at all. This seems directly analogous to Sleeping Beauty's predicament.

3) Descendant now learns whether he's the one who's won the lottery. Descendant could not have claimed that with certainty before, so he definitely does receive new information, and updates accordingly (all of them do). There is some sense in which the information received at this point exactly cancels out the information(?) in 2).

A couple points:

Of course, Bayesians can't revise certain knowledge, so the standard analysis gets stuck on square 1. But I don't see that the story changes in any significant way if we substitute "reasonable certainty(epsilon)" throughout, so I'm happy to stipulate if necessary.
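
To make the first point precise, the standard derivation (nothing beyond the definition of conditional probability) runs:

    If $P(A) = 1$, then $P(E \setminus A) \le P(\neg A) = 0$, so for any
    evidence $E$ with $P(E) > 0$:
    $$P(A \mid E) = \frac{P(A \cap E)}{P(E)} = \frac{P(E)}{P(E)} = 1.$$

A credence of exactly 1 cannot be moved by conditioning on anything, which is why the epsilon substitution is needed to get the story off square 1.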

Bayesians have a problem with de se information: "I am here now". The standard framework on which Bayes' Theorem holds deals with de re information. De se and de dicto statements have to be converted into de re statements before they can be processed as evidence. This has to be done via various calibrations that adequately disambiguate possibilities and interpret contexts and occasions: who am I, what time is it, and where am I. This process is often taken for granted, because it usually happens transparently and without error. Except when it doesn't.
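
As a hypothetical sketch of that conversion step (every name here is my own invention, purely for illustration): the agent fills in "I", "here" and "now" from its calibrations before treating the result as data.

    from datetime import datetime, timezone

    calibrations = {"I": "agent_42", "here": "room_7"}  # who and where am I

    def to_de_re(de_se_claim):
        # Replace indexical slots with the agent's calibrated coordinates,
        # and stamp the claim with the current time ("now").
        resolved = {k: calibrations.get(v, v) for k, v in de_se_claim.items()}
        resolved["time"] = datetime.now(timezone.utc).isoformat()
        return resolved

    event = to_de_re({"subject": "I", "location": "here"})
    # Only this resolved, non-indexical event can enter Bayes' theorem as data.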

I may need to be providing a more extensive philosophical context about personal identity for this to make sense, I'm not sure.

I hope you do.

Comment author: kmccarty 21 December 2010 06:52:15AM 0 points

I wonder if this can stand in for the Maher?

Depragmatized Dutch Book Arguments

Comment author: XiXiDu 18 December 2010 05:02:46PM 0 points

I'd be interested to hear your thoughts on why you believe EY is incoherent? I thought that what EY said makes sense. Is the probability of a tautology being true 1? You might think that it is true by definition, but what if the concept is not even wrong, can you absolutely rule out that possibility? Your sense of truth by definition might be mistaken in the same way as the experience of a Déjà vu. The experience is real, but you're mistaken about its subject matter. In other words, you might be mistaken about your internal coherence and therefore assign a probability to something that was never there in the first place. This might be on-topic:

One can certainly imagine an omnipotent being provided that there is enough vagueness in the concept of what “omnipotence” means; but if one tries to nail this concept down precisely, one gets hit by the omnipotence paradox.

Nothing has a probability of 1, including this sentence, as doubt always remains, or does it? It's confusing for sure; someone with enough intellectual horsepower should write a post on it.

Comment author: kmccarty 19 December 2010 08:02:27AM 3 points

Did I accuse someone of being incoherent? I didn't mean to do that, I only meant to accuse myself of not being able to follow the distinction between a rule of logic (oh, take the Rule of Detachment for instance) and a syntactic elimination rule. In virtue of what do the latter escape the quantum of sceptical doubt that we should apply to other tautologies? I think there clearly is a distinction between believing a rule of logic is reliable for a particular domain, and knowing with the same confidence that a particular instance of its application has been correctly executed. But I can't tell from the discussion if that's what's at play here, or if it is, whether it's being deployed in a manner careful enough to avoid incoherence. I just can't tell yet. For instance,

Conditioning on this tiny credence would produce various null implications in my reasoning process, which end up being discarded as incoherent

I don't know what this amounts to without following a more detailed example.

It all seems to be somewhat vaguely along the lines of what Hartry Field says in his Locke lectures about rational revisability of the rules of logic and/or epistemic principles; his arguments are much more detailed, but I confess I have difficulty following him too.

Comment author: ata 17 December 2010 05:36:10AM 12 points

What should we take for P(X|X) then?

He's addressed that:

The one that I confess is giving me the most trouble is P(A|A). But I would prefer to call that a syntactic elimination rule for probabilistic reasoning, or perhaps a set equality between events, rather than claiming that there's some specific proposition that has "Probability 1".

and then

Huh, I must be slowed down because it's late at night... P(A|A) is the simplest case of all. P(x|y) is defined as P(x,y)/P(y). P(A|A) is defined as P(A,A)/P(A) = P(A)/P(A) = 1. The ratio of these two probabilities may be 1, but I deny that there's any actual probability that's equal to 1. P(|) is a mere notational convenience, nothing more. Just because we conventionally write this ratio using a "P" symbol doesn't make it a probability.

Comment author: kmccarty 17 December 2010 05:45:26PM 4 points

Ah, thanks for the pointer. Someone's tried to answer the question about the reliability of Bayes' Theorem itself too, I see. But I'm afraid I'm going to have to pass on this, because I don't see how calling something a syntactic elimination rule instead of a law of logic saves you from incoherence.
