
I think the temptation to notice the distinction between the elemental nature of raw sensory inputs and the cognitive significance they bear is very strong. And it is, and is useful to indulge, precisely to the extent that the cognitive significance varies with context and background knowledge, light levels, perspective and so on, because those serve as dynamically updated calibrations of cognitive significance. But these calibrations become transparent with use, so that we see, hear and feel vividly and directly in three dimensions because we have learned that that is the cognitive significance of what we see, hear, feel and navigate through. Subjective experience comes cooked and raw in the same dish. It then takes an analytic effort of abstraction, a painter's eye, to notice that it is an elliptical shape on the focal plane that induces the visual experience of a round coin on a tabletop. Thus ambiguities, ambivalences and confusions abound about what constitutes the contents of subjective experience.

I'm reminded of an experiment I read about quite some time ago, in a very old Scientific American I think, in which (IIRC) psychology subjects were fitted with goggles containing prisms that flipped their visual fields upside down. They wore them for upwards of a month during all waking hours. When they first put them on, they could barely walk at all without collapsing in a heap because of the severe navigational difficulties. After some time, the visuomotor circuits in their brains adapted, and some were even able to re-learn how to ride a bike with the goggles on. After they could navigate their world more or less normally, they were asked whether at any time their visual field had ever "flipped over" so that things started looking "right side up" again. No, there was no change; things looked the same as when they first put the goggles on. So then things still looked "upside down"? After a while, the subjects started insisting that the question made no sense, and they didn't know how to answer it. Nothing changed about their visual fields, they just got used to them and could successfully navigate in them; the effect became transparent.

(Until they took the goggles off after the experiment ended. And then they were again seriously disoriented for a time, though they recovered quickly.)

I'm what David Chalmers would call a "Type-A materialist" which means that I deny the existence of "subjective facts" which aren't in some way reducible to objective facts.

The concerns Chalmers wrote about focused on the nature of phenomenal experience, and the traditional dichotomy between subjective and objective in human experience. That distinction draws a dividing line way off to the side of what I'm interested in. My main concern isn't with ineffable consciousness, it's with cognitive processing of information, information defined as that which distinguishes possibilities, reduces uncertainty and can have behavioral consequences. Consequences for what/whom? Situated epistemic agents, which I take as ubiquitous constituents of the world around us, and not just sentient life-forms like ourselves. Situated agents that process information don't need to be very high on the computational hierarchy in order to be able to interact with the world as it is, use representations of the world as they take it to be, and entertain possibilities about how well their representations conform to what they are intended to represent. The old 128MB 286 I had in the corner, too underpowered to run even a current version of Linux, was powerful enough to implement an instantiation of a situated Bayesian agent. I'm completely fine with stipulating that it had about as much phenomenal or subjective experience as a chunk of pavement. But I think there are useful distinctions totally missed by Chalmers' division (which I'm sure he's aware of, but not concerned with in the paper you cite), between what you might call objective facts and what you might call "subjective facts", if by the latter you include essentially indexical and contextual information, such as de se and de dicto information, as well as de re propositions.

Therefore, I think that centered worlds can be regarded in one of two ways: (i) as nonsense or (ii) as just a peculiar kind of uncentered world: A "centered world" really just means an "uncentered world that happens to contain an ontologically basic, causally inert 'pointer' towards some being and an ontologically basic, causally inert catalogue of its 'mental facts'". However, because a "center" is causally inert, we can never acquire any evidence that the world has a "center".

(On Lewis's account, centered worlds are generalizations of uncentered ones, which are contained in them as special cases.) From the point of view of a situated agent, centered worlds are epistemologically prior, about as patently obvious as the existence of "True", "False" and "Don't Know", and the uncentered worlds are secondary, synthesized, hypothesized and inferred. The process of converting limited indexical information into objective, universally valid knowledge is where all the interesting stuff happens. That's what the very idea of "calibration" is about. As for whether they (centered worlds or the other kind) are ontologically prior, it's just too soon for me to tell, but I feel uncomfortable prejudging the issue on such strict criteria without a more detailed exploration of the territory outside the walled garden of God's Own Library of Eternal Verity. In other words, with respect to that wall, I don't see warrant flowing from inside out, I see it flowing from outside in. I suppose that's in danger of making me an idealist, but I'm trying to be a good empiricist.

The Bayesian calculation only needs to use the event "Tuesday exists"

I can't follow this. If "Tuesday exists" isn't indexical, then it's exactly as true on Monday as it is on Tuesday, and furthermore as true everywhere and for everyone as it is for anyone.

there doesn't seem to be any non-arbitrary way of deriving a distribution over centered worlds from a distribution over uncentered ones.

Indeed, unless you work within the confines of a finite toy model. But why go in that direction? What non-arbitrary reason is there not to start with centered worlds and try to derive a distribution over uncentered ones? In fact, isn't that the direction the scientific method works in?

I suppose I'm being obtuse about this, but please help me find my way through this argument.

  1. The event "it is Monday today" is indexical. However, an "indexical event" isn't strictly speaking an event. (Because an event picks out a set of possible worlds, whereas an indexical event picks out a set of possible "centered worlds".) Since it isn't an event, it makes no sense to treat it as 'data' in a Bayesian calculation.

Isn't this argument confounded by the observation that an indexical event "It is Tuesday today", in the process of ruling out several centered possible worlds--the ones centered on Monday--also happens to rule out an entire uncentered world? If it's not an event, how does it make sense to treat it as data in a Bayesian calculation that rules out Heads? If that wasn't the event that entered into the Bayesian calculation, what was?
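To make that concrete, here's a minimal sketch; the thirder-style prior over centered worlds is an assumption of the sketch, not something argued for here. The point is only that treating the indexical as data does double duty: it discards centered worlds and, once you marginalize over the center, it has also discarded an entire uncentered world. It also illustrates the direction I gestured at above, deriving a distribution over uncentered worlds from one over centered worlds.

```python
# Toy model: Sleeping Beauty with the "center" (the day) made explicit.
# The thirder-style prior over centered worlds is an assumption of this
# sketch, not a conclusion.

centered_prior = {
    ("Heads", "Monday"): 1 / 3,
    ("Tails", "Monday"): 1 / 3,
    ("Tails", "Tuesday"): 1 / 3,
}

def condition(dist, predicate):
    """Ordinary Bayesian conditioning: keep worlds satisfying the predicate, renormalize."""
    kept = {w: p for w, p in dist.items() if predicate(w)}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

def to_uncentered(dist):
    """Derive a distribution over uncentered worlds by summing out the center."""
    out = {}
    for (coin, _day), p in dist.items():
        out[coin] = out.get(coin, 0.0) + p
    return out

# Treating the indexical "it is Tuesday today" as data rules out the
# Monday-centered worlds...
posterior = condition(centered_prior, lambda w: w[1] == "Tuesday")

# ...and, marginalizing over the center, it has also ruled out the
# entire uncentered Heads world.
print(to_uncentered(posterior))  # {'Tails': 1.0}
```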

On further reflection, both Ancestor and each Descendant can consider the proposition P(X) = "X is a descendant & X is a lottery winner". Given the setup, Ancestor can quantify over X, and assign probability 1/N to each instance. That's how the statement {"I" will win the lottery with probability 1} is to be read, in conjunction with a particular analysis of personal identity that warrants it. This would be the same proposition each descendant considers, and also assigns probability 1/N to. On this way of looking at it, both Ancestor and each descendant are in the same epistemic state, with respect to the question of who will win the lottery.
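As a sanity check on the numbers, here's a small sketch; the value of N and the "exactly one descendant wins" setup are illustrative assumptions. For any particular instance X the credence in P(X) is 1/N, from either vantage point, while the quantified claim Ancestor is entitled to, that some descendant or other wins, sums to 1.

```python
# Sketch of the Ancestor/Descendant lottery credences.  N and the
# "exactly one of N descendants wins" setup are illustrative assumptions.

from fractions import Fraction

N = 5
descendants = [f"descendant_{i}" for i in range(N)]

# For each particular instance X, the proposition
#   P(X) = "X is a descendant & X is a lottery winner"
# gets credence 1/N, from Ancestor's vantage point and from each
# descendant's alike.
credence = {x: Fraction(1, N) for x in descendants}

# Ancestor's claim {"I" will win the lottery with probability 1} is read
# by quantifying over X: the probability that some descendant or other wins.
print(sum(credence.values()))       # 1
print(credence["descendant_0"])     # 1/5
```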

Ok, so far so good. This same way of looking at things, and the prediction about probability of descendants, is a way of looking at the Sleeping Beauty problem I tried to explain some months ago, and from what I can see is an argument for why Beauty is able to assert on Sunday evening what the credence of her future selves should be upon awakening (which is different from her own credence on Sunday evening), and therefore has no reason to change it when she later awakens on various occasions. It didn't seem to get much traction then, probably because it was also mixed in with arguments about expected frequencies.

There need be no information transferred.

I didn't quite follow this. From where to where?

But anyway, yes, that's correct that the referents of the two claims aren't the same. This could stand some further clarification as to why. In fact, Descendant's claim makes a direct reference to the individual who uttered it at the moment it's uttered, but Ancestor's claim is not about himself in the same way. As you say, he's attempting to refer to all of his descendants, and on that basis claim identity with whichever particular one of them happens to win the lottery, or not, as the case may be. (As I note above, this is not your usual equivalence relation.) This is an opaque context, and Ancestor's claim fails to refer to a particular individual (and not just because that individual exists only in the future). He can only make a conditional statement: given that X is whoever it is that will win the lottery (or not), the probability that that person will win the lottery (or not) is trivial. He lacks something that would allow him to refer to Descendant outside the scope of the quantifier. Descendant does not lack this; he has what Ancestor did not have: the wherewithal to refer to himself as a definite individual, because he is that individual at the time of the reference.

But a puzzle remains. On this account, Ancestor has no credence that Descendant will win the lottery, because he doesn't have the means to correctly formulate the proposition in which he is to assert a credence, except from inside the scope of a universal quantifier. Descendant does have the means, can formulate the proposition (a de se proposition), and can now assert a credence in it based on his understanding of his situation with respect to the facts he knows. And the puzzle is, Descendant's epistemic state is certainly different from Ancestor's, but it seems it didn't happen through Bayesian updating. Meanwhile, there is an event that Descendant witnessed that served to narrow the set of possible worlds he situates himself in (namely, that he is now numerically distinct from any of the other descendants), but, so the argument goes, this doesn't count as any kind of evidence of anything. It seems to me the basis for requiring diachronic consistency is in trouble.

I don't think personal identity is a mathematical equivalence relation. Specifically, it's not symmetric: "I'm the same person you met yesterday" actually needs to read "I was the same person you met yesterday"; "I will be the same person tomorrow" is a prediction that may fail (even assuming I survive that long). This yields failures of transitivity: "Y is the same person as X" and "Z is the same person as X" doesn't get you "Y is the same person as Z".

Given that you know there will be a future stage of you that will win the lottery how can that copy (the copy that is the future stage of you that has won the lottery) be surprised?

It's not the ancestor--he who is certain to have a descendant that wins the lottery--who wins the lottery, it's that one descendant of him who wins it, and not his other one(s). Once a descendant realizes he is just one of the many copies, he then becomes uncertain whether he is the one who will win the lottery, so will be surprised when he learns whether he is. I think the interesting questions here are:

1) Consider the epistemic state of the ancestor. He believes he is certain to win the lottery. There is an argument that he's justified in believing this.

2) Now consider the epistemic state of a descendant, immediately after discovering that he is one of several duplicates, but before he learns anything about which one. There is some sense in which his (the descendant's) uncertainty about whether he (the descendant) will win the lottery has changed from what it was in 1). Aside: in a Bayesian framework, this means having received some information, some evidence on which to update. But the only plausible candidate in sight is the knowledge that he is now just one particular one of the duplicates, not the ancestor anymore (e.g., because he has just awoken from the procedure). But of course, he knew that was going to happen with certainty before, so some deny that he learns anything at all. This seems directly analogous to Sleeping Beauty's predicament.

3) Descendant now learns whether he's the one who's won the lottery. Descendant could not have claimed that with certainty before, so he definitely does receive new information, and updates accordingly (all of them do). There is some sense in which the information received at this point exactly cancels out the information(?) in 2).
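One way to make the "exactly cancels out" hunch concrete is to do the accounting in bits; the framing below is my own gloss on it, with an arbitrary N, not something the argument commits to.

```python
# Uncertainty, in bits, about the relevant yes/no question at each step.
# N is an illustrative assumption.

from math import log2

N = 5

def binary_entropy(p):
    """Uncertainty (bits) of a yes/no question whose answer is 'yes' with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1 - p) * log2(1 - p))

h1 = binary_entropy(1.0)      # step 1: Ancestor, "a future stage of me wins" (certain)
h2 = binary_entropy(1 / N)    # step 2: Descendant, pre-announcement, "I (this copy) win"
h3 = binary_entropy(1.0)      # step 3: Descendant, post-announcement (settled)

# The information delivered by the announcement at step 3 ...
info_at_step_3 = h2 - h3
# ... is exactly the uncertainty that appeared at step 2, with no
# obvious act of Bayesian conditioning to account for it.
print(info_at_step_3 == h2 - h1)  # True
```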

A couple points:

Of course, Bayesians can't revise certain knowledge, so the standard analysis gets stuck on square 1. But I don't see that the story changes in any significant way if we substitute "reasonable certainty(epsilon)" throughout, so I'm happy to stipulate if necessary.

Bayesians have a problem with de se information: "I am here now". The standard framework on which Bayes' Theorem holds deals with de re information. De se and de dicto statements have to be converted into de re statements before they can be processed as evidence. This has to be done via various calibrations that adequately disambiguate possibilities and interpret contexts and occasions: who am I, what time is it, and where am I. This process is often taken for granted, because it usually happens transparently and without error. Except when it doesn't.
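For illustration only, here is a bare-bones sketch of the kind of calibration step I mean; the Context fields and the crude string substitution are stand-ins, not a standard formalism. The de se report has to be resolved against who, when and where before it can figure as de re evidence, and the result is only as good as those calibrations.

```python
# Illustrative only: a crude "calibration" that resolves an indexical
# report against a context before it can serve as de re evidence.

from dataclasses import dataclass

@dataclass
class Context:
    agent: str      # who am I
    time: str       # what time is it
    location: str   # where am I

def resolve_de_se(report: str, ctx: Context) -> str:
    """Turn a de se report like 'I am here now' into a de re statement."""
    return (report
            .replace("I am", f"{ctx.agent} is")
            .replace("here", f"in {ctx.location}")
            .replace("now", f"at {ctx.time}"))

ctx = Context(agent="Beauty", time="9 a.m. on Monday", location="the lab")
print(resolve_de_se("I am here now", ctx))
# -> "Beauty is in the lab at 9 a.m. on Monday"
# The conversion is only as good as the calibrations behind ctx; get the
# agent, the time or the place wrong and the resulting "evidence" is wrong.
```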

I may need to be providing a more extensive philosophical context about personal identity for this to make sense, I'm not sure.

I hope you do.

Did I accuse someone of being incoherent? I didn't mean to do that, I only meant to accuse myself of not being able to follow the distinction between a rule of logic (oh, take the Rule of Detachment for instance) and a syntactic elimination rule. In virtue of what do the latter escape the quantum of sceptical doubt that we should apply to other tautologies? I think there clearly is a distinction between believing a rule of logic is reliable for a particular domain, and knowing with the same confidence that a particular instance of its application has been correctly executed. But I can't tell from the discussion if that's what's at play here, or if it is, whether it's being deployed in a manner careful enough to avoid incoherence. I just can't tell yet. For instance,

Conditioning on this tiny credence would produce various null implications in my reasoning process, which end up being discarded as incoherent

I don't know what this amounts to without following a more detailed example.

It all seems to be somewhat vaguely along the lines of what Hartry Field says in his Locke lectures about rational revisability of the rules of logic and/or epistemic principles; his arguments are much more detailed, but I confess I have difficulty following him too.

Ah, thanks for the pointer. Someone's tried to answer the question about the reliability of Bayes' Theorem itself too, I see. But I'm afraid I'm going to have to pass on this, because I don't see how calling something a syntactic elimination rule instead of a law of logic saves you from incoherence.
