I can't follow this. If "Tuesday exists" isn't indexical, then it's exactly as true on Monday as it is on Tuesday, and furthermore as true everywhere and for everyone as it is for anyone.
Well, in my toy model of the Doomsday Argument, there's only a 1/2 chance that Tuesday exists, and the only way that a person can know that Tuesday exists is to be alive on Tuesday. Do you still think there's a problem?
Indeed, unless you work within the confines of a finite toy model.
Even in toy models like Sleeping Beauty we have to somehow choose between SSA and SIA (which are precisely two rival methods for deriving centered distributions from uncentered ones).
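To make the disagreement concrete, here is a minimal sketch of one way of filling in such a toy model (an assumption on my part, not necessarily the exact model described above): a fair coin decides whether Tuesday exists, there is one observer-moment per existing day, and we ask what someone who wakes up on Monday should believe about whether Tuesday exists.

```python
from fractions import Fraction

# Hypothetical toy model (an assumed setup, for illustration only):
# a fair coin decides whether Tuesday exists. In world "long" there are
# observer-moments on Monday and Tuesday; in world "short", only on Monday.
prior = {"long": Fraction(1, 2), "short": Fraction(1, 2)}
observer_moments = {"long": ["Mon", "Tue"], "short": ["Mon"]}

def ssa_posterior(evidence_day):
    """SSA: treat myself as a random sample from the observer-moments
    within my own world."""
    weights = {
        w: prior[w] * Fraction(observer_moments[w].count(evidence_day),
                               len(observer_moments[w]))
        for w in prior
    }
    total = sum(weights.values())
    return {w: weights[w] / total for w in weights}

def sia_posterior(evidence_day):
    """SIA: treat myself as a random sample from all possible observer-
    moments, so worlds get weighted by how many matching observer-moments
    they contain."""
    weights = {w: prior[w] * observer_moments[w].count(evidence_day)
               for w in prior}
    total = sum(weights.values())
    return {w: weights[w] / total for w in weights}

# Waking up on Monday:
print(ssa_posterior("Mon"))  # SSA: P(long) = 1/3, a Doomsday-style shift
print(sia_posterior("Mon"))  # SIA: P(long) = 1/2, the shift cancels
```

The two rules really do come apart: SSA shifts credence toward the "short" world on Monday (the Doomsday-style update), while SIA's weighting by observer count exactly cancels that shift. Waking on Tuesday, of course, makes "long" certain under both rules.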
What non-arbitrary reason is there not to start with centered worlds and try to derive a distribution over uncentered ones? In fact, isn't that the direction the scientific method works in?
That's a very good, philosophically deep question! Like many LessWrongers, I'm what David Chalmers would call a "Type-A materialist", which means that I deny the existence of "subjective facts" that aren't in some way reducible to objective facts.
Therefore, I think that centered worlds can be regarded in one of two ways: (i) as nonsense, or (ii) as just a peculiar kind of uncentered world: a "centered world" really just means an "uncentered world that happens to contain an ontologically basic, causally inert 'pointer' towards some being, together with an ontologically basic, causally inert catalogue of its 'mental facts'". However, because a "center" is causally inert, we can never acquire any evidence that the world has one.
(I'd like to say more but really this needs a lot more thought and I can see I'm already starting to ramble...)
I think the temptation is very strong to notice the distinction between the elemental nature of raw sensory inputs and the cognitive significance they bear. And this is so, and is useful to do, precisely to the extent that the cognitive significance varies with context and background knowledge (light levels, perspective, and so on), because those serve as dynamically updated calibrations of cognitive significance. But these calibrations become transparent with use, so that we see, hear and feel vividly and directly in three dimensions because we have learned that this is the cognitive significance of what we see, hear, feel and navigate through. Subjective experience comes cooked and raw in the same dish. It then takes an analytic effort of abstraction, a painter's eye, to notice that it takes an elliptical shape on a focal plane to induce the visual experience of a round coin on a tabletop. Thus ambiguities, ambivalences and confusions abound about what constitutes the contents of subjective experience.
I'm reminded of an experiment I read about quite some time ago, in a very old Scientific American I think, in which (IIRC) psychology subjects were fitted with goggles containing prisms that flipped their visual fields upside down. They wore them for upwards of a month during all waking hours. When they first put them on, they could barely walk at all without collapsing in a heap because of the severe navigational difficulties. After some time, the visuomotor circuits in their brains adapted, and some subjects were even able to re-learn how to ride a bike with the goggles on. Once they could navigate their world more or less normally, they were asked whether at any time their visual field ever "flipped over" so that things started looking "right side up" again. No, there was no change; things looked the same as when they first put the goggles on. So things still looked "upside down", then? After a while, the subjects started insisting that the question made no sense and they didn't know how to answer it. Nothing changed about their visual fields; they just got used to them and could successfully navigate in them; the effect became transparent.
(Until they took the goggles off after the experiment ended. And then they were again seriously disoriented for a time, though they recovered quickly.)