In response to Causal Reference
Comment author: Armok_GoB 21 October 2012 04:25:36PM 14 points

Just for the Least Convenient World, what if the zombies build a supercomputer and simulate random universes, and find that in 98% of simulated universes life forms like theirs do have shadow brains, and that the programs for the remaining 2% are usually significantly longer?

In response to comment by Armok_GoB on Causal Reference
Comment author: afeller08 25 October 2012 01:09:40AM *  -1 points

That would strongly indicate that something caused the zombies to write a simulation-generating program that was likely to create simulated shadow brains in most of the simulations. (Say, the compiler's built-in prover for things like type checking was inefficient and left behind a lot of baggage that produced second-tier shadow brains in all but 2% of simulations.) It might cause the zombies to conclude that they probably had shadow brains and start talking about the possibility of shadow brains, but it should be equally likely to do that whether the shadow brains were real or not. (Which means any zombie with a sound epistemology would give no more credence to the existence of shadow brains after the simulation caused other zombies to start talking about them than it would if the discussion had instead originated from a random number generator producing a very large number that, interpreted as a string in some normal encoding for the zombies, happened to spell out a paper discussing shadow brains. Shadow brains in that world should be an idea analogous to Russell's teapot, astrology, or the invisible pink unicorn in our world.)

Now, suppose there were some outside universe capable of looking at all of the universes and seeing some with shadow brains and some without, and suppose that in the universes with shadow brains, zombies were significantly more likely to produce simulations that created shadow brains -- shadow brains similar to their actual ones -- than zombies in the universes without. Then we would be back to seeing exactly what we see when philosophers talk about shadow brains directly: namely, the shadow brains are causing the zombies to imagine shadow brains, which means that the shadow brains aren't really shadow brains, because they are affecting the world (with probability 1).

Either the result of the simulations points to gross inefficiency somewhere (their simulations predicted something that their simulations shouldn't have been able to predict), or to the shadow brains not really being shadow brains, because they are causally impacting the world. (This is slightly more plausible than philosophers' postulating shadow brains correctly for no reason, only because we don't necessarily know that there is anything driving the zombies to produce simulations efficiently; whereas we know in our world that we can assume brains typically produce non-gibberish, because enormous selective pressures have caused brains to create non-gibberish.)

In response to Causal Reference
Comment author: afeller08 25 October 2012 12:52:24AM 1 point

Still, we don't actually know the Real Rules are like that; and so it seems premature to assign a priori zero probability to hypotheses with multi-tiered causal universes.

Maybe I'm misunderstanding something. I've always supposed that we do live in a multi-tiered causal universe. It seems to me that the "laws of physics" are a first tier that affects everything in the second tier (the tier with all of the matter, including us), but that there's nothing we can do here in the matter tier to affect the laws of physics. I've also always assumed that this was how practically everyone who uses the phrase 'laws of physics' uses it.

(I realize you were talking about lower tiers in the line that I quoted, and I certainly agree with the arguments and conclusions you made regarding lower tiers. I just found the remark surprising because I place a very high probability on the belief that we are living in a multi-tier causal universe, and I think that that assignment is highly consistent with everything you said.)

I don't know if I'm nitpicking or if I missed a subtlety somewhere. Either way, I found the rest of this article and this sequence persuasive and insightful. My previous definition of "'whether X is true' is meaningful" was "There is something I might desire to accomplish that I would approach differently if X were true than if X were false," and my justification for it was "Anything distinguishably true or false which my definition omits doesn't matter to me." Your definition and justification seem much more sound.

Comment author: afeller08 24 October 2012 11:04:16PM 0 points

Given that I spend a lot of time programming computers and that I occasionally brainstorm my programs through flow-charts, I was shocked by how long it took me to realize that flow-charts can easily be formalized as something Turing complete. (Generalized: if I am able to regularly use a particular abstraction as a proxy for another abstraction, it makes sense to ask the question, "Are these two ideas equivalent?")
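That equivalence is easy to demonstrate concretely. A minimal sketch (the representation and all names here are made up for illustration): a flow-chart is just a labeled graph of action and decision nodes, and following the arrows gives you sequencing, branching, and loops -- with unbounded state, the raw ingredients of Turing-complete computation. Here a two-node chart runs Euclid's GCD algorithm:

```python
def run_flowchart(chart, state, start="start"):
    """Execute a flow-chart: each node either updates the state and moves
    to the next node, or tests the state and picks one of two arrows."""
    node = start
    while node != "end":
        kind, *rest = chart[node]
        if kind == "action":
            fn, nxt = rest
            state = fn(state)
            node = nxt
        else:  # "decision"
            test, if_true, if_false = rest
            node = if_true if test(state) else if_false
    return state

# Euclid's GCD algorithm drawn as a flow-chart:
#   start: "is b nonzero?" -> yes: step, no: end
#   step:  "(a, b) <- (b, a mod b)" -> back to start
gcd_chart = {
    "start": ("decision", lambda s: s["b"] != 0, "step", "end"),
    "step": ("action", lambda s: {"a": s["b"], "b": s["a"] % s["b"]}, "start"),
}

print(run_flowchart(gcd_chart, {"a": 48, "b": 18})["a"])  # prints 6
```

Nothing in the interpreter is specific to GCD; any box-and-arrow diagram with those two node kinds maps onto a `chart` dictionary the same way.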

Comment author: alex_zag_al 11 October 2012 02:57:32PM 1 point

Doesn't seem to me like the first "believe" you append implies a different meta level, just a different reason for believing. After all, the one who asserts "God exists" also believes God exists.

Or, maybe the way you've set it out, "I believe that God exists" is belief in belief, in which case in the next one, the extra "I believe" just indicates uncertainty.

I think that the general trend that you observed, that you tend to get more meta as you add more "I believes", may be making you miss when the words "I believe" add nothing, or just mean "probably".

Comment author: afeller08 12 October 2012 12:26:26PM *  4 points

I agree with Xachariah's view of semantics. I think that the first 'I believe' does imply a different meta level of belief (often associated with a different reason for believing). His example does a good job of showing how someone can drill down many levels, but the distinction in the first level might be made more clear by considering a more concretely defined belief:

"We're lost" -- "I'm your jungle leader, and I don't have a clue where we are any more."

"I believe we're lost" -- "I'm not leading this expedition. I didn't expect to have a clue where we were going, but it doesn't seem to me like anyone else knows where we are going either."

--

"Sarah won state science fair her senior year of high school" -- "I attended the fair and witnessed her win it."

"I believe that Sarah won state science fair her senior year of high school" -- "She says she did, and she's the best experimentalist I've ever met."

"I believe that I believe that Sarah won state science fair her senior year of high school" -- "She says she did, and I don't believe for one second that she'd make that sort of thing up. That said, she's not, so far as I can tell, particularly good at science, and it shocks me that she might somehow have been able to win."

--

"Parachuting isn't all it's cracked up to be." -- "I've gone parachuting, and frankly, I've gotten bigger adrenaline rushes playing poker."

"I don't believe parachuting's all it's cracked up to be." -- "I haven't gone parachuting. There's no way I would spend $600 for a 4-minute experience when I can't imagine that it's enough fun to justify that."


Without the 'I believe,' what I tend to be saying is: I trust the map because I drew it, and I drew it carefully. With the 'I believe,' I tend to be saying I trust this map because I trust its source, even though I didn't actually create it myself. In the case of the parachuting, I don't know where the map comes from; it's just the one I have.

Placing additional "I believe"s in front of a statement changes what part of the statement you have confidence in.

The statement 'I believe God exists' usually does mean that someone places confidence in eir community's ability to determine if God exists or not rather than placing confidence in the statement itself. Most of the religious people I know would say 'God exists' rather than 'I believe God exists' and most of them believe that they have directly experienced God in some way. However, most of them would say 'I believe the Bible is true' rather than 'the Bible is true' -- and when pressed for why they believe that, they tend to say something along the lines of "I cannot believe that God would allow his people to be generally wrong about something that important" or something else that asserts that their confidence is in their community's ability to determine that 'the Bible is true' rather than their confidence being in the Bible itself. I don't know if this is a very localized phenomenon or not since all of the people I've had this conversation with belong to the same community. It's how I would tend to use the word 'believe' too, but I grew up in this community, so I probably tend to use a lot of words the same way as the people in this community do.

In Xachariah's example the certainty/uncertainty is being placed on the definition of 'believe' at each step past the first one, so the way that the statement changes is significantly different in the second and third applications of 'I believe' than it is in the first. The science fair example applies the 'I believe' pretty much the same way twice.

When I say "Sarah won science fair," I'm claiming that all of the uncertainty lies in my ability to measure and accurately record the event. Her older sister is really good at science too; it's possible that I'm getting the two confused but I very strongly remember it being Sarah who won. On the other hand, I'm extremely confident that I wouldn't give myself the wrong map intentionally -- I have no reason to want to convince myself that Sarah is better at science than she actually is.

That source of uncertainty essentially vanishes when the source of my information becomes Sarah herself. I now have a new source of uncertainty though because she does have a reason to convince me that she is better at science than she actually is. However, I trust the map because it agrees with what I'd expect it to be. I'd still think she was telling the truth about this if she lied to me about other things.

In the third case, I'm once again extremely confident that Sarah won science fair. She told me she did, and she tells the truth. What she's told me does not at all agree with my expectations; I don't really place confidence in the map itself. Instead, I place a great deal of confidence in Sarah's ability to create an accurate map, and a great deal of confidence in her having given me an accurate map. The map seems preposterous to me, but I still think it's accurate, so when someone asks me if I believe that Sarah won science fair, I wince and say "I believe that I believe that Sarah won science fair," and everyone knows what I mean. My statement isn't really "Sarah won science fair." It's "Sarah doesn't lie. Sarah says she won science fair. Therefore, Sarah won science fair." If I later find out that Sarah isn't quite as honest as I think she is, this is the first thing she's told me that I'll stop believing. Unless that happens, I'll continue to believe that she won.
