Comment author: Eliezer_Yudkowsky 10 October 2012 05:50:38AM 2 points [-]

Koan 2:

"Does your rule there forbid epiphenomenalist theories of consciousness - that consciousness is caused by neurons, but doesn't affect those neurons in turn? The classic argument for epiphenomenal consciousness has always been that we can imagine a universe in which all the atoms are in the same place and people behave exactly the same way, but there's nobody home - no awareness, no consciousness, inside the brain. The usual effect of the brain generating consciousness is missing, but consciousness doesn't cause anything else in turn - it's just a passive awareness - and so from the outside the universe looks the same. Now, I'm not so much interested in whether you think epiphenomenal theories of consciousness are true or false - rather, I want to know if you think they're impossible or meaningless a priori based on your rules."

How would you reply?

Comment author: GDC3 13 October 2012 05:29:01PM 1 point [-]

It doesn't rule it out. Unless you're directly observing those epiphenomenal nodes, though, Occam's razor heavily decreases the likelihood of such models, because they make the same predictions with more nodes.

Comment author: Benja 05 October 2012 03:36:19PM *  8 points [-]

Maaay-be, but I'm not at all convinced this is the right way to think about this. Suppose that we can become consistent, and suppose that there is some program P that could in principle be written down that simulates a humanity that has become consistent, and human mathematicians are going through all possible theorems in lexicographical order, and someone is writing down in a simple format every theorem humanity has found to be true -- so simple a format that given the output of program P, you can easily find in it the list of theorems found so far. Thus, there is a relatively simple recursive enumeration of all theorems this version of humanity will ever find -- just run P and output a theorem as soon as it appears on the list.
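The enumeration described above can be sketched in code. This is only a toy illustration under stated assumptions: `simulate_P` is a hypothetical stand-in for running the program P for a given number of steps and reading off the List, not a real simulation of anything.

```python
# Toy sketch of the recursive enumeration described above.
# `simulate_P` is a hypothetical stand-in, not a real program P.

def simulate_P(steps: int) -> list[str]:
    """Pretend to run program P for `steps` steps and return the
    theorems written on the List so far (a toy fixed sequence here)."""
    found = ["0=0", "1=1", "2=2"]  # toy 'List' of theorems
    return found[: min(steps, len(found))]

def enumerate_theorems(max_steps: int = 10):
    """Run P for longer and longer, yielding each theorem the first
    time it appears on the List -- a recursive enumeration."""
    seen = set()
    for steps in range(max_steps):
        for theorem in simulate_P(steps):
            if theorem not in seen:
                seen.add(theorem)
                yield theorem

print(list(enumerate_theorems()))  # ['0=0', '1=1', '2=2']
```

The point is only that "run P and output each theorem as it appears" is a mechanical procedure, so the set of theorems this version of humanity ever writes down is recursively enumerable.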

Now suppose humanity does figure out that it's living in a simulation, and figures out the source code of P. Then it knows its own Gödel sentence. Will this lead to a logical contradiction?

Well, if humanity does write down "our Gödel sentence is true" on the List, then that sentence will be false (because it says something equivalent to "this sentence does not appear on the List"). That actually doesn't make humanity inconsistent, but it makes it wrong (unsound), which we'll take to be just as bad (and we'll get inconsistent if we also write down all logical consequences of this statement + the Peano axioms). So what if humanity does not write it down on the list? Then the statement is true, and it's obvious to us that it's true. Not a logical contradiction -- but it does mean that humanity recognizes some things as true it isn't writing down on its list.

Paul Christiano has written some posts related to this (which he's kindly pointed out to me when I was talking about some related things with him :-)). One way I think about this is by the following thought experiment: Suppose you're taking part in an experiment supposedly testing your ability to understand simple electric circuits. You're put in a room with two buttons, red and green, and above them two LEDs, red and green. When you press one of the buttons, one of the lights will light up; your task is to predict which light will light up and press the button of the same color. The circuit is wired as follows: Pressing the red button makes the green light light up; pressing the green button makes the red light light up. I maintain that you can be able to understand perfectly how the mechanism works, and yet not be able to "prove" this by completing the task.
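The button-and-light setup above can be made concrete with a few lines of code (a minimal sketch; the function names are illustrative, not from any real experiment):

```python
# Sketch of the self-defeating prediction task described above.

def light_for(button: str) -> str:
    """The circuit: pressing one button lights the *other* color."""
    return {"red": "green", "green": "red"}[button]

def attempt(prediction: str) -> bool:
    """Task rules: press the button matching your prediction; you
    'pass' only if the light that comes on matches it too."""
    pressed = prediction
    return light_for(pressed) == prediction

# Perfect understanding of the circuit doesn't help:
# every possible choice fails the task.
assert not attempt("red")
assert not attempt("green")
```

Both attempts fail by construction, which is exactly the point: full understanding of the mechanism doesn't let you "prove" that understanding by completing this particular task.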

So, I'd say that "what humanity wrote on its list" isn't a correct formalization of "what humanity understands to be true". Now, it does seem possible that there still is a formal predicate that captures the meaning of "humanity understands proposition A to be true". If so, I'd expect the problem with recognizing it not to be that it talks about something like large ordinals, but that it talks about the individual neurons in every human mind, and is far too long for humans to process sensibly. (We might be able to recognize that it says lots of things about human neurons, but not that the exact thing it says is "proposition A will at some point be understood by humanity to be true".) If you try to include "human" minds scaled to arbitrarily large sizes (like 3^^^3 and so on), so that any finite-length statement can be comprehended by some large-enough mind, my guess would be that there is no finite-length predicate that can accurately reflect "mind M understands proposition A to be true" for arbitrarily large "human" minds M, though I won't claim strong confidence in this guess.

Comment author: GDC3 07 October 2012 08:41:31PM 0 points [-]

You shouldn't include things we know only by experience as part of our theoretical system, for the purpose of "the human Gödel sentence." At best, learning a theorem from experience would add an axiom, but then our Gödel sentence changes. So if we knew our Gödel sentence, it would become something else.

Comment author: Bruno_Coelho 07 October 2012 01:53:25PM 4 points [-]

With close friends this works: saying "I believe X" signals uncertainty, inviting someone to help with available information. But in public debates, saying plain "X" instead of "I believe X" comes across as more confident and secure.

Comment author: GDC3 07 October 2012 07:14:20PM 8 points [-]

You're right. I think the lesson we should take from all this complexity is to remember that the wording of a sentence is relevant to more than just its truth conditions. Language does a lot more than state facts and ask questions.

Comment author: [deleted] 07 October 2012 09:01:33AM 0 points [-]

I guess it's a question of word usage whether the projective meaning of "blue", which is something like "looks blue under good lighting conditions", should still be applied when it's not caused by reflectance.

What would you call a glass absorbing red/orange/yellow light and letting the rest through?

Comment author: GDC3 07 October 2012 07:11:34PM 4 points [-]

As I understand it, the sky does let red-yellow light through. It scatters blue light and lets red light through relatively unchanged. So it looks red-yellow near the light source and blue everywhere else.

Comment author: CronoDAS 06 October 2012 08:23:20PM 6 points [-]

That's a better explanation than I could come up with.

On a completely irrelevant note, why is "the sky is blue" the standard for "obviously true fact"? The sky is black about half the time, and it's pretty common for it to be white, too.

Comment author: GDC3 07 October 2012 05:13:54AM 1 point [-]

When the sky is white, it's not the sky; it's clouds blocking the sky. When the sky is black, it's just too dark to see the sky. At least that was my intuition before I knew that the sky wasn't some conventionally blue object. I guess it's a question of word usage whether the projective meaning of "blue", which is something like "looks blue under good lighting conditions", should still be applied when it's not caused by reflectance. Though it's not blue from all directions, is it?

Comment author: CronoDAS 06 October 2012 03:24:18PM *  21 points [-]

Saying "I believe X" does seem to have different connotations than simply stating X; I'd be more likely to say "I believe X" when X is controversial, for example.

Comment author: GDC3 06 October 2012 06:52:28PM 21 points [-]

Specifically, they're different because of the pragmatic rule that, in most normal conversations, direct statements should be something your conversation partner will accept. You say "X" when you expect your conversation partner to say something like "oh cool, I didn't know that." You say "I believe X" when they may disagree and your arguments will come later or not at all. "It's true that X" is more complicated; one example of use would be after the proposition X has already come up in conversation as a belief and you want to state it as a fact.

A: "I hear that lots of people are saying the sky is blue." B: "The sky is blue."

The above sounds weird. (Unless you are imagining it with emphasis on "is" which is another way to put emphasis on the truth of the proposition.) "The sky is blue" is being stated without signaling its relationship to the previous conversation so it sounds like new information; A will expect some new proposition and be briefly confused; it sounds like echolalia rather than an answer.

B: "The sky really is blue."

or

B: "It's actually true that the sky is blue."

sounds better in this context.

Comment author: VKS 05 September 2012 07:12:06AM *  5 points [-]

The view, I think, is that anything you can prove immediately off the top of your head is trivial. No matter how much you have to know. So, sometimes you get conditional trivialities, like "this is trivial if you know this and that, but I don't know how to get this and that from somesuch...".

Comment author: GDC3 02 October 2012 06:08:27AM 3 points [-]

Relatedly, a mathematician friend said that he uses "obvious" to mean "there exists a very short proof of it." He has sometimes been known to say things like "I think this is obvious, but I'm not sure why yet."

Comment author: GDC3 15 April 2012 09:31:29PM 2 points [-]

There's a gap in the proof that X and Y cooperate. You may know how to close it, but if it can be closed, it's not obvious how, so the extra steps should be added to the article. More importantly, if it can't be closed, the theorem might not be true.

The gap: We hypothesize that statement S is provable in (system of X). Therefore X will cooperate. This guarantees that T is true, by definition, but not that Y will prove that T is true. Presumably Y can recreate the proof that S is true, but it cannot conclude that X will cooperate unless it can also prove that X will prove it.

I cannot see how to resolve this without stepping out of a specific formal system, which would make Löb's Theorem unusable.
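For reference, the standard statement of Löb's theorem (for a system like PA, with $\Box S$ abbreviating "S is provable in the system") is:

```latex
% Löb's theorem: if the system proves that provability of S implies S,
% then the system proves S outright.
\text{If } \vdash (\Box S \rightarrow S), \text{ then } \vdash S.
```

The question above is essentially about where in the cooperation argument this inference is being applied, and inside whose formal system.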

Am I missing something?

Comment author: pleeppleep 10 April 2012 12:29:15AM 1 point [-]

I meant that the "me" in a different universe is different from me in this one. The distance between universes is not trivial. I might never notice the difference between a million "me"s and a billion, but the overall number of "me"s is significant. If multiple versions of myself live side by side, and one dies, then that one does not really continue living, unless it is replaced. Does that make sense? It's not very easy to word ideas regarding this topic.

Comment author: GDC3 10 April 2012 05:38:26PM 0 points [-]

I suppose you mean they have different positions. But if indistinguishable particles in quantum mechanics can freely switch places with each other whenever, and which is which has no meaning, then what argument do you have that the universe can even keep different versions of you apart itself?

Not very formal, but I'm trying to convey the idea that certain facts that seem important have no actual meaning in the ontology of quantum physics.

Comment author: pleeppleep 06 April 2012 12:00:32AM 2 points [-]

That's all true, but I think the fact that I can't see more than one of their points of view is enough to distinguish between them. The only difference I can think of that pertains to a reductionist universe is location. I know the "inner listener" to be an illusion, but still can't shake the fact that if I died in this universe, but not in another one, then I wouldn't experience "waking up". It's possible that I'm being irrational and misinterpreting the way the brain works, but I think it is clearly observable that I don't feel that I'm experiencing more than one universe, and this feeling is the thing that I care most to preserve.

Comment author: GDC3 09 April 2012 10:02:09PM 3 points [-]

I think you're missing the part where "their points of view" are exactly the same. What would it mean to see more than one of them when they're exactly the same? Are you picturing them lined up next to each other in your field of view so you can count them?

Similarly there is no "I just definitely died" feeling that we know of. (How would we know?) You shouldn't picture "dying and then waking up in another universe." You should picture "I experience passing out knowing I may die, but that there is a least one of me that probably doesn't. So when I wake up it will turn out that I was one of them."

Does this make more sense? I think the barrier to intuition is in just how indistinguishable "indistinguishable" is. You can be a billion exact copies and you'll never notice, because they're exact.
