ABrooks comments on Rationality Quotes April 2012 - Less Wrong

Post author: Oscar_Cunningham 03 April 2012 12:42AM




Comment author: [deleted] 08 April 2012 12:32:26AM 0 points

Well, if the terra incognita has any relationship at all to the thoughts you do understand, such that it could be recognized as part of, or related to, a cognitive state, then it is going to consist of stuff that bears inferential relations to what you do understand. These are relations you can necessarily traverse if the alien can traverse them. Add to that the fact that you've already assumed that the aliens largely share your world, that their beliefs are largely true, and that they are largely rational, and it becomes hard to see how you could justify the assertion at the top of your last post.

And that assertion has, thus far, gone undefended.

Comment author: TheOtherDave 08 April 2012 01:08:13AM 0 points

Well, I justify it by virtue of believing that my brain isn't some kind of abstract, general-purpose thought-having or inferential-relationship-traversing device. It is a specific bit of machinery that evolved to perform specific functions in a particular environment, just like my digestive system. I find it no more plausible that I can necessarily traverse an inferential relationship that an alien mind can traverse than that I can necessarily extract nutrients from a food source that an alien digestive system can digest.

How do you justify your assertion that I can necessarily traverse an inferential relationship if an alien mind is capable of traversing it?

Comment author: [deleted] 08 April 2012 01:26:40AM 0 points

Well, your brain isn't that, but it is only a necessary, not a sufficient, condition on your having thoughts. Understanding a language is both necessary and sufficient, and a language actually is the device you describe. Your competence with your own language ensures the possibility of your traversal in another.

Comment author: TheOtherDave 08 April 2012 02:27:00AM 0 points

Sorry, I didn't follow that at all.

Comment author: [deleted] 08 April 2012 03:17:21AM 0 points

The source of your doubt seemed to be that you didn't think you possessed a general-purpose thought-having and inferential-relationship-traversing device. A brain is not such a device; we agree. But you do have such a device: a language is a general-purpose thought-having and inferential-relationship-traversing device, and you have that too. So, doubt dispelled?

Comment author: TheOtherDave 08 April 2012 04:02:56AM 0 points

Ah! OK, your comment now makes sense to me. Thanks.
Agreed that my not believing that my brain is a general-purpose inferential-relationship-traversing device (hereafter gpirtd) is at the root of my not believing that all thoughts thinkable by any brain are thinkable by mine.
I'm glad we agree that my brain is not a gpirtd.
But you seem to be asserting that English (for example) is a gpirtd.
Can you expand on your reasons for believing that? I can see no justification for that claim, either.
But I do agree that if English were a gpirtd while my brain was not, it would follow that I could infer in English any thought that an alien mind could infer, at the same level of detail that the alien mind could think it, even if my brain was incapable of performing that inference.

Comment author: [deleted] 08 April 2012 04:20:23PM 0 points

So the claim is really that language is a gpirtd, excepting very defective cases (like sign language or something). That language is an inference-relation-traversing device is, I think, pretty clear on the surface of things: logic is that in virtue of which we traverse inference relations (if anything is). This isn't to say that English, or any language, is a system of logic, but only that logic is one of the things language allows us to do.

I think it actually follows from this that language is also a general-purpose thought-having device: thoughts are related, and their content is in large part (or perhaps entirely) constituted, by inferential relations. If we're foundationalists about knowledge, then we think that the content of thoughts is not entirely constituted by inferential relations, but this isn't a serious problem.

If we can get anywhere in a process of translation, it is by assuming we share a world with whatever speaker we're trying to understand. If we don't assume this, then to whatever extent we don't, we can't recognize the gap as conceptual or cognitive. If an alien were reacting partly to facts of the shared world and partly to facts of an unshared world (whatever that means), then to the extent that the alien is acting on the latter facts, we would have to conclude that it is behaving irrationally; the reasons are invisible to us, after all. If we manage to infer from its behavior that it is acting on reasons we don't have immediate access to, then to the extent that we now view the behavior as rational, we now share that part of the world with the alien. We can't decide that behavior is rational while knowing nothing of the action or the content of the reason, just as we can't decide whether a belief is rational, or true, while knowing nothing of its meaning or the facts it aims at.

This last claim is most persuasively argued, I think, by showing that any example we might construct is going to fall apart. So it's here that I want to re-ask my question: what would a thought that we cannot think even look like to us? My claim isn't that there aren't any such thoughts, only that we could never be given reason for thinking that there are.

ETA: as to the question of brains, here I think there is a sense in which there could be thoughts we cannot think: for example, thoughts which take more than a lifetime to think. But this isn't an interesting case, and it's fundamentally remediable. Imagine someone said that there were languages it is impossible for me to understand, and when I pressed him on what he meant, he just pointed out that I do not presently understand Chinese, and that he's about to kill me. He isn't making an interesting point, or one anyone would object to. If that is all the original quote intended, then it seems a bit trivial: the quoted person could have just pointed out that 1000 years ago, no one could have had any thoughts about airplanes.

Comment author: TheOtherDave 08 April 2012 05:11:06PM 0 points

Re: your ETA... agreed that there are thoughts I cannot think in the trivial sense you describe here, where the world is such that the events that would trigger that thought never arise before my death. What is at issue here is not that, but the less trivial claim that there are thoughts I cannot think by virtue of the way my mind works. To repeat my earlier proposed formalization: there can exist a state Sa such that mind A can enter Sa but mind B cannot enter Sa.
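That proposed formalization might be written out as a formula, using a hypothetical predicate Enter(M, S) for "mind M can enter state S":

```latex
\exists\, S_a \;:\; \mathrm{Enter}(A, S_a) \wedge \neg\, \mathrm{Enter}(B, S_a)
```

The Enter predicate is just a label of convenience here; the substance of the claim is the existential quantifier together with the negation on B's side.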

But you seem to also want to declare as trivial all cases where the reason B cannot enter Sa is because of some physical limitation of B, and I have more trouble with that.

I mean, sure, if A can enter Sa in response to some input and B cannot, I expect there to be some physical difference between A and B that accounts for this, and therefore some physical modification that can be made to B to remedy this. So sure, I agree that all such cases are "fundamentally remediable". Worst-case, I transform B into an exact replica of A, and now B can enter state Sa, QED.

I'm enough of a materialist about minds to consider this possible in principle. But I would not agree that, because of this, the difference between A and B is trivial.

Comment author: TheOtherDave 08 April 2012 04:56:55PM 0 points

Well, at the risk of repeating myself in turn, I'll go back to my original example. As an observer I would have reason to believe there were some thoughts involved in that exchange, even if I couldn't think those thoughts.

I understand you to be saying in response that I can necessarily think those thoughts, since I can understand them at some level L1 by virtue of having an awareness of the same world A1 and A2 are interacting with (I agree so far) and that I can therefore understand them at any desired level L2 as long as the aliens themselves can traverse an inference relation between L1 and L2 because I have a language, and languages* are gpirtds (I disagree).

I've asked you why you believe English (for example) is a gpirtd, and you seem to have responded that English (like any non-defective language) allows us to do logic, and logic allows us to traverse inference relations. Did I understand that correctly?

If so, I don't think your response is responsive. I would certainly agree that English (like any language) allows me to perform certain logical operations and therefore to traverse certain inference relations. I would not agree that for all inference relations R, English (or any other language) allows me to traverse R.

I agree that if I'm wrong about that and English (for example) really does allow me to traverse all inference relations, then the rest of your argument holds.

I see no reason to believe that, though.

===

  • Except, you say, for defective cases like sign language. I have absolutely no idea on what basis you judge sign language defective and English non-defective here, or whether you're referring to some specific sign language or the whole class of sign languages. However, I agree with you that sign languages are not gpirtds. (I don't believe English is either.)

Comment author: [deleted] 08 April 2012 05:11:08PM 0 points

Well, at the risk of repeating myself in turn, I'll go back to my original example. As an observer I would have reason to believe there were some thoughts involved in that exchange, even if I couldn't think those thoughts.

Well, I'd like a little more from you: I'd like an example where you are given reason to think that there are thoughts in the air, and reason to think that they are not thoughts you could think. As it stands, I of course have no objection to your example, because the example doesn't go so far as suggesting the latter of the two claims.

So do you think you can come up with such an example? If not, don't you think that counts powerfully against your reasons for thinking that such a situation is possible?

I would not agree that for all inference relations R, English (or any other language) allows me to traverse R.

This is not exactly related to my claim. My claim is that you could never be given a reason for thinking that there are thoughts you cannot think; that is not the same as saying that there are thoughts you cannot think. So likewise, I would claim that you could never, deploying the inference relations available to you, infer that there are inference relations unavailable to you, because if you can infer that they are inference relations, then they are available to you. (ETA: the point here, again, is that you cannot know that something is an inference relation while not knowing what kind of relation it is. Recognizing that something is an inference relation just is recognizing that it is truth-preserving (say), and you could only recognize that by having a grip on the relation that it is.)

It's extremely important to my argument that we keep in full view the fact that I am making an epistemic claim, not a metaphysical one.

Comment author: TheOtherDave 08 April 2012 05:25:19PM 0 points

From an epistemic position, the proposition P1: "Dave's mind is capable of thinking the thought that A1 and A2 shared" is experimentally unfalsifiable. No matter how many times, or in how many different ways, I try to think that thought and fail, that doesn't prove I'm incapable of it; it just means that I haven't yet succeeded.

But each such experiment provides additional evidence against P1. The more times I try and fail, and the more different ways I try and fail, the greater the evidence, and consequently the lower the posterior probability of P1.

If you're simply asserting that that probability can never reach zero, I agree completely.

If you're asserting that that probability can't in practice ever reach epsilon, I mostly agree.

If you're asserting that that probability can't in practice get lower than, say, .01, I disagree.
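The updating described above can be sketched numerically. This is a minimal illustration with assumed numbers (the 0.5 values are hypothetical, not anything claimed in the thread): a capable mind might still fail any given attempt, so each failure is weak evidence against P1 rather than a refutation, and the probability shrinks with repeated failures without ever reaching zero.

```python
# Sketch of Bayesian updating on P1 ("Dave's mind is capable of thinking
# the thought") after repeated failed attempts. All numbers are illustrative.

def update_on_failure(p, p_fail_if_capable=0.5, p_fail_if_incapable=1.0):
    """Return the posterior probability of P1 after one failed attempt."""
    numerator = p * p_fail_if_capable
    denominator = numerator + (1.0 - p) * p_fail_if_incapable
    return numerator / denominator

p = 0.5  # assumed starting probability for P1
for _ in range(20):  # twenty failed attempts in a row
    p = update_on_failure(p)

# The probability falls with every failure but stays strictly positive:
# no finite run of failures drives it to zero, though it can get below
# any practical threshold such as .01.
print(p)
```

With these particular numbers each failure halves the odds on P1, so after twenty failures the probability is about one in a million, yet still nonzero, which is exactly the "zero vs. epsilon vs. .01" distinction drawn above.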

(ETA: In case this isn't clear, I mean here to propose "I repeatedly try to understand in detail the thought underlying A1 and A2's cooperation and I repeatedly fail" as an example of a reason to think that the thought in question is not one I can think.)