ABrooks comments on Rationality Quotes April 2012 - Less Wrong
Well, your brain isn't that, but it's only a necessary-but-insufficient condition on your having thoughts. Understanding a language is both necessary and sufficient, and a language actually is the device you describe. Your competence with your own language ensures the possibility of your traversal in another.
Sorry, I didn't follow that at all.
The source of your doubt seemed to be that you didn't think you possessed a general-purpose thought-having and inferential-relationship-traversing device. A brain is not such a device, we agree. But you do have such a device. A language is a general-purpose thought-having and inferential-relationship-traversing device, and you have that too. So, doubt dispelled?
Ah! OK, your comment now makes sense to me. Thanks.
Agreed that my not believing that my brain is a general-purpose inferential relationship traversing device (hereafter gpirtd) is at the root of my not believing that all thoughts thinkable by any brain are thinkable by mine.
I'm glad we agree that my brain is not a gpirtd.
But you seem to be asserting that English (for example) is a gpirtd.
Can you expand on your reasons for believing that? I can see no justification for that claim, either.
But I do agree that if English were a gpirtd while my brain was not, it would follow that I could infer in English any thought that an alien mind could infer, at the same level of detail that the alien mind could think it, even if my brain was incapable of performing that inference.
So the claim is really that language is a gpirtd, excepting very defective cases (like sign-language or something). That language is an inference relation traversing device is, I think, pretty clear on the surface of things: logic is that in virtue of which we traverse inference relations (if anything is). This isn't to say that English, or any language, is a system of logic, but only that logic is one of the things language allows us to do.
I think it actually follows from this that language is also a general-purpose thought-having device: thoughts are related, and their content is in large part (or perhaps entirely) constituted, by inferential relations. If we're foundationalists about knowledge, then we think that the content of thoughts is not entirely constituted by inferential relations, but this isn't a serious problem. If we can get anywhere in a process of translation, it is by assuming we share a world with whatever speaker we're trying to understand. If we don't assume this, and to whatever extent we don't assume this, just to that extent we can't recognize the gap as conceptual or cognitive. If an alien were reacting in part to facts of the shared world, and in part to facts of an unshared world (whatever that means), then just to the extent that the alien is acting on the latter facts, to that extent would we have to conclude that it is behaving irrationally. The reasons are invisible to us, after all. If we manage to infer from its behavior that it is acting on reasons we don't have immediate access to, then just to the extent that we now view its behavior as rational, we now share that part of the world with it. We can't decide that behavior is rational while knowing nothing of the action or the content of the reason, in the same sense that we can't decide whether or not a belief is rational, or true, while knowing nothing of its meaning or the facts it aims at.
This last claim is most persuasively argued, I think, by showing that any example we might construct is going to fall apart. So it's here that I want to re-ask my question: what would a thought that we cannot think even look like to us? My claim isn't that there aren't any such thoughts, only that we could never be given reason for thinking that there are.
ETA: as to the question of brains, here I think there is a sense in which there could be thoughts we cannot think. For example, thoughts which take more than a lifetime to think. But this isn't an interesting case, and it's fundamentally remediable. Imagine someone said that there were languages that are impossible for me to understand, and when I pressed him on what he meant, he just pointed out that I do not presently understand Chinese, and that he's about to kill me. He isn't making an interesting point, or one anyone would object to. If that is all the original quote intended, then it seems a bit trivial: the quoted person could have just pointed out that 1000 years ago, no one could have had any thoughts about airplanes.
Re: your ETA... agreed that there are thoughts I cannot think in the trivial sense you describe here, where the world is such that the events that would trigger that thought never arise before my death. What is at issue here is not that, but the less trivial claim that there are thoughts I cannot think by virtue of the way my mind works. To repeat my earlier proposed formalization: there can exist a state Sa such that mind A can enter Sa but mind B cannot enter Sa.
But you seem to also want to declare as trivial all cases where the reason B cannot enter Sa is because of some physical limitation of B, and I have more trouble with that.
I mean, sure, if A can enter Sa in response to some input and B cannot, I expect there to be some physical difference between A and B that accounts for this, and therefore some physical modification that can be made to B to remedy this. So sure, I agree that all such cases are "fundamentally remediable". Worst-case, I transform B into an exact replica of A, and now B can enter state Sa, QED.
I'm enough of a materialist about minds to consider this possible in principle. But I would not agree that, because of this, the difference between A and B is trivial.
Well, at the risk of repeating myself in turn, I'll go back to my original example. As an observer I would have reason to believe there were some thoughts involved in that exchange, even if I couldn't think those thoughts.
I understand you to be saying in response that I can necessarily think those thoughts, since I can understand them at some level L1 by virtue of having an awareness of the same world A1 and A2 are interacting with (I agree so far) and that I can therefore understand them at any desired level L2 as long as the aliens themselves can traverse an inference relation between L1 and L2, because I have a language, and languages are gpirtds (I disagree).
I've asked you why you believe English (for example) is a gpirtd, and you seem to have responded that English (like any non-defective language) allows us to do logic, and logic allows us to traverse inference relations. Did I understand that correctly?
If so, I don't think your response is responsive. I would certainly agree that English (like any language) allows me to perform certain logical operations and therefore to traverse certain inference relations. I would not agree that for all inference relations R, English (or any other language) allows me to traverse R.
I agree that if I'm wrong about that and English (for example) really does allow me to traverse all inference relations, then the rest of your argument holds.
I see no reason to believe that, though.
===
Well, I'd like a little more from you: I'd like an example where you are given reason to think that there are thoughts in the air, and reason to think that they are not thoughts you could think. As it stands, I of course have no objection to your example, because the example doesn't go so far as suggesting the latter of the two claims.
So do you think you can come up with such an example? If not, don't you think that counts powerfully against your reasons for thinking that such a situation is possible?
This is not exactly related to my claim. My claim is that you could never be given a reason for thinking that there are thoughts you cannot think. That is not the same as saying that there are thoughts you cannot think. So likewise, I would claim that you could never, deploying the inference relations available to you, infer that there are inference relations unavailable to you. Because if you can infer that they are inference relations, then they are available to you. (ETA: the point here, again, is that you cannot know that something is an inference relation while not knowing of what kind of relation it is. Recognizing that something is an inference relation just is recognizing that it is truth-preserving (say), and you could only recognize that by having a grip on the relation that it is.)
It's extremely important to my argument that we keep in full view the fact that I am making an epistemic claim, not a metaphysical one.
From an epistemic position, the proposition P1: "Dave's mind is capable of thinking the thought that A1 and A2 shared" is experimentally unfalsifiable. No matter how many times, or how many different ways, I try to think that thought and fail, that doesn't prove I'm incapable of it, it just means that I haven't yet succeeded.
But each such experiment provides additional evidence against P1. The more times I try and fail, and the more different ways I try and fail, the greater the evidence, and consequently the lower the probability I should assign to P1.
If you're simply asserting that that probability can't ever reach zero, I agree completely.
If you're asserting that that probability can't in practice ever reach epsilon, I mostly agree.
If you're asserting that that probability can't in practice get lower than, say, .01, I disagree.
(ETA: In case this isn't clear, I mean here to propose "I repeatedly try to understand in detail the thought underlying A1 and A2's cooperation and I repeatedly fail" as an example of a reason to think that the thought in question is not one I can think.)
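To make the evidential point concrete, here is a toy Bayesian sketch (the numbers are hypothetical, chosen purely for illustration): so long as a good-faith attempt is more likely to fail when P1 is false than when it is true, each failure lowers the probability of P1 without ever driving it to zero.

```python
# Toy Bayesian update (hypothetical numbers, purely illustrative).
# P1 = "this thought is one I can think". A good-faith attempt that
# fails is evidence against P1, because failure is more likely if
# P1 is false than if it is true.

def update_on_failure(p, p_fail_if_thinkable=0.5, p_fail_if_not=0.95):
    """Posterior probability of P1 after one failed attempt (Bayes' rule)."""
    p_fail = p * p_fail_if_thinkable + (1 - p) * p_fail_if_not
    return p * p_fail_if_thinkable / p_fail

p = 0.9  # start fairly confident the thought is thinkable
for _ in range(10):
    p = update_on_failure(p)  # ten failed attempts in a row

print(round(p, 4))  # well below .05, but still strictly positive
```

Note that the probability approaches zero only asymptotically: no finite number of failures makes P1 certainly false, which matches the concession that it "can't ever reach zero."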
I think that overestimates my claim: suppose Dave were a propositional logic machine, and the A's were first-order logic machines. If we were observing Dave and the Aliens, and given that we are capable of thinking more expressively than either of them, then we could have reason for thinking that Dave cannot think the thoughts that the Aliens are thinking (let's just assume everyone involved is thinking). So we can prove P1 to be false in virtue of stuff we know about Dave and stuff we know about what the Aliens are saying.
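The expressiveness gap in this example can be made concrete with a toy counting argument (my own illustration, not from the thread): over any fixed stock of atomic sentences, propositional logic can express only finitely many semantically distinct thoughts, whereas first-order logic distinguishes infinitely many situations, so an outside observer who knows both facts has a proof that some first-order thoughts are unavailable to the propositional machine.

```python
# Toy counting argument (illustration only): a propositional-logic
# machine working over n atomic sentences can express at most
# 2**(2**n) semantically distinct thoughts -- one per truth function.
# A first-order machine can distinguish infinitely many situations
# (e.g. "there exist at least k objects" for every k), so the observer
# can conclude that some of its thoughts have no propositional
# counterpart.

def num_truth_functions(n_atoms):
    """Count semantically distinct propositional formulas over n atoms."""
    n_rows = 2 ** n_atoms  # rows of the truth table
    return 2 ** n_rows     # each possible output column is one truth function

print(num_truth_functions(2))  # 16 distinct formulas over two atoms
```

The point matches the comment's framing: the proof is available to the more expressive third-party observer, not to Dave himself.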
That, again, is not my point. My point is that Dave could never have reasons for thinking that he couldn't think what the Aliens are thinking, because Dave could never have reasons for thinking both A) that the aliens are in a given case doing some thinking, and B) that this thinking is thinking that Dave cannot do. If B is true, A is not something Dave can have reasons for. If Dave can have reason for thinking A, then B is false.
So suppose Dave has understood that the aliens are thinking. By understanding this, Dave has already and necessarily assumed that he and the aliens share a world, that he and the aliens largely share relevant beliefs about the world, and that he and the aliens are largely rational.
If you agree that one cannot have reason to think that an action or belief is rational or true without knowing the content or intention of the belief or action, then I think you ought to agree that whatever reasons Dave has for thinking that the aliens are rational are already reasons for thinking that Dave can understand them.
And to whatever extent we third party observers can see that Dave cannot understand them, just to that extent Dave cannot have reasons for thinking that the aliens are rational. In such a case, Dave may believe that the aliens are thinking and it might be impossible for him to understand them. But in this case Dave's opinion that the aliens are thinking is irrational, even if it is true.
Thus, no one can ever be given any reason (i.e. there can never be any evidence) for thinking that there are thoughts that they cannot think. We can never know that there are no such thoughts either, I suppose.
Granting both that all of those suppositions were true, and that we could somehow determine experimentally that they were true, then, yes, it would follow that the conclusion was provable.
I'm not sure how we would determine experimentally that they were true, though. I wouldn't normally care, but you made such a point a moment ago about the importance of your claim being about what's knowable rather than about what's true that I'm not sure how to take your current willingness to bounce back and forth between that claim about what can be known in practice, and these arguments that depend on unknowable-in-practice presumptions.
Then I suppose we can safely ignore it for now.
As I've already said, in this example I have reason to believe A1 and A2 are doing some thinking, and if I make a variety of good-faith-but-unsuccessful attempts to recapitulate that thinking I have reason to believe I'm incapable of doing so.
Is it sufficient to suppose that Dave has reasons to believe the aliens are thinking?
I'm willing to posit all of those things, and I can imagine how they might follow from a belief that the aliens are thinking, for sufficiently convenient values of "world", "largely", and "relevant". Before I lean too heavily on any of that I'd want to clarify those words further, but I'm not sure it actually matters.
I don't agree with this. Just to pick a trivial example, if you write down a belief B on a slip of paper and hand it to my friend Sam, who I trust to be both a good judge of and an honest reporter of truth, and Sam says to me "B is true," I have reason to think B is true but I don't know the content of B.
The premise is false, but I agree that were it true your conclusion would follow.