ABrooks comments on Rationality Quotes April 2012 - Less Wrong

Post author: Oscar_Cunningham 03 April 2012 12:42AM


Comment author: [deleted] 08 April 2012 03:17:21AM *  0 points [-]

The source of your doubt seemed to be that you didn't think you possessed a general-purpose thought-having and inferential-relationship-traversing device. A brain is not such a device, we agree. But you do have such a device. A language is a general-purpose thought-having and inferential-relationship-traversing device, and you have that too. So, doubt dispelled?

Comment author: TheOtherDave 08 April 2012 04:02:56AM 0 points [-]

Ah! OK, your comment now makes sense to me. Thanks.
Agreed that my not believing that my brain is a general-purpose inferential relationship traversing device (hereafter gpirtd) is at the root of my not believing that all thoughts thinkable by any brain are thinkable by mine.
I'm glad we agree that my brain is not a gpirtd.
But you seem to be asserting that English (for example) is a gpirtd.
Can you expand on your reasons for believing that? I can see no justification for that claim, either.
But I do agree that if English were a gpirtd while my brain was not, it would follow that I could infer in English any thought that an alien mind could infer, at the same level of detail that the alien mind could think it, even if my brain was incapable of performing that inference.

Comment author: [deleted] 08 April 2012 04:20:23PM *  0 points [-]

So the claim is really that language is a gpirtd, excepting very defective cases (like sign-language or something). That language is an inference-relation-traversing device is, I think, pretty clear on the surface of things: logic is that in virtue of which we traverse inference relations (if anything is). This isn't to say that English, or any language, is a system of logic, but only that logic is one of the things language allows us to do.

I think it actually follows from this that language is also a general-purpose thought-having device: thoughts are related, and their content is in large part (or perhaps entirely) constituted, by inferential relations. If we're foundationalists about knowledge, then we think that the content of thoughts is not entirely constituted by inferential relations, but this isn't a serious problem.

If we can get anywhere in a process of translation, it is by assuming we share a world with whatever speaker we're trying to understand. If we don't assume this, and to whatever extent we don't assume this, just to that extent we can't recognize the gap as conceptual or cognitive. If an alien were reacting in part to facts of the shared world, and in part to facts of an unshared world (whatever that means), then just to the extent that the alien is acting on the latter facts, we would have to conclude that they are behaving irrationally. The reasons are invisible to us, after all. If we manage to infer from their behavior that they are acting on reasons we don't have immediate access to, then just to the extent that we now view their behavior as rational, we now share that part of the world with them. We can't decide that behavior is rational while knowing nothing of the action or the content of the reason, in the same sense that we can't decide whether or not a belief is rational, or true, while knowing nothing of its meaning or the facts it aims at.

This last claim is most persuasively argued, I think, by showing that any example we might construct is going to fall apart. So it's here that I want to re-ask my question: what would a thought that we cannot think even look like to us? My claim isn't that there aren't any such thoughts, only that we could never be given reason for thinking that there are.

ETA: as to the question of brains, here I think there is a sense in which there could be thoughts we cannot think. For example, thoughts which take more than a lifetime to think. But this isn't an interesting case, and it's fundamentally remediable. Imagine someone said that there were languages that are impossible for me to understand, and when I pressed him on what he meant, he just pointed out that I do not presently understand Chinese, and that he's about to kill me. He isn't making an interesting point, or one anyone would object to. If that is all the original quote intended, then it seems a bit trivial: the quoted person could have just pointed out that 1000 years ago, no one could have had any thoughts about airplanes.

Comment author: TheOtherDave 08 April 2012 05:11:06PM 0 points [-]

Re: your ETA... agreed that there are thoughts I cannot think in the trivial sense you describe here, where the world is such that the events that would trigger that thought never arise before my death. What is at issue here is not that, but the less trivial claim that there are thoughts I cannot think by virtue of the way my mind works. To repeat my earlier proposed formalization: there can exist a state Sa such that mind A can enter Sa but mind B cannot enter Sa.
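
(Spelled out in notation, purely as a restatement of the prose claim above, nothing new:)

```latex
% The formalization as an existential claim: there is some state S_a
% that mind A can enter but mind B cannot.
\exists S_a \;\big(\mathrm{CanEnter}(A, S_a) \land \lnot\,\mathrm{CanEnter}(B, S_a)\big)
```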

But you seem to also want to declare as trivial all cases where the reason B cannot enter Sa is because of some physical limitation of B, and I have more trouble with that.

I mean, sure, if A can enter Sa in response to some input and B cannot, I expect there to be some physical difference between A and B that accounts for this, and therefore some physical modification that can be made to B to remedy this. So sure, I agree that all such cases are "fundamentally remediable". Worst-case, I transform B into an exact replica of A, and now B can enter state Sa, QED.

I'm enough of a materialist about minds to consider this possible in principle. But I would not agree that, because of this, the difference between A and B is trivial.

Comment author: TheOtherDave 08 April 2012 04:56:55PM 0 points [-]

Well, at the risk of repeating myself in turn, I'll go back to my original example. As an observer I would have reason to believe there were some thoughts involved in that exchange, even if I couldn't think those thoughts.

I understand you to be saying in response that I can necessarily think those thoughts, since I can understand them at some level L1 by virtue of having an awareness of the same world A1 and A2 are interacting with (I agree so far) and that I can therefore understand them at any desired level L2 as long as the aliens themselves can traverse an inference relation between L1 and L2 because I have a language, and languages* are gpirtds (I disagree).

I've asked you why you believe English (for example) is a gpirtd, and you seem to have responded that English (like any non-defective language) allows us to do logic, and logic allows us to traverse inference relations. Did I understand that correctly?

If so, I don't think your response is responsive. I would certainly agree that English (like any language) allows me to perform certain logical operations and therefore to traverse certain inference relations. I would not agree that for all inference relations R, English (or any other language) allows me to traverse R.

I agree that if I'm wrong about that and English (for example) really does allow me to traverse all inference relations, then the rest of your argument holds.

I see no reason to believe that, though.

===

  • Except, you say, for defective cases like sign-language. I have absolutely no idea on what basis you judge sign language defective and English non-defective here, or whether you're referring to some specific sign language or the whole class of sign languages. However, I agree with you that sign languages are not gpirtds. (I don't believe English is either.)

Comment author: [deleted] 08 April 2012 05:11:08PM *  0 points [-]

Well, at the risk of repeating myself in turn, I'll go back to my original example. As an observer I would have reason to believe there were some thoughts involved in that exchange, even if I couldn't think those thoughts.

Well, I'd like a little more from you: I'd like an example where you are given reason to think that there are thoughts in the air, and reason to think that they are not thoughts you could think. As it stands, I of course have no objection to your example, because the example doesn't go so far as suggesting the latter of the two claims.

So do you think you can come up with such an example? If not, don't you think that counts powerfully against your reasons for thinking that such a situation is possible?

I would not agree that for all inference relations R, English (or any other language) allows me to traverse R.

This is not exactly related to my claim. My claim is that you could never be given a reason for thinking that there are thoughts you cannot think. That is not the same as saying that there are thoughts you cannot think. So likewise, I would claim that you could never, deploying the inference relations available to you, infer that there are inference relations unavailable to you. Because if you can infer that they are inference relations, then they are available to you. (ETA: the point here, again, is that you cannot know that something is an inference relation while not knowing of what kind of relation it is. Recognizing that something is an inference relation just is recognizing that it is truth-preserving (say), and you could only recognize that by having a grip on the relation that it is.)

It's extremely important to my argument that we keep in full view the fact that I am making an epistemic claim, not a metaphysical one.

Comment author: TheOtherDave 08 April 2012 05:25:19PM *  0 points [-]

From an epistemic position, the proposition P1: "Dave's mind is capable of thinking the thought that A1 and A2 shared" is experimentally unfalsifiable. No matter how many times, or how many different ways, I try to think that thought and fail, that doesn't prove I'm incapable of it; it just means that I haven't yet succeeded.

But each such experiment provides additional evidence against P1. The more times I try and fail, and the more different ways I try and fail, the stronger the evidence, and consequently the lower the posterior probability of P1.

If you're simply asserting that that posterior probability can't ever reach zero, I agree completely.

If you're asserting that that posterior probability can't in practice ever reach epsilon, I mostly agree.

If you're asserting that that posterior probability can't in practice get lower than, say, .01, I disagree.

(ETA: In case this isn't clear, I mean here to propose "I repeatedly try to understand in detail the thought underlying A1 and A2's cooperation and I repeatedly fail" as an example of a reason to think that the thought in question is not one I can think.)
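
(A minimal sketch of the updating I have in mind, with made-up numbers; the starting credence and the per-attempt success rate are pure assumptions:)

```python
# Sketch: repeated failed attempts drive the probability of P1 down,
# but never to exactly zero.
# Assumptions (not from the discussion): a 0.5 starting credence in P1,
# and a 0.3 chance that any single good-faith attempt succeeds if P1 is true;
# if P1 is false, attempts never succeed.
prior = 0.5
p_success = 0.3

posterior = prior
for attempt in range(20):
    # Bayes' rule on observing one more failure:
    # P(P1 | fail) = P(fail | P1) * P(P1) / P(fail)
    p_fail_given_p1 = 1 - p_success
    p_fail = p_fail_given_p1 * posterior + 1.0 * (1 - posterior)
    posterior = p_fail_given_p1 * posterior / p_fail

print(posterior)  # ~0.0008 after 20 failures: tiny, but never zero
```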

Comment author: [deleted] 08 April 2012 07:18:25PM 0 points [-]

From an epistemic position, the proposition P1: "Dave's mind is capable of thinking the thought that A1 and A2 shared" is experimentally unfalsifiable.

I think that overestimates my claim: suppose Dave were a propositional logic machine, and the Aliens were first-order logic machines. If we were observing Dave and the Aliens, then, given that we are capable of thinking more expressively than either of them, we could have reason for thinking that Dave cannot think the thoughts that the Aliens are thinking (let's just assume everyone involved is thinking). So we can prove P1 to be false in virtue of stuff we know about Dave and stuff we know about what the Aliens are saying.
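
(A toy illustration of that supposition; the construction and the particular predicates are mine, chosen purely for concreteness:)

```python
# "Propositional Dave" can only evaluate truth-functional combinations of
# finitely many atoms, by brute-force truth table.
from itertools import product

def propositional_dave(formula, atoms):
    """Evaluate a propositional formula under every truth assignment."""
    return [formula(dict(zip(atoms, vals)))
            for vals in product([False, True], repeat=len(atoms))]

# Dave handles e.g. (p and not q) just fine:
print(propositional_dave(lambda v: v["p"] and not v["q"], ["p", "q"]))

# But the Aliens' first-order thought "every number has a larger one"
# (forall n, exists m: m > n) quantifies over an infinite domain, so it is
# not equivalent to any finite truth table and cannot even be handed to
# propositional_dave. We can see that from outside; Dave, inside, cannot
# state what it is he is missing.
```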

That, again, is not my point. My point is that Dave could never have reasons for thinking that he couldn't think what the Aliens are thinking, because Dave could never have reasons for thinking both A) that the aliens are in a given case doing some thinking, and B) that this thinking is thinking that Dave cannot do. If B is true, A is not something Dave can have reasons for. If Dave can have reason for thinking A, then B is false.

So suppose Dave has understood that the aliens are thinking. By understanding this, Dave has already and necessarily assumed that he and the aliens share a world, that he and the aliens largely share relevant beliefs about the world, and that he and the aliens are largely rational.

If you agree that one cannot have reason to think that an action or belief is rational or true without knowing the content or intention of the belief or action, then I think you ought to agree that whatever reasons Dave has for thinking that the aliens are rational are already reasons for thinking that Dave can understand them.

And to whatever extent we third party observers can see that Dave cannot understand them, just to that extent Dave cannot have reasons for thinking that the aliens are rational. In such a case, Dave may believe that the aliens are thinking and it might be impossible for him to understand them. But in this case Dave's opinion that the aliens are thinking is irrational, even if it is true.

Thus, no one can ever be given any reason (i.e. there can never be any evidence) for thinking that there are thoughts that they cannot think. We can never know that there are no such thoughts either, I suppose.

Comment author: TheOtherDave 08 April 2012 07:58:14PM 0 points [-]

suppose Dave were a propositional logic machine, and the Aliens were first-order logic machines. [..] (let's just assume everyone involved is thinking). So we can prove P1 to be false in virtue of stuff we know about Dave and stuff we know about what the Aliens are saying.

Supposing both that all of those suppositions were true and that we could somehow determine experimentally that they were, then yes, it would follow that the conclusion is provable.

I'm not sure how we would determine experimentally that they were true, though. I wouldn't normally care, but you made such a point a moment ago about the importance of your claim being about what's knowable rather than about what's true that I'm not sure how to take your current willingness to bounce back and forth between that claim about what can be known in practice and these arguments that depend on unknowable-in-practice presumptions.

That, again, is not my point.

Then I suppose we can safely ignore it for now.

Dave could never have reasons for thinking that he couldn't think what the Aliens are thinking, because Dave could never have reasons for thinking both A) that the aliens are in a given case doing some thinking, and B) that this thinking is thinking that Dave cannot do.

As I've already said, in this example I have reason to believe A1 and A2 are doing some thinking, and if I make a variety of good-faith-but-unsuccessful attempts to recapitulate that thinking I have reason to believe I'm incapable of doing so.

So suppose Dave has understood that the aliens are thinking.

Is it sufficient to suppose that Dave has reasons to believe the aliens are thinking?

By understanding this, Dave has already and necessarily assumed that he and the aliens share a world, that he and the aliens largely share relevant beliefs about the world, and that he and the aliens are largely rational.

I'm willing to posit all of those things, and I can imagine how they might follow from a belief that the aliens are thinking, for sufficiently convenient values of "world", "largely", and "relevant". Before I lean too heavily on any of that I'd want to clarify those words further, but I'm not sure it actually matters.

If you agree that one cannot have reason to think that an action or belief is rational or true without knowing the content or intention of the belief or action

I don't agree with this. Just to pick a trivial example, if you write down a belief B on a slip of paper and hand it to my friend Sam, whom I trust to be both a good judge of and an honest reporter of truth, and Sam says to me "B is true," I have reason to think B is true but I don't know the content of B.

then I think you ought to agree that whatever reasons Dave has for thinking that the aliens are rational are already reasons for thinking that Dave can understand them.

The premise is false, but I agree that were it true your conclusion would follow.

Comment author: [deleted] 08 April 2012 08:48:01PM *  0 points [-]

I have reason to think B is true but I don't know the content of B.

This seems to be a crucial disagreement, so we should settle it first. In your example, you said that you trust Sam to be a good judge and an honest reporter of truth. This means, among other things, that you and Sam share a great many beliefs, and that you think Sam makes judgements roughly in the same ways you do.

So, you mostly understand the kinds of inferences Sam draws, and you mostly understand the beliefs that Sam has. If you infer from this that B is true because Sam says that it is, you must be assuming that B isn't so odd a belief that Sam has no competence in assessing it. It must be something Sam is familiar enough with to be comfortable assessing. All that said, you've got a lot of beliefs about what B is, without knowing the specifics.

Essentially, your inference that B is true because Sam says that it is, is the belief that though you don't know what B says specifically, B is very likely to either be one of your beliefs already or something that follows straightforwardly from some of your beliefs.

In other words, if you have good reason to think B is true, you immediately have good reason to think you know something about the content of B (i.e. that it is or follows from one of your own beliefs). Thinking that B is probably true just is believing you know something about B.

(ETA: I want to add how closely this example resembles your aliens example, both in the set-up and in how (I think) it should be answered. In both cases, we can look at the example more closely and discover that in drawing the conclusion that the aliens are thinking, or that B is true, a great deal is assumed. I'm saying that you can have these assumptions, in which case my translation point follows, or you can deny the translation point, in which case you can't have the assumptions necessary to set up your examples.)

Comment author: TheOtherDave 08 April 2012 09:27:04PM *  0 points [-]

This seems to be a crucial disagreement, so we should settle it first.

All right.

you trust Sam to be a good judge and an honest reporter of truth. This means, among other things, that you and Sam share a great many beliefs

Sure, if Sam and I freely interact and I consider him a good judge and honest reporter of truth, I will over time come to believe many of the things Sam believes.

Also, to the extent that I also consider myself a good judge of truth (which has to be nontrivial for me to trust my judgment of Sam in the first place), many of the beliefs I come to on observing the world will also be beliefs Sam comes to on observing the world, even if we don't interact freely enough for him to convince me of his belief. This is a little trickier, because not all reasons for belief are fungible... I might have reasons for believing myself a good judge of whether Sam is a good judge of truth without having reasons for believing myself a good judge of truth more generally. But I'm willing to go along with it for now.

Agreed so far.

you think Sam makes judgements roughly in the same ways you do.

No, I don't follow this at all. I might think Sam comes to the same conclusions that I would given the same data, but it does not follow in the least that he uses the same process to get there. That said, I'm not sure this matters to your argument.

So, you mostly understand the kinds of inferences Sam draws

Yes, both in the sense that I can mostly predict the inferences Sam will draw from given data, and in the sense that any arbitrarily-selected inference that Sam draws is very likely to be one that I can draw myself.

you mostly understand the beliefs that Sam has

Yes, in the same ways.

If you infer from this that B is true because Sam says that it is, you must be assuming that B isn't so odd a belief that Sam has no competence in assessing it.

Something like this, yes. It is implicit in this example that I trust Sam to recognize if B is outside his competence to evaluate, and to report that fact if so; so it follows from his not having reported any such thing that I'm confident B isn't outside his competence.

you've got a lot of beliefs about what B is, without knowing the specifics.

Certainly. In addition to all of that stuff, I also have the belief that B can be written down on a slip of paper, with all that that implies.

Essentially, your inference that B is true because Sam says that it is, is the belief that though you don't know what B says specifically, B is very likely to either be one of your beliefs already or something that follows straightforwardly from some of your beliefs.

Statistically speaking, yes: given an arbitrarily selected B1 for which Sam would report "B1 is true," the prior probability that I already know B1 is high.

But this is of course in no sense guaranteed. For example, B might be "I'm wearing purple socks," in response to which Sam checks the color of your socks, and subsequently reports to me that B is true. In this case I don't in fact know what color socks you are wearing.

In other words, if you have good reason to think B is true, you immediately have good reason to think you know something about the content of B (i.e. that it is or follows from one of your own beliefs).

Again, statistically speaking, sure.

Thinking that B is probably true just is believing you know something about B.

No. You are jumping from "X is reliable evidence of Y" to "X just is Y" without justification.

If X smells good, I have reason to believe that X tastes good, because most things that smell good also taste good. But it is quite possible for me to both smell and taste X and conclude "X smells good and tastes bad." If "thinking that X smells good just is believing that X tastes good" were true, I would at that point also believe "X tastes good and tastes bad," which is not in fact what happens. Therefore I conclude that "thinking that X smells good just is believing that X tastes good" is false.
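
(To put illustrative numbers on it, chosen arbitrarily: if P(tastes good | smells good) = 0.9, then P(smells good and tastes bad) = 0.1 × P(smells good), which is positive whenever anything smells good at all, whereas "smells good just is tastes good" would force that probability to be exactly 0.)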

Similarly, if Sam reports B as true, I have good reason to think B is probably true, and I also have good reason to think I know something important about the content of B (e.g., that it is or follows from one of my own beliefs), because most things that Sam would report as true I also know something important about the contents of (e.g., ibid). But it's quite possible for Sam to report B as true without me knowing anything important about the content of B. I similarly conclude that "thinking that B is probably true just is believing [I] know something [important] about B" is false.

In case it matters, not only is it possible for me to believe B is true when I don't in fact know the content of B (e.g., B is "Abrooks' socks are purple" and Sam checks your socks and tells me "B is true" when I neither know what B says nor know that Abrooks' socks are purple), it's also possible for me to have good reason to believe that I don't know the content of B in this situation (e.g., if Sam further tells me "Dave, you don't know the content of B"... which in fact I don't, and Sam has good reason to believe I don't.)