
Tyrrell_McAllister comments on The Danger of Stories - LessWrong

Post author: Matt_Simpson 08 November 2009 02:53AM




Comment author: Tyrrell_McAllister 10 November 2009 03:55:19AM 6 points

It is often worthwhile to listen to intelligent people, even if they are fantastically wrong about basic facts of the very subject that they're discussing. One often hears someone reasoning within a context of radically wrong assumptions. A priori, one would expect such reasoning to be almost wholly worthless. How could false premises lead to reliable conclusions?

But somehow, in my experience, it often doesn't work that way. Of course, the propositional content of the claims will often be false. Nonetheless, within the speaker's system of inferences, there are often substructures isomorphic to deep structures of inference that follow from premises that I do accept.

The moral reasoning of moral realists can serve as an example. A moral realist will base his moral conclusions on the assumption that moral properties (such as good and evil) exist independently of how people think. His arguments, read literally, are riddled with this assumption through-and-through. Nonetheless, if he is intelligent, the inferences that he makes often map to highly nontrivial, but valid, inferences within my own system of moral thought. It might be necessary to do some relabeling of terms. But once I learn the relabeling "dictionary", I find that I can learn highly nontrivial implications of my premises by translating the implications that the realist inferred from his premises.

Comment author: AdeleneDawner 10 November 2009 04:22:18AM 2 points

Interesting idea. I'm not sure I completely understand it, though. Could you give an example?

Comment author: Tyrrell_McAllister 11 November 2009 01:23:31AM 5 points

Interesting idea. I'm not sure I completely understand it, though. Could you give an example?

Here's a made-up example. I chose this example for simplicity, not because it really represents the kind of insight that makes it worthwhile to listen to someone.

Prior to Darwin, many philosophers believed that the most fundamental explanations were teleological. To understand a thing, they held, you had to understand its purpose. Material causes were dependent upon teleological ones. (For example, a thing's purpose would determine what material causes it was subjected to in the first place). These philosophers would then proceed to use teleology as the basis of their reasoning about living organisms. For example, on seeing a turtle for the first time, they might have reasoned as follows:

Premise 1: This turtle has a hard shell.

Premise 2: The purpose of a hard shell is to deflect sharp objects.

Conclusion: Therefore, this turtle comes from an environment containing predators that attack with sharp objects (e.g., teeth).

But, of course, there is something deeply wrong with such an explanation. Insofar as a thing has a purpose, that purpose is something that the thing will do in the future. Teleology amounts to saying that the future somehow reached back in time and caused the thing to acquire properties in the past. Teleology is backwards causation.

After Darwin, we know that the turtle has a hard shell because hard shells are heritable and helped the turtle's ancestors to reproduce. The teleological explanation doesn't just violate causality; it also ignores the real reason that the turtle has a shell: natural selection. So the whole argument above might seem irredeemably wrong.

But now suppose that we introduce the following scheme for translating from the language of teleology to Darwinian language:

"The purpose of this organism's having property X is to perform action Y."


"The use of property X by this organism's ancestors to perform action Y caused this organism to have property X.

Applying this scheme to the argument above produces a valid and correct chain of reasoning. Moreover, once I figure out the scheme, I can apply it to many (but not all) chains of inferences made by the teleologist to produce what I regard as correct and interesting inferences. In the example above, I only applied the translation scheme to a premise, but sometimes I'll get interesting results when I apply the scheme to a conclusion, too.

Of course, not all inferences by the teleologist will be salvageable. Many will be inextricably intertwined with false premises. It takes work to separate the wheat from the chaff. But, in my experience, it often turns out to be worth the effort.
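For concreteness, the translation scheme can be caricatured as a purely mechanical rewrite rule. Here is a minimal Python sketch of that idea; the exact pattern and phrasings are invented for this illustration, and a real teleological argument would of course need case-by-case judgment rather than a regex:

```python
import re

# The "relabeling dictionary" treated as a mechanical rewrite rule:
# match the teleological template and emit its Darwinian counterpart,
# preserving the inferential roles of X (the property) and Y (the action).
TELEOLOGY = re.compile(
    r"The purpose of this organism's having (?P<x>.+?) "
    r"is to (?P<y>.+?)\."
)

def translate(statement: str) -> str:
    """Rewrite a teleological premise into its Darwinian counterpart."""
    m = TELEOLOGY.fullmatch(statement)
    if m is None:
        # Not every teleological claim fits the scheme; leave it alone.
        return statement
    x, y = m.group("x"), m.group("y")
    return (f"The use of {x} by this organism's ancestors to {y} "
            f"caused this organism to have {x}.")

premise = ("The purpose of this organism's having a hard shell "
           "is to deflect sharp objects.")
print(translate(premise))
# The use of a hard shell by this organism's ancestors to deflect
# sharp objects caused this organism to have a hard shell.
```

The point of the sketch is that only the form of the inference is carried over: the translated premise asserts different facts, but it occupies the same place in the argument, so the turtle conclusion still follows.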

Comment author: AdeleneDawner 11 November 2009 02:21:34AM 2 points

Good thing I asked; that wasn't what I originally thought you meant. It's similar enough to translating conversational shorthand that I probably already do that occasionally without even realizing it, but it'd be good to keep it in mind as a tool to use purposely. Thanks. :)

Comment author: Tyrrell_McAllister 13 November 2009 10:21:03PM 0 points

I'm curious: What did you think I meant?

It's similar enough to translating conversational shorthand that . . .

I probably shouldn't have used the term "translation". Part of my point is that the "translation" does not preserve meaning. Only the form of the inference is preserved. The facts being asserted can change significantly, both in the premises and in the conclusion. (In my example, only the assertions in the premises changed.) In general, the arguer no longer agrees with the inference after the "translation". Moreover, his disagreement is not just semantic.

Comment author: AdeleneDawner 14 November 2009 05:21:57PM 2 points

I'd somehow gotten the idea that you were talking about taking the proposed pattern of relationships between ideas and considering its applicability to other, unrelated ideas. As an extremely simple example, if the given theory was "All dogs are bigger than cats", make note of the "all X are bigger than Y" idea, so it can be checked as a theory in other situations, like "all pineapples are bigger than cherries". That seems like a ridiculously difficult thing to do in practice, though, which is why I thought you might have meant something else.

Regarding 'translation', yep, I get it.