
AdeleneDawner comments on The Danger of Stories - LessWrong

9 Post author: Matt_Simpson 08 November 2009 02:53AM


Comments (103)


Comment author: AdeleneDawner 09 November 2009 08:05:26AM 1 point [-]

My thought wasn't that he wouldn't have anything true to say. It was that if he's still defending good and evil as obviously existing, in that context, he's far enough behind me on the issue that I can safely assume that he doesn't have anything major to teach me, and that what he says is untrustworthy enough (because there's an obvious flaw in his thought process) that I'd have to spend an inordinate amount of time checking his logic before using even the parts that appear good - time that would be better spent elsewhere.

Many people here appear to have a similar epistemic immune response to people who bring up God in discussions of ethics. I'm surprised it's considered an issue in this case.

Comment author: Tyrrell_McAllister 10 November 2009 03:55:19AM 6 points [-]

It is often worthwhile to listen to intelligent people, even if they are fantastically wrong about basic facts of the very subject that they're discussing. One often hears someone reasoning within a context of radically wrong assumptions. A priori, one would expect such reasoning to be almost wholly worthless. How could false premises lead to reliable conclusions?

But somehow, in my experience, it often doesn't work that way. Of course, the propositional content of the claims will often be false. Nonetheless, within the system of inferences, substructures of inferences will often be isomorphic to deep structures of inferences following from premises that I do accept.

The moral reasoning of moral realists can serve as an example. A moral realist will base his moral conclusions on the assumption that moral properties (such as good and evil) exist independently of how people think. His arguments, read literally, are riddled with this assumption through-and-through. Nonetheless, if he is intelligent, the inferences that he makes often map to highly nontrivial, but valid, inferences within my own system of moral thought. It might be necessary to do some relabeling of terms. But once I learn the relabeling "dictionary", I find that I can learn highly nontrivial implications of my premises by translating the implications that the realist inferred from his premises.

Comment author: AdeleneDawner 10 November 2009 04:22:18AM 2 points [-]

Interesting idea. I'm not sure I completely understand it, though. Could you give an example?

Comment author: Tyrrell_McAllister 11 November 2009 01:23:31AM *  5 points [-]

Interesting idea. I'm not sure I completely understand it, though. Could you give an example?

Here's a made-up example. I chose this example for simplicity, not because it really represents the kind of insight that makes it worthwhile to listen to someone.

Prior to Darwin, many philosophers believed that the most fundamental explanations were teleological. To understand a thing, they held, you had to understand its purpose. Material causes were dependent upon teleological ones. (For example, a thing's purpose would determine what material causes it was subjected to in the first place). These philosophers would then proceed to use teleology as the basis of their reasoning about living organisms. For example, on seeing a turtle for the first time, they might have reasoned as follows:

Premise 1: This turtle has a hard shell.

Premise 2: The purpose of a hard shell is to deflect sharp objects.

Conclusion: Therefore, this turtle comes from an environment containing predators that attack with sharp objects (e.g., teeth).

But, of course, there is something deeply wrong with such an explanation. Insofar as a thing has a purpose, that purpose is something that the thing will do in the future. Teleology amounts to saying that the future somehow reached back in time and caused the thing to acquire properties in the past. Teleology is backwards causation.

After Darwin, we know that the turtle has a hard shell because hard shells are heritable and helped the turtle's ancestors to reproduce. The teleological explanation doesn't just violate causality - it also ignores the real reason that the turtle has a shell: natural selection. So the whole argument above might seem irredeemably wrong.

But now suppose that we introduce the following scheme for translating from the language of teleology to Darwinian language:

"The purpose of this organism's having property X is to perform action Y."

becomes

"The use of property X by this organism's ancestors to perform action Y caused this organism to have property X."

Applying this scheme to the argument above produces a valid and correct chain of reasoning. Moreover, once I figure out the scheme, I can apply it to many (but not all) chains of inferences made by the teleologist to produce what I regard to be correct and interesting inferences. In the example above, I only applied the translation scheme to a premise, but sometimes I'll get interesting results when I apply the scheme to a conclusion, too.
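To make the mechanical character of the scheme vivid, here is a toy sketch (my own illustration, not part of the argument above) that treats the teleology-to-Darwin "translation" as a simple template rewrite. The template wording follows the scheme stated above; the function and variable names are invented for illustration only:

```python
# Toy illustration: the teleology-to-Darwin "translation" as a pattern rewrite.
# Only the form of the inference is preserved; the facts asserted change.

TELEOLOGICAL = "The purpose of this organism's having {x} is to {y}."
DARWINIAN = ("The use of {x} by this organism's ancestors to {y} "
             "caused this organism to have {x}.")

def translate(x: str, y: str) -> str:
    """Rewrite a teleological premise into its Darwinian counterpart."""
    return DARWINIAN.format(x=x, y=y)

# The turtle example: Premise 2, rewritten.
print(translate("a hard shell", "deflect sharp objects"))
```

The point of the sketch is that nothing in the rewrite depends on the content of X or Y: the same substitution salvages many of the teleologist's inferences wholesale, which is why learning the "dictionary" once pays off repeatedly.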

Of course, not all inferences by the teleologist will be salvageable. Many will be inextricably intertwined with false premises. It takes work to separate the wheat from the chaff. But, in my experience, it often turns out to be worth the effort.

Comment author: AdeleneDawner 11 November 2009 02:21:34AM 2 points [-]

Good thing I asked; that wasn't what I originally thought you meant. It's similar enough to translating conversational shorthand that I probably already do that occasionally without even realizing it, but it'd be good to keep it in mind as a tool to use purposely. Thanks. :)

Comment author: Tyrrell_McAllister 13 November 2009 10:21:03PM 0 points [-]

I'm curious: What did you think I meant?

It's similar enough to translating conversational shorthand that . . .

I probably shouldn't have used the term "translation". Part of my point is that the "translation" does not preserve meaning. Only the form of the inference is preserved. The facts being asserted can change significantly, both in the premises and in the conclusion. (In my example, only the assertions in the premises changed.) In general, the arguer no longer agrees with the inference after the "translation". Moreover, his disagreement is not just semantic.

Comment author: AdeleneDawner 14 November 2009 05:21:57PM 2 points [-]

I'd somehow gotten the idea that you were talking about taking the proposed pattern of relationships between ideas and considering its applicability to other, unrelated ideas. As an extremely simple example, if the given theory was "All dogs are bigger than cats", make note of the "all X are bigger than Y" idea, so it can be checked as a theory in other situations, like "all pineapples are bigger than cherries". That seems like a ridiculously difficult thing to do in practice, though, which is why I thought you might have meant something else.

Regarding 'translation', yep, I get it.

Comment author: Jonathan_Graehl 09 November 2009 11:17:50PM *  4 points [-]

Quickly judging people as not worth listening to is a fabulous heuristic, especially given the Internet explosion of available alternatives.

But sharing such judgment risks offending people who didn't make the same cut.

Comment author: Vladimir_Nesov 09 November 2009 11:26:05PM 3 points [-]

Following such a heuristic doesn't at all mean making strong high-certainty judgments.

Comment author: AdeleneDawner 09 November 2009 11:33:41PM 2 points [-]

Strength of emotional response and certainty of the underlying heuristic's accuracy aren't the same thing. It may not've been clear that I was reporting the former, but I was, and one of the possible responses to that comment that I was prepared for was "yes, but he went on to make this good point...".

Comment author: Jonathan_Graehl 10 November 2009 12:09:11AM *  1 point [-]

I agree, but the fantastic thing is that you lose so little when you reject too hastily. If the ideas you ignored turn out to be useful and true, someone you're willing to listen to will advocate them eventually.

Comment author: Eliezer_Yudkowsky 10 November 2009 02:11:41AM *  3 points [-]

That works if you assiduously, diligently, and without flaw start paying attention after no more than the third time you hear the idea advocated, and without using the idea itself to judge as untrustworthy those who otherwise seem competent.

In practice, people usually reject the idea itself and go on rejecting it, when they claim to be acting under cover of rejecting people. Consider those who die of rejecting cryonics; consider what policy they would have to follow in order to not do that. What good is it to quickly reject bad ideas if you quickly reject good ideas as well? Discrimination is the whole trick here.

I suppose we might have no recourse but to judge people and shut our ears to most of them, in the Internet age, but to say that we "lose so little" far understates the danger of a very dangerous policy.

Comment author: Jonathan_Graehl 10 November 2009 08:48:26PM *  2 points [-]

I agree that people often don't make the necessary distinction between ideas they have evidence against, and unevaluated ideas they've been ignoring because they've only heard them advocated by kooks. As you point out, only ideas in the former category properly discredit their advocates.

Comment author: AdeleneDawner 10 November 2009 02:36:35AM 0 points [-]

There's more than just the one non-failure mode to this kind of thing. My method involves taking the time to consider the information gathered up to the point where I decided to stop listening to the person, as if I hadn't stopped listening to them at all. Information I would have gotten from them after that point isn't affected by my opinion of them, since I haven't heard it (whereas it would be, if I were distracted by thinking 'this person's an idiot' as I listened), and I give as fair a trial as I'm able to the rest.

It may also be noteworthy that I didn't judge him for an argument he was making, and I make something of a point of not doing so unless the logic being used is painfully bad. (Tangential realization: That's why activists who aren't willing to have any 101-level discussions with newbies get a (mild) negative reaction from me; discarding whole avenues of discourse like that cuts off a valuable, if noisy, source of information.)

Comment author: AdeleneDawner 09 November 2009 11:34:26PM *  1 point [-]

But sharing such judgment risks offending people who didn't make the same cut.

I figured that out, but bringing it up seemed like it would just compound the problem.

Comment author: MichaelBishop 09 November 2009 06:47:52PM 4 points [-]

Based on the first five minutes, the whole point of his lecture is that stories, explicitly including but not limited to those framed as good vs. evil, are often dangerous oversimplifications.

I'm telling you, as someone who has read quite a lot by Tyler Cowen, that he is not as naive about good and evil as you seem to think. You've read too much into the one sentence you've quoted.

Comment author: RobinZ 09 November 2009 01:09:24PM *  4 points [-]

My thought wasn't that he wouldn't have anything true to say. It was that if he's still defending good and evil as obviously existing, in that context, he's far enough behind me on the issue that I can safely assume that he doesn't have anything major to teach me, and that what he says is untrustworthy enough (because there's an obvious flaw in his thought process) that I'd have to spend an inordinate amount of time checking his logic before using even the parts that appear good - time that would be better spent elsewhere.

That's not a good heuristic. There are a lot of people - Eliezer would name Robert Aumann, I think - who are incredibly bright, highly knowledgeable, and capable of conveying that knowledge, yet who are wrong about the answers to what some of us would consider easy questions.

Now, I know Berserk Buttons (warning: TV Tropes) as well as anyone, and I've dismissed some works of fiction which others have considered quite good (e.g. Alfred Bester's The Demolished Man, the TV sitcom Modern Family) because they pushed those buttons, but when it comes to factual information, even stupid people can teach you.

(Granted, you may be right about the worthlessness of this particular speech to you - I haven't watched it. But the heuristic is poor.)

Comment author: AdeleneDawner 09 November 2009 03:10:30PM 1 point [-]

The heuristic isn't widely applicable, but I disagree about it being poor altogether. As I pointed out above, it's not just that he defended good vs. evil. It's that he did it in the context of a presentation on a subtopic of how we conceptualize the world. He may have things to teach me in other areas, obviously.

That's why I compared it to someone bringing God into a discussion on ethics specifically. (Or, say, evolution.) That person may be brilliant at physics, but on the topic at hand, not so much.

It also occurs to me that this heuristic may be unusually useful to me because of my neurology. It seems to take much more time and effort for me to deconstruct and find flaws in new ideas presented by others, compared to most people, and because of the extra time, there's a risk of getting distracted and not completing the process. It's enough of an issue that even a flawed heuristic to weed out bad memes is (or at least feels - I'm not sure how one would actually test that) useful.

Comment author: RobinZ 09 November 2009 03:21:30PM 0 points [-]

Okay, I'll grant you that. It's better to have a sufficiently strict filter that loses some useful information than a weaker filter which lets in garbage data. I would presume (or, at least, advise) that you make a particular effort to analyze data which you previously rejected but which remains widely discussed, however - an example from my own experience being Searle's Chinese Room argument. Such items should be uncommon enough.

Comment author: AdeleneDawner 09 November 2009 04:07:51PM 1 point [-]

Agreed.