
A quick summary for my own understanding (please feel free to correct me if I'm wrong somewhere).

Basically, when parsing other people's arguments, we can lull ourselves into a false sense of understanding by constructing plausible reasons that could have led them to the conclusions they hold.

However, this misses the fact that if people strongly hold a position (one which we can only see as semi-plausible given our self-constructed arguments), this disparity in strength of beliefs is in itself good evidence that there is information we are missing. Our generated reasons are probably far from the strongest arguments in favor of the position.

As a result, self-generating potential ways that someone could believe something (and leaving it at that) can cause us to miss out on good information that was actually responsible for generating their beliefs.
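To make that middle step concrete, here is a toy Bayes calculation with made-up numbers (purely illustrative, not from the post): let H be "there is a strong argument for their position that I haven't seen". Noticing that they believe far more strongly than my reconstructed arguments would justify should raise my credence in H.

```python
# Toy update with illustrative numbers: H = "there is a strong argument
# for their position that I haven't seen".
prior_H = 0.3                   # credence in H before noticing their conviction
p_conviction_given_H = 0.8      # strong conviction is likely if such an argument exists
p_conviction_given_not_H = 0.2  # less likely if my weak reconstructions are all there is

evidence = p_conviction_given_H * prior_H + p_conviction_given_not_H * (1 - prior_H)
posterior_H = p_conviction_given_H * prior_H / evidence
print(f"P(missing a strong argument | strong conviction) = {posterior_H:.2f}")  # ~0.63
```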

Is that about accurate?

Yup!

this disparity in strength of beliefs is in itself good evidence that there is information we are missing

That's a nice way of summarizing.

I would emphasize the difference between parsing the arguments they're explicitly making and understanding the reasons they actually hold the beliefs they do.

They may not be giving you the arguments that are the most relevant to you. After all, they probably don't know why you don't already believe what they do. They may be focusing on parts that are irrelevant for convincing you.

By the way, nice job trying to summarize my view. As you'll see in the coming weeks, that's close to the move I recommend for extracting people's intuitions. Just repeatedly try to make their argument for them.


Cool, thanks for writing this up! I vaguely remember someone at CFAR bringing up something about argument-norms of this kind--"convince or be convinced". Was that in reference to you?

convince or be convinced

Isn't this kind of like Aumann's agreement theorem?

Are there any humans who meet that lofty standard?

There are certainly people who meet it better than others.

Yes, definitely. The more you are in such a community, the more you can do this.

Not sure! If it was in the last couple of months, there's a good chance.

Not answering your question but replying to the summary: Apply this reasoning to creationism, or homeopathy, or the belief that Jews kill Christian babies to bake their blood into matzohs.

I can only come up with weak arguments for those positions. Yet people believe them very strongly. Is that good evidence that I am missing something?

From the article:

If Paul is at least as sensible as you are and his arguments sound weak or boring, you probably haven’t grokked his real internal reasons.

I don't understand why you are quoting that. Are you trying to suggest that the people arguing for those things have strong arguments that I haven't understood?

("Real internal reasons" in the article seems to mean "arguments" and "real insight", so something like "he believes Jews eat babies because he is prejudiced against Jews" would not count as a real internal reason.)

I don't understand why you are quoting that.

I'm not the person who quoted it, but: I took the point to be that those people are probably mostly not at least as sensible as you. (And maybe secondarily that some of them might have real internal reasons that you haven't understood.)

I'm not sure I agree: although I am (I like to think) more than averagely sensible, I expect there are still some things I believe for reasons that, viewed objectively, are pretty weak and/or boring, and I'd guess the same is true of others.

At the very least, Jiro believes that they are not as sensible as him on those topics.

Not being as sensible as me on these topics isn't the same thing as not being as sensible as me in general.

And how can I conclude that someone is not as sensible as me (only) on some particular topic, without (in effect) first concluding that he's wrong on the topic?

Your points have what seem to me like pretty obvious responses. If this is actually new to you, then I'm very happy to have this discussion.

But I suspect that you have some broader point. Perhaps you think my overall point is misguided or something. If that's the case, then I think you should come out and say it rather than what you're doing. I'm totally interested in thinking about and responding to actual points you have, but I'm only interested in having arguments that might actually change my mind or yours.

But again, if this is actually new, I'm very interested.

On your actual points:

Not being as sensible as me on these topics isn't the same thing as not being as sensible as me in general.

Sure, but they are also very closely related, and knowing about one will help you make inferences about the other.

without (in effect) first concluding that he's wrong on the topic

There are plenty of excellent ways to make educated guesses about how sensible someone is being in a given area.

For example, you might look at closely or not so closely related topics and see if they are sensible there. Or you might look at a few of their detailed arguments and see if they ask the questions you would ask (or similarly good ones). You can see if they respond to counterarguments in advance. You can see if they seem like they change their mind substantially based on evidence. etc. etc. etc.

But as I said, if this is actually new to you, I'm actually super excited to explain further.

Your points have what seem to me like pretty obvious responses. If this is actually new to you, then I'm very happy to have this discussion.

There's a point intermediate between "completely new" and "just being difficult".

I'm obviously not completely new here, but I do honestly find that what you're saying doesn't seem to make much sense.

Sure, but they are also very closely related

If someone thinks that the Earth is flat and perpetual motion machines are real, I can say that he's probably not a very sensible person in general, and then I can conclude he's probably not very sensible on the particular subject of whether Jews eat babies. I don't need to assume anything in particular about the truth of "Jews eat babies" to do this.

But if someone is sensible on other subjects and then suddenly rants to me about how Jews eat babies, I have a much harder time concluding that he is not very sensible on that particular subject without first assuming that he is wrong about that subject.

Or you might look at a few of their detailed arguments and see if they ask the questions you would ask (or similarly good ones).

In order to do that I would have to assume that I know what questions are the right ones and that he does not. Assuming this would amount to assuming that I am right about the subject and he is wrong.

You can see if they seem like they change their mind substantially based on evidence.

Likewise, this would only work if I assume that certain evidence "should" change his mind and that his failure to do so is a mistake, in which case I am again assuming I am right about the subject.

There's a point intermediate between "completely new" and "just being difficult".

Fair enough. To me, your previous words pattern matched very strongly to 'being difficult because they think this is dumb but don't want to say why because it seems like too much work' (or something). My mistake.

I didn't mean new to LW, I meant new to the questions you were posing and the answers you got.

Back on the topic at hand,

In order to do that I would have to assume that I know what questions are the right ones and that he does not. Assuming this would amount to assuming that I am right about the subject and he is wrong.

Consider the following: you meet a friend of a friend who seems reasonable enough, and they start telling you about their startup. They go on and on for a long time, but try as you might, you can't figure out how on earth they're going to make money. Finally, you delicately ask, "how do you intend to make money?". They give some wishy-washy answer.

Here they have failed to ask a question that you know to be important. You know this quite definitely. Even if they thought that the question were somehow not relevant, if they knew it was usually relevant, they would probably explain why it's not in this particular case. Much more likely that they are just not very good at thinking about startups.

Similarly, if they anticipate all of your objections and questions, you will probably think they are being pretty reasonable and be inclined to take them more seriously. And rightly so; that's actually decent evidence.

in which case I am again assuming I am right about the subject

There's a middle ground between 'assuming I am right' and 'assuming they are right'. You can instead be unsure how likely they are to be right, and try to figure it out. One way you can figure it out is by trying to assess whether they seem like they are doing good epistemic things (do they actually pause to think about things, do they try to understand people's points, do they respond to the actual question, do they make arguments that later turn out to be convincing, do they base things on believable numbers, do they present actual evidence for their views, etc.).

Are you familiar with the idea of 'latent variables' from Bayesian statistics? Are you used to thinking about it in the context of people and the real world? The basic idea is that you can infer a hidden property of a thing by observing many things it affects (even if it affects them only noisily).

For example, if you go to a small school and observe many students doing very impressive science experiments, you might infer some hidden cause that makes the school's students smart. You might then also guess that in several years, different students at the same school will do well on their SATs, even though that's not directly related to your actual observations.

I suspect thinking a bunch about latent variables in the real world might be useful for you, especially as it relates to inferring where people are reasonable and to what degree. Especially the idea of using data from different topics to improve your estimate for a given topic (say, using test scores from other students to improve your quality estimate for a specific student).

This might be a good starting point: http://www.stat.columbia.edu/~gelman/research/published/multi2.pdf (read until sec 2.3).
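For what it's worth, here's a minimal numerical sketch of that school example, with made-up illustrative numbers (nothing here is taken from the linked paper): treat the school's quality as a latent variable, infer it from a few noisy student scores with a conjugate normal-normal update, and use the inferred quality to predict a different student at the same school.

```python
import numpy as np

# Illustrative assumptions: prior belief about the school's latent quality
# (standardized units), and how noisy any individual student's score is
# around that latent quality.
prior_mean, prior_var = 0.0, 1.0
noise_var = 4.0

# Observed (made-up) standardized science-project scores for five students.
scores = np.array([1.8, 2.3, 1.1, 2.0, 1.6])
n = len(scores)

# Conjugate normal-normal update for the latent school quality:
# posterior precision is the sum of the prior and data precisions.
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mean = post_var * (prior_mean / prior_var + scores.sum() / noise_var)
print(f"school quality given scores: mean={post_mean:.2f}, sd={np.sqrt(post_var):.2f}")

# Prediction for a *different* student at the same school (say, a future
# SAT-like score in the same units): centered on the inferred latent
# quality, with individual noise added back in.
pred_sd = np.sqrt(post_var + noise_var)
print(f"new student's score: mean={post_mean:.2f}, sd={pred_sd:.2f}")
```

The point of the sketch is just the structure: several observations that are not directly about the thing you care about still sharpen your estimate of the hidden variable, and that estimate then transfers to predictions about new, unobserved cases.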

Here they have failed to ask a question that you know to be important.

How do I know that the question is important, though? I can't just assume it to be so, or we get the same problem I pointed out--my conclusion that he is worse than me is just being forced by my assumptions.

Of course, if my friend says "yeah, that's important--why didn't I think of that?" then my conclusion is fine. But I think that's going to be pretty rare among creationists, homeopaths, and people who think Jews eat babies.

...thinking a bunch about latent variables in the real world...

That's why I distinguished not being sensible on a topic and not being sensible in general.

If someone is generally not sensible, I can use facts from outside a particular area to conclude that he won't be sensible within a particular area. That's basically using latent variables.

If someone compartmentalizes his lack of sense, so he's only not sensible in one area (for instance, a creationist who is perfectly fine at calculating restaurant tips), this isn't going to work.

Ahhhh, maybe I see what you're complaining about.

Are you primarily thinking of this as applying to creationists etc?

Part of the reason I put the caveat 'people about as reasonable as you' in the first place was to exclude that category of people from what I was talking about.

That is not the central category of people I'm suggesting this for. Also, I'm not clear on why you would think it was.

I'm not using creationists as an example because it's central; I'm using it as an example because it's unambiguous. It's really hard to sidetrack the argument by suggesting that maybe the creationists are right after all, or that I'm being arrogant by thinking the creationists are mistaken, etc., so creationists work well as an example.

(And an idea that works for central examples but fails for edge cases is an idea that fails.)

Part of the reason I put the caveat 'people about as reasonable as you' in the first place was to exclude that category of people from what I was talking about.

But if you add that exception, it swallows the rule. Most people think their opponents are more unreasonable than themselves.

(Sorry for the long delay)

Ah, I see why you're arguing now.

(And an idea that works for central examples but fails for edge cases is an idea that fails.)

Ironically, this is not a universal criterion for the success of ideas. Sometimes it's a very useful criterion (think mathematical proofs). Other times, it's not a very useful one (think 'choosing friends' or 'mathematical intuitions').

For example, the idea of 'cat' fails for edge cases. Is this a cat? Sort of. Sort of not. But 'cat' is still a useful concept.

Concepts are clusters in thing space, and the concept that I am pointing at is also a cluster.

This comment on that post is especially relevant.

Maybe I'm still misunderstanding.

Just a quick note on your main example - in math, and I'm guessing in theoretic areas of CS as well, we often find that searching for fundamental obstructions to a solution is the very thing that allows us to find the solution. This is true for a number of reasons. First, if we find no obstructions, we are more confident that there is some way to find a solution, which always helps. Second, if we find a partial obstruction to solutions of a certain sort, we learn something crucial about how a solution must look. Third, and perhaps most importantly, when we seek to find obstructions and fail, we may find our way blocked by some kind of obstruction to an obstruction, which is a shadow of the very solution we seek to find, and by feeling it out we can find our way to the solution.