This idea isn't totally developed, so I'm putting it in Discussion for now.

Introduction:

A few hands have been wrung over how to quickly explain fundamental Less Wrong ideas to people, in a way that lets those ideas be approached, appraised, and considered rather than left isolated and bereft across an inferential gulf.

I'm pretty embarrassed to say that I hardly talk about these things with people in my everyday life, even though they make up a major part of my worldview and outlook. I don't particularly care about making people have similar beliefs to me, but I feel like I'm doing my friends a disservice by not adequately explaining things that I've found so personally helpful. (Ugh, that sounds pseudo-religious. Cut me off here if this is a Bad idea.)

Would it be useful to start a project (read: group of posts by different people) to find ways to bridge said gaps in normal conversation? (Normal conversation meaning talking to a non-hostile audience that nonetheless isn't particularly interested in LW ideas.) Mainly to talk about rationality with friends and family members and whatnot, and possibly to help raise the sanity waterline (though this wasn't designed to do that).

A problem with the Sequences for an indifferent audience is that they assume the reader cares. I find that when I try to explain ideas like holding off on proposing solutions until the problem has been discussed, it just comes across as boring, even to people who aren't opposed to the idea at all.

With an ideological audience, the problem is much more difficult. Not only do you need to explain why something is correct; you also need to convince them that believing it is more important than holding on to their ideology, and that they should lower their "defenses" enough to actually consider it.

I think that, should this project be undertaken, it should be heavily tested and experiment-based: people would actually try out the techniques on other people to see whether they work.

Background/Thoughts/Questions:

Do we actually want to do this? It seems like it's a step towards possibly PR-damaging evangelism, or just being generally annoying in conversation, among other things. On the other hand, I still want to be able to talk about these things offline every now and then.

It's been said that being half a rationalist is dangerous. How do you communicate enough rationality for it not to be dangerous? Or would they have to go all in, making casual conversation about it semi-pointless?

The inferential gaps that need crossing probably vary a lot by personal background. Once I was able to explain basic transhumanism (death is bad, we can probably enhance ourselves using technology) to someone and have them agree with and like it almost immediately. Another time, the other person in the conversation just found it gross.

There are probably ways of explaining LW concepts that rely on ideas the listener already holds but that would mess up their thinking (e.g. explaining cognitive bias through Original Sin might be a bad idea). How do you cross into rational ideas from nonrational ones? Should you try to explain rational ideas exclusively in terms of rational beliefs they already have? Could you reliably explain an idea to someone and expect it to cause them to question the very thing you explained it in terms of (i.e. you explain A in terms of B, but A causes people to reject B)?

For talking to an ideological person, I think the main common goal should be to convince them that a) ideas can be objectively true, b) it's good to abandon false beliefs, and c) ideological people tend to rationalize things to fit their ideology and to "view arguments as soldiers".

Comments:

I have a Google doc (so I can fiddle with it wherever I am) with lots of fragments on the subject of toxic infectious memes, including how not to come across as carrying one. This includes how to actually sell someone on an idea without coming across as a droning crank. Snippets below:

Precis: pull, not push. You have to be interesting and lure them in. Then they'll do the work to close the gap themselves. You cannot inflict it on them, it just doesn't work like that.

If you need to change someone else's mind, you need to actually sell the idea. And this always has to be done with a pull, not a push - attract them to your idea. This is not quick, but push just doesn't convince.

Note that you won't find their true rejection by bluntly asking, as they will detect "sales!" and go defensive, which is quite rational.

(I am awful at selling things for money, but try to sell people on ideas more or less every second I'm writing or talking. How am I doing on this one?)

Some people have an immediate "ugh" reaction to the idea of selling anything in any way, but there are plenty of white-hat means to do so, e.g. 1 2. There is danger of dark arts here, but keep an eye on your moral compass and you'll be fine.

To herd cats, first work out the local value of tuna. (This is my law of volunteer motivation, but it certainly applies here. There is danger of dark arts when you apply this to selling an idea; watch your moral compass more closely.)

To herd cats, first work out the local value of tuna.

I'm a bit confused by this. Could you elaborate?

Stretching a metaphor: "to herd cats" means attempting to control the uncontrollable. Volunteers will work ten times as hard as any paid worker, but only on what they want to. So getting volunteers to do what you want them to do is commonly compared to herding cats.

Cats can't be herded - they do whatever they want to, which will not be to follow in an orderly fashion. However, you can get them to come to you very quickly and effectively if you have a can of tuna to hand. Similarly, you can get volunteers to work productively by making what you want them to do highly attractive to them.

That is, you apply a "pull" rather than a "push".

I mentioned it because you can use this a bit in attracting people to your ideas. The danger is that they'll cargo-cult the idea instead of understanding it, and that they'll start following you. For extra loss, have both happen at once. For extra evil, don't be worried that this happened. For maximal evil, invoke it deliberately. So if you want to use this to attract people to your ideas and you have a functioning moral compass, you need to be quite careful.

It's important to distinguish between the goal of communicating with others in ways that feel natural (e.g., without constantly having to back up and fill over inferential gaps), vs. the goal of sharing with others the Good News about rationality.

I think you are talking about the latter, though I'm not 100% sure.

The best way I know of to do that is not to talk about rationality at all... indeed, not to talk about cognition at all. Rather, I talk with people about specific things in the world they consider worth talking about, and I share my thinking about those things with them.

If my thinking about that subject is clear and cogent and well-grounded in observable reality and statistical reasoning, and if it leads me to useful conclusions, then I am demonstrating the usefulness of such thinking; consistently demonstrated usefulness has a way of compelling people, especially if I don't appear to be trying to convince them of anything.

Conversely, if my thinking about that subject is incoherent or grounded in ideology or doesn't lead to useful conclusions, then I'm unlikely to be much help (though the conversation might still be useful, for example, if it leads me to a clearer understanding).

Precis: show, don't tell. Lure by example.

This often requires a real-life test case, which may not be convenient.

Well, yes, but it's really hard to get away from the need to observe reality if we care about the difference between truths and compelling falsehoods.

Sure, among people with some level of trust or shared theoretical background it's fine to have proxies for direct observation (like trustworthy reports of observations, or careful speculation grounded in previously accepted theory, or even analogies to other systems generally understood to be similar). Which is, as you say, convenient.

But those proxies tend to break down when talking to "outsiders," which seemed to be the context. (Though as I said initially, I was not 100% sure of the context.) If an evolutionary biologist tells me that recent genome research shows a particular fact about the evolutionary history of primates, I will have a lot of confidence in that fact, but if I'm talking to a creationist that will cut no ice.

That said: those proxies are also easily abused by "insiders" to promote beliefs that just ain't so, which is one reason periodically talking to "outsiders" is valuable even if one isn't convincing them of anything.

It's also why I prefer to avoid conversations that are just about showing other people that they're wrong and I'm right. (Which is not to say I always succeed in avoiding them.)

Admittedly, there's always the option of giving up on demonstrating the benefits of rationality and instead relying on rhetorical techniques for convincing people they ought to be rational. It's best to use that approach only when I'm "certain" that I'm correct, though.

It's important to distinguish between the goal of communicating with others in ways that feel natural (e.g., without constantly having to back up and fill over inferential gaps), vs. the goal of sharing with others the Good News about rationality.

That's a useful distinction, and there are aspects of both. I'm personally more annoyed by the former, finding it difficult to explain these things in a non-boring/confusing way.

I'd be lying to say that I'm not interested in the latter, but I still find that to be a bit on the creepy side... Though reading through the post again, it really is mostly talking about that; in the next draft I'll emphasize the other more.

On your other point: when I do that, I don't think the mode of thinking is transferred enough. A lot of times I'll do that and people will agree, but think that it was just me being insightful. I'd rather have them learn how to do it too.

A lot of times I'll do that and people will agree, but think that it was just me being insightful. I'd rather have them learn how to do it too.

Sure, that makes sense. That said, IME developing a reputation for being insightful makes it a lot more likely that people will learn from me. Also, practicing transparency helps... that is, getting better over time at describing my thinking in ways people can follow.

Totally agreed. I'd just like a bit of help with that last part, describing/teaching it well, so that it doesn't sound too trivial to remember or too abstract to care about.

Gotcha.

Well, here too, concreteness helps. Do you have any specific cases in mind where you've attempted to describe your clear thinking about a particular issue, but felt that the result sounded too trivial or too abstract?

Right now the only example I have is your post, about which you've gotten a fair amount of feedback from various people... has any of that feedback been helpful along this axis? More generally, if you had to rewrite it to make your thinking more transparent to your readers, would you change anything?

Also, you might consider separating your "proposal" into two sections, one of which is labeled "introduction" or "background." As it is, much of your proposal isn't actually a proposal.

Yeah, that makes more sense. Thanks.

You have some formatting issues.

Oh. Ew. Thanks for the heads up, that was pretty unreadable.

The "holding off on proposing solutions" example made me imagine a scenario:

You're going to a meeting to address some problem x, and you have enough authority so that you can say something at the start and have people listen to you. So you say "To start off, can we take a minute just to talk about problem x, and not about what to do about x? I just want to be sure that we're all clear about what the problem is before we get caught up in a discussion of particular solutions to it." Then hopefully the meeting goes well - the group holds off on proposing solutions and eventually comes to a good solution.

And that's it, for the time. You've given people a taste of rationality that you can build on. They didn't care about rationality, they cared about problem x, but you showed them that a certain way of thinking and doing things helped them deal with problem x.

You can build on that. For instance, maybe after the meeting you're talking with someone who was there about the fact that the meeting went well, and you could give them a little more detail about what you did: "Once people start talking about possible solutions, they get locked into their first suggestions and they have trouble thinking about the problem creatively and coming up with other solutions." Or maybe there will be another meeting where you suggest holding off on proposing solutions, and you could say a little something more about it then.

After a couple more cases where one of your rationality tools has been useful, people may notice a pattern and comment on it, and you could say something more general about rationality, like "There's actually a whole field of study that looks at people's thinking, trying to figure out what people can do to come up with good ideas and avoid making mistakes. I've just read a little bit about what they've found and picked up a few tricks." If you find someone who is interested and asks you questions, then you could go into more detail and give them a more thorough explanation of some rationality topic, or explain some of the fundamentals of rationality, or refer them to this blog.

The general principles from this example: Bring up rationality topics when they're relevant. Be brief. Keep the focus mostly on the issue that people actually care about, not on the meta-level of rationality. Show that rationality works (helps people do things they care about). Don't worry if your explanation is very limited or imperfect. You can build your message over time, through multiple conversations which incorporate bits of rationality. You can have a more in depth conversation about rationality if someone becomes interested enough.

On my robotics team I've pretty much taken that approach (the "let's talk about the problem before the solutions" one). It worked pretty well, and everyone pretty much converged on the same ideas, which seem to be working.

The annoying part is that I'm not sure how well people internalized it.

I think this is very good. Use whatever techniques and modes of argument appear to work, allowing only a glimpse of the whole LW background. Then let them come to you.


"It's been said that being half a rationalist is dangerous. How do you communicate enough rationality for it to not be dangerous? "

I think this is quite a hurdle. I struggle a lot when I think about how to get others to be more rational, or how to communicate ideas that people might only understand half of. The Sequences are great, but they tend to build from the ground up, and so it's hard for them to seem relevant. Perhaps a reductive approach would work: begin with essays/tools/models that bear on thought processes the layman regularly engages in, before asking them to explore those thought processes themselves.

I think a good starting point is the "map and the territory" metaphor. This neatly captures the main point of being less wrong -- the focus on having a more accurate mental model of reality. Other, more advanced concepts can come later.