Related: Cached Thoughts

Last summer I was talking to my sister about something. I don't remember the details, but I invoked the concept of "truth", or "reality" or some such. She immediately spit out a cached reply along the lines of "But how can you really say what's true?".

Of course I'd learned some great replies to that sort of question right here on LW, so I did my best to sort her out, but everything I said invoked more confused slogans and cached thoughts. I realized the battle was lost. Worse, I realized she'd stopped thinking. Later, I realized I'd stopped thinking too.

I went away and formulated the concept of a "Philosophical Landmine".

I used to occasionally remark that if you care about what happens, you should think about what will happen as a result of possible actions. This is basically a slam dunk in everyday practical rationality, except that I would sometimes describe it as "consequentialism".

The predictable consequence of this sort of statement is that someone starts going off about hospitals and terrorists and organs and moral philosophy and consent and rights and so on. This may be controversial, but I would say that causing this tangent constitutes a failure to communicate the point. Instead of prompting someone to think, I invoked some irrelevant philosophical cruft. The discussion is now about Consequentialism, the Capitalized Moral Theory, instead of the simple idea of thinking through consequences as an everyday heuristic.

It's not even that my statement relied on a misused term or something; it's that an unimportant choice of terminology dragged the whole conversation in an irrelevant and useless direction.

That is, "consequentialism" was a Philosophical Landmine.

In the course of normal conversation, you passed through an ordinary spot that happened to conceal the dangerous leftovers of past memetic wars. As a result, an intelligent and reasonable human was reduced to a mindless zombie chanting prerecorded slogans. If you're lucky, that's all. If not, you start chanting counter-slogans and the whole thing goes supercritical.

It's usually not so bad, and no one is literally "chanting slogans". There may even be some original phrasings involved. But the conversation has been derailed.

So how do these "philosophical landmine" things work?

It looks like when a lot has been said on a confusing topic, usually something in philosophy, a large complex of slogans and counter-slogans gets installed as cached thoughts around it. Certain words or concepts will trigger those cached thoughts, and any attempt to mitigate the damage will trigger more of them. Of course they will also trigger cached thoughts in other people, which in turn trigger still more. The result is that the conversation rapidly diverges from the original point to some useless yet heavily discussed attractor.

Notice that whether a particular concept will cause trouble depends on the person as well as the concept. Notice further that this implies that the probability of hitting a landmine scales with the number of people involved and the topic-breadth of the conversation.
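As a toy illustration of that scaling (a rough model only, assuming each person-concept pairing independently trips with some small probability $p$): in a conversation with $n$ people that touches $m$ potentially mined concepts, the chance of at least one detonation is $1 - (1 - p)^{nm}$, which climbs rapidly toward certainty as either $n$ or $m$ grows.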

Anyone who hangs out on 4chan can confirm that this is the approximate shape of most thread derailments.

Most concepts in philosophy and metaphysics are landmines for many people. The phenomenon also occurs in politics and other tribal/ideological disputes. The ones I'm particularly interested in are the ones in philosophy, but it might be useful to divorce the concept of "conceptual landmines" from philosophy in particular.

Here are some common ones in philosophy:

  • Morality
  • Consequentialism
  • Truth
  • Reality
  • Consciousness
  • Rationality
  • Quantum

Landmines in a topic make it really hard to discuss ideas or do work in that field, because chances are, someone is going to step on one, and then there will be a big noisy mess that interferes with the rather delicate business of thinking carefully about confusing ideas.

My purpose in bringing this up is mostly to precipitate some terminology and a concept around this phenomenon, so that we can talk about it and refer to it. It is important for concepts to have verbal handles, you see.

That said, I'll finish with a few words about what we can do about it. There are two major forks of the anti-landmine strategy: avoidance and damage control.

Avoiding landmines is your job. If it is a predictable consequence that something you could say will put people in mindless slogan-playback-mode, don't say it. If something you say makes people go off on a spiral of bad philosophy, don't get annoyed with them, just fix what you say. This is just being a communications consequentialist. Figure out which concepts are landmines for which people, and step around them, or use alternate terminology with fewer problematic connotations.

If it happens, which it does, as far as I can tell my only effective damage control strategy is to abort the conversation. I'll probably think that I can take on those stupid ideas here and now, but that's just the landmine trying to go supercritical. Just say no. Of course letting on that you think you've stepped on a landmine is probably incredibly rude; keep it to yourself. Subtly change the subject, or rephrase your original point without the problematic concepts.

A third prong could be playing "philosophical bomb squad", which means permanently defusing landmines by supplying satisfactory nonconfusing explanations of things without causing too many explosions in the process. Needless to say, this is quite hard. I think we do a pretty good job of it here at LW, but for topics and people not yet defused, avoid and abort.

ADDENDUM: Since I didn't make it very obvious, it's worth noting that this happens with rationalists, too, even on this very forum. It is your responsibility not to contain landmines as well as not to step on them. But you're already trying to do that, so I don't emphasize it as much as not stepping on them.

146 comments

I love the landmine metaphor - it blows up in your face and it's left over from some ancient war.

1BillyOblivion11y
Not always from some ancient war.
0hankx778711y
This is quite true.

This is insightful. I also think we should emphasize that it is not just other people or silly theistic, epistemic relativists who don't read Less Wrong who can get exploded by Philosophical Landmines. These things are epistemically neutral, and the best philosophy in the world can still become slogans if it gets discussed too much. E.g.:

Of course I'd learned some great replies to that sort of question right here on LW, so I did my best to sort her out,

Now, I wasn't there and I don't know you. But it seems at least plausible that that is exactly what your sister felt she was doing; that this is what having your philosophical limbs blown off feels like from the inside.

I think I see this phenomenon most with activist-atheists who show up everywhere prepared to categorize any argument a theist might make and then give a stock response to it. It's related to arguments as soldiers. In addition to avoiding and disarming landmines, I think there is a lot to be said for trying to develop an immunity, so that even if other people start tossing out slogans, you don't. I propose that it is good policy to provisionally accept your opponent's claims and then let your own arguments do th... (read more)

"Hmm ... what do you mean by 'randomly'?"

YES. The most effective general tactic in religious debates is to find out what exactly the other guy is trying to say, and direct your questions at revealing what you suspect are weak points in the argument. Most of this stuff has a tendency to collapse on its own if you poke it hard enough -- and nobody will be able to accuse you of making strawman arguments, or not listening.

The most effective general tactic in religious debates is to find out what exactly the other guy is trying to say, and direct your questions at revealing what you suspect are weak points in the argument.

Of course that goes for all debates, not just religious ones.

6NancyLebovitz11y
That one makes me crazy. How can anyone think they know enough about universes to make a strong claim about what sorts of universes are likely?
3thomblake11y
I don't understand how the Atheist gets from the Theist's claims about the creation of the universe to "natural selection". I thought that was the bad pattern-matching in the first example, but then they make the same mistake in the second example. Does the Atheist think the universe is an evolved creature?
1Jack11y
The atheist doesn't think the universe evolved; he thinks the complex things in the universe evolved. The theist I was modeling is thinking of, say, the human eyeball when he thinks about the complexity of the universe. But I agree the shorthand dialogue is ambiguous.
4DavidAgain11y
Some complex things in the universe evolved. But plenty didn't. The 'fine-tuning of physical constants' argument is quite fashionable at the moment. It's interesting how easy it is to project a certain assumption onto arguments, though. Your atheist response assumed natural selection, while a response above protests that the theist hasn't experienced enough universes to generalise about them, so presumably interprets the statement to refer to the universe as a whole. All the more reason to make sure you understand what people mean, I suppose!
2[anonymous]11y
Yep. Fixed that to be more clear. Also added an addendum reflecting your first point (not just irrationalists).
-4Maybe_a11y
You're right. Exactly. Unless there are on the order of $2^{\mathrm{KolmogorovComplexity}(\mathrm{Universe})}$ universes, the chance of it being constructed randomly is exceedingly low. Please, do continue.
[-][anonymous]11y140

Unless there are on the order of $2^{\mathrm{KolmogorovComplexity}(\mathrm{Universe})}$ universes, the chance of it being constructed randomly is exceedingly low.

An extremely low probability of the observation under some theory is not itself evidence. It's extremely unlikely that I would randomly come up with the number 0.0135814709894468, and yet I did.

It's only interesting if there is some other possibility that assigns a different probability to that outcome.
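To put that standard point in symbols (a sketch, nothing here beyond Bayes' theorem): for hypotheses $H_1$ and $H_2$ and observation $E$,

$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}$

so a tiny $P(E \mid H_1)$ counts against $H_1$ only to the extent that some rival $H_2$ assigns $E$ a higher probability. If every candidate theory makes the observation equally improbable, the odds don't move at all.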

1loup-vaillant11y
I can't assign any remarkable property to the number you came up with, not even the fact that I came up with it myself earlier (I didn't). So of course I'm not surprised that you came up with it.

Our universe, on the other hand, supports sentient life. That's quite a remarkable property, which presumably spans only a tiny fraction of all "possible universes" (whatever that means). Then, under the assumption that our universe is the only one and was created randomly, we're incredibly lucky to exist at all. So, there is a good deal of improbability to explain. And God just sounds perfect.

Worse, I actually think God is a pretty good explanation until you know about evolution. And Newton, to hint at the universality of simple laws of physics. And probably more, since we still don't know, for instance, how intelligence and consciousness actually work (though we do have strong hints). And multiverses, and anthropic reasoning, and… Alas, those are long-winded arguments.

I think the outside view is more accessible: technology accomplishes more and more impressive miracles, and it doesn't run on God (it's all about mechanisms, not phlebotinum). But I don't feel this is the strongest argument, and I wouldn't try to present it as a knock-down.

Now, if I actually argue with a theist, I won't start with those. First, I'll try to agree on something. Most likely, non-contradiction and excluded middle: either there is a God, or there is none, and either way, one of us is mistaken. If we can't agree on even that (happens with some agnostics), I just give up. (No, wait, I don't.)
7A1987dM11y
Or the two of you mean different things by "a God", or even by "there is".
1loup-vaillant11y
First things first:

1. Make sure everyone accepts the non-contradiction principle, and mark the ones that don't as "beyond therapy" (or patiently explain that "truth" business to them).
2. Make sure everyone uses the same definitions for the subject at hand.

My limited experience tells me that most theists will readily accept point one, believing that there is a God and that I'm mistaken to believe otherwise. I praise them for their coherence, then move on to what we actually mean by "God" and such.

EDIT: I may need to be more explicit: I think the non-contradiction principle is more fundamental than the possible existence of a God, and as such should be settled first. Settling the definition for any particular question always comes second. It only seems to come first in most discussions because the non-contradiction principle is generally common knowledge.

Also, I do accept there is a good chance the theist and I do not put the same meanings under the same words. It's just simpler to assume we do when using them as an example for discussing non-contradiction: we don't need to use the actual meanings yet. (Well, with one exception: I assume "there is a God" and "there is no God" are mutually contradictory for any reasonable meanings we could possibly put behind those words.)
1Creutzer11y
What exactly do you mean by "good explanation"?

Probable, given the background information at the time.

Before Darwin, remember, the only known powerful optimization processes were sentient beings. Clearly, Nature is the result of an optimization process (it's not random by a long shot), and a very, very powerful one. It's only natural to think it is sentient as well.

That is, until Darwin showed how a mindless process, very simple at its core, can do the work.

3DavidAgain11y
I've come across this argument before - I think Dawkins makes it. It shows a certain level of civilised recognition of how you might have another view. But I think there's also a risk that because we do have Darwin, we're quick to just accept that he's the reason why Creation isn't a good explanation. I actually think Hume deconstructed the argument very well in his Dialogues Concerning Natural Religion. Explaining complexity through God suffers from various questions: 1) what sort of God (Hume suggests a coalition of Gods, warring Gods, or a young or old and senile God as possibilities); 2) what other things seem to often lead to complexity (the universe as an organism, essentially); 3) the potential of the sheer magnitude of space and time to allow pockets of apparent order to arise.
0loup-vaillant11y
Whose answers tend to just be "Poof Magic". While I do have a problem with "Poof Magic", I can't explain it away without quite deep scientific arguments. And "Poof Magic", while unsatisfactory to any properly curious mind, has no complexity problem. Now that I think of it, I may have to qualify the argument I made above. I didn't know about Hume, so maybe the God Hypothesis wasn't so good even before Newton and Darwin after all, at least assuming the background knowledge available to the best thinkers of the time. The laypeople, however, may not have had a choice but to believe in some God. I mean, I doubt there was some simple argument they could understand (and believe) at the time. Now, with the miracles of technology, I think it's much easier.

Have you ever come up with what appeared (to you) to be a knock-down argument for your position; but only after it was too late to use it in whatever discussion prompted the thought? Have you ever carried that argument around in the back of your skull, just waiting for the topic to come up again so you can deploy your secret weapon?

I have. Hypothesis: At least some of the landmines you describe stem from engaging someone on a subject where they still have such arguments on their mental stack, waiting not for confirmation or refutation but just for an opportunity to use them.

The French, unsurprisingly, have an idiom for this.

The French, unsurprisingly, have an idiom for this.

The English have a word for that.

I was aware of that and was waiting for an excuse to use the idea.

7BerryPick611y
So... Many... Times...
5Desrtopa11y
Of course, since you're not the person who actually believes in the position you're trying to refute, it's much harder to judge what would actually seem like a knockdown argument to someone who does believe it. I've stood by watching lots of arguments where people hammered on the same points over and over in frustration ("how can you not see this is irrefutable!?"), points I had long ago filed away for future notice as Not Convincing.

"how can you not see this is irrefutable!?"

If I ever had this tendency it was knocked out of me by studying mathematics. I recommend this as a way of vastly increasing your standard for what constitutes an irrefutable argument.

6[anonymous]11y
I'm not even an amateur at math, but yes yes yes. Doing a few proofs and derivations and seeing the difference between math-powered arguments and everything else is huge.
2[anonymous]11y
This has often happened to me. Whenever it happens, I immediately steer the conversation back to the relevant discussion, instead of just waiting passively for it to arise in the future.

Of course letting on that you think you've stepped on a landmine is probably incredibly rude; keep it to yourself.

"Well, that's a pretty controversial topic and I'd rather our conversation not get derailed."

Sometimes you can tell people the truth if you're respectful and understated.

I find it annoying when people say that sort of thing. I want to respond (though don't, because it's usually useless) along the lines of:

"Yes, it is a controversial topic. It's also directly relevant to the thing we are discussing. If we don't discuss this controversial thing, or at least figure out where we each stand on it, then our conversation can go no further; discussing the controversial thing is not a derailment, it's a necessary precondition for continuing to have the conversation that we've been having.

... and you probably knew that. What you probably want isn't to avoid discussing the controversial thing, but rather you want for us both to just take your position on the controversial thing as given, and proceed with our conversation from there. Well, that's not very respectful to someone who might disagree with you."

1John_Maxwell11y
Fair enough. One way to deal with this issue might be to make it clear that the discussion proceeds conditional on the correctness of their position on the controversial thing. Either side could do this.
0Ford11y
I think that only works if you say "even if that were true, which we don't need to discuss now, I would argue that..." It's much harder to get someone to accept "for the sake of argument" something they strongly disagree with. For example, I would only accept "morality comes from the Bible" if I had a convincing Bible quote to make my point.

If it happens, which it does, as far as I can tell the only effective damage control strategy is to abort the conversation.

My ideal approach (not that I always do so) is more to stop taking sides and talk about the pluses and minuses of each side. My position on a lot of subjects does boil down to "it's really complicated, but here are some interesting things that can be said on that topic". I don't remember having problems being bombarded with slogans (well - except with a creationist once) since usually my answer is on the lines of "eh, maybe". (This applies particularly to consequentialism and deontology, morality, rationality, and QM.)

I also tend to put the emphasis more on figuring out whether there is actually a substantial disagreement, or just different use of terms (for things like "truth" or "reality").

I use a similar approach, and it usually works. Make clear that you don't think you're holding the truth on the subject, whatever it means, but only that the information in your possession led you to the conclusion you're presenting. Manifest curiosity for the position the other side is expressing, even if it's cached, and even if you heard similar versions of it a hundred times before. Try to drive them out of cached mode by asking questions and forcing them to think for themselves outside of their preconstructed box. Most of all, be polite, and point out that it's fine to have some disagreement, since the issue is complicated, and that you're really interested in sorting it out.

6Luke_A_Somers11y
This is especially important in the particular case mentioned in the OP - it's easy to jump from seeing someone saying that the truth exists to thinking that they think they have that truth.
[-][anonymous]11y120

My ideal approach (not that I always do so) is more to stop taking sides and talk about the plus and minuses of each side.

Right. I suppose refusing to chant slogans is a good way to avoid the supercritical slogan ping-pong.

I lean towards abort mostly out of learned helplessness, and because if I think about it soberly enough to strategize, I see little value in any strategy that involves sorting out someone's confusion instead of disengaging. Just my personal approach, of course.

Thanks for bringing up alternate damage control procedures; we'll need as many as we can get.

2loup-vaillant11y
Even that can fail, especially with "truth" and "reality". I have a concept behind "truth", and I'm sometimes not even allowed to use it, because the term "truth" has been hijacked to mean "really strong belief" instead (sometimes explicitly so, when I ask). I do try to get them to use "belief", and let me use "truth" (or "correctness") my way, but it's no use. I believe their use of a single word for two concepts mixed them up, and I never managed to separate them back. Also, they know that if they let me define words the way I want, I'll just win whatever argument we're having. It only convinces them that my terminology must somehow be wrong. Finally, there is also a moral battle here: the very idea of absolute, inescapable truth, whether you like it or not, reeks of dogmatism. As history has shown us countless times, dogmatism is Baad™. (The Godwin point is really a fixpoint.)
-1MugaSofer11y
Maybe I have a different strategy in mind to you, but when I try this, I just end up sitting there trying ineffectually to have an actual conversation while they spew slogans at me.

Of course I'd learned some great replies to that sort of question right here on LW, so I did my best to sort her out, but everything I said invoked more confused slogans and cached thoughts.

Taking the outside view, what distinguishes your approach from hers?

nyan's cached thoughts are better? I mean, it would be nice if we could stop relying on cached thoughts entirely, but there's just not enough time in the day. A more practical solution is to tend your garden of cached thoughts as best you can. The problem is not just that she has cached thoughts but that her garden is full of weeds she hasn't noticed.

2Jack11y
.

"Outside view" refers to some threshold of reliability in the details that you keep in a description of a situation. If you throw out all relevant detail, "outside view" won't be able to tell you anything. If you keep too much detail, "outside view" won't be different from the inside view (i.e. normal evaluation of the situation that doesn't invoke this tool). Thus, the decision about which details to keep is important and often non-trivial, in which case simply appealing to something not being "outside view" is not helpful.

5Jack11y
I took it to be obvious that "taking the outside view" in the grandparent comment meant dropping the detail that 'Nyan and everyone here thinks his cached thoughts are better than his sister's' and so Qiaochu_Yuan's reply was not answering the question.
6Qiaochu_Yuan11y
I have inside view reasons to believe that nyan's cached thoughts are genuinely better. If the point you're trying to make is "repeating cached thoughts is in general not a productive way to have arguments, and you should assume you're doing this by default," I agree but don't think that the outside view is a strong argument supporting this conclusion. And I still think that cached thoughts can be useful. For example, having a cached thought about cached thoughts can be useful.

I agree with all that: my point was just that the question you were replying to asked about the outside view (which in this context I took to mean excluding the fact that we think our cluster of ideas is better than Nyan's sister's cluster of ideas). I'm just saying: rationalists can get exploded by philosophical landmines too and it seems worthwhile to be able to avoid that when we want to even though our philosophical limbs are less wrong than most people's.

Or to put it another way: philosophical landmines seem like a problem for self-skepticism because they keep you from hearing and responding adequately to the concerns of others. So any account of philosophical landmines ought to be neutral on the epistemic content of sloganeering since assuming we're on the right side of the argument is really bad for self-skepticism.

Um, I just wanted to parachute in and say that "cached thought" should not be a completely general objection to any argument. All it means is that you've thought about it before and you're not, as it were, recomputing everything from scratch in real time. There is nothing epistemically suspect about nyan_sandwich not re-deriving their entire worldview on the spot during every conversation.

Cached thoughts can be correct! The germane objection to them comes when you never actually, personally thought the thought that was cached (it was just handed to you by your environment), or you thought it a long time ago and you need to reconsider the issue.

0Jack11y
"Cached thought!" doesn't refute the thought. But OP suggests that persuading his sister of something true was made more difficult by the her cached thoughts on the subject. I've definitely seen similar things happen before. If that is the case then I've probably done it before. I want to be convinced of true things if they are true. So if I have the time and energy it is advantageous to suppress cached thoughts and recompute my answers. Maybe this is only worth doing with certain interlocutors-- people with sufficient education, intelligence and thoughtfulness. Or maybe it is worth doing on a long car ride but not in a rapid-flowing conversation at a party. Part of the problem is that thoughts aren't tagged with this information. Which is why it helps to recompute often and it is often easier to recompute when someone else is providing counter-arguments.
6[anonymous]11y
Not much. It feels reasonable on the inside, but in the outside view, everyone is chanting slogans. That's what makes the landmine process so nefarious. If only one person is chanting slogans, it's just a cached thought, but if the response is also a cached thought and causes more, the conversation is derailed and people aren't thinking anymore. (Though as Qiaochu says, using cached thoughts is usually not bad.) Maybe my self-deprecation was too subtle? I'm going to fix it, but Internet Explorer is too shitty to fix it right now.

This happened to me two days ago, in a workshop on inter-professionalism and conflict resolution, with the word "objective". Which, in hindsight, was kind of obvious, and I should have known to avoid it. The man leading the discussion was a hospital chaplain: a very charismatic man and a great storyteller, but with a predictably different intellectual background than mine. It wouldn't have been that hard to avoid the word. I was describing what ties different health care professionals together: the fact that, y'know, together they're taking care of the same patient, who exists as a real entity and whose needs aren't affected by what the different professionals think they are. I just forgot, for a moment, that people tend to have baggage attached to the word "objective", since to me it seems like such a non-controversial term. In general, not impressed with myself, since I did know better.

Landmines in a topic make it really hard to discuss ideas or do work in that field, because chances are, someone is going to step on one, and then there will be a big noisy mess that interferes with the rather delicate business of thinking carefully about confusing ideas.

Have mainstream philosophers come up with a solution to that? Can LessWrongians learn from them? Do LessWrongians need to teach them?

Try to come up with the least controversial premise that you can use to support your argument, not just the first belief that comes to mind that will support it.

To use a (topical for me) example: someone at Giving What We Can might support effective charities "because it maximises expected utility so you're obliged to do so, duh", but "if you can do a lot of good for little sacrifice, you should" is a better premise to rely on when talking to people in general, as it's weaker but still does the job.

3[anonymous]11y
Not yet I think. Maybe the OP will help us talk about the problem and come up with a solution? I would be interested to know if this is a known thing in philosophy.
2whowhowho11y
I thought you were the OP. Inasmuch as my questions were a disguised point, it is that flare-ups are rare in professional philosophy, and that this is probably because PP has a praxis for avoiding them, which is learnt by absorption and not set out explicitly.
3[anonymous]11y
Sorry for the ambiguous terminology. OP as in Original Post, not Original Poster. As a first guess on the philosophy thing, I imagine it is structural and cultural. AFAIK they don't really have the kinds of discussions that someone can sneak a landmine into. And they seem civilized enough to not go off on tangents of cached thoughts. I just can't imagine stepping on a philosophical landmine while talking to a professional philosopher. Even if their ideas were wrong, it wouldn't be like that. So I wonder what we can learn from them about handling landmines in the field, besides "sweep your area clear of landmines, and don't go into the field".
0tgb11y
Additional ambiguity of "OP": does it refer to the top-level comment or to the article the comment is on? Does anyone have a good way to make this clear when using the term?
1EricHerboso11y
I've never seen that as an additional ambiguity. I've always understood "OP" to mean "the original article", and never "the top level comment". But maybe this is because I've just never encountered the other use (or didn't notice when someone meant it to refer to the top level comment).

Here's a heuristic: There are no slam-dunks in philosophy.

Here's another: Most ideas come in strong, contentious flavours and watered-down, easy-to-defend flavours.

But water things down enough and they are no longer philosophy.

The predictable consequence of this sort of statement is that someone starts going off about hospitals and terrorists and organs and moral philosophy and consent and rights and so on. This may be controversial, but I would say that causing this tangent constitutes a failure to communicate the point. Instead of prompting someone to

... (read more)
8Eugine_Nier11y
Agreed except for this: In my experience the strong flavors are easier to defend, except that they tend to trip people's absurdity heuristics.

I used to occasionally remark that consequentialism is a slam-dunk as far as practical every-day rationality goes; if you care about what happens, you think about what will happen as a result of possible actions. Duh.

The predictable consequence of this sort of statement is that someone starts going off about hospitals and terrorists and organs and moral philosophy and consent and rights and so on. This may be controversial, but I would say that causing this tangent constitutes a failure to communicate the point. Instead of prompting someone to think, you

... (read more)

What's gone wrong? "Consequentialism" is used in philosophy as a term for the view that the morality of an action depends only on the consequences of that action.

Worse, the idea of consequentialism (or "utilitarianism") is often taken by people with a little philosophy awareness to mean the view that "the ends justify the means" without regard for actual outcomes — that if you mean well and can convince yourself that the consequences of an action will be good, then that action is right for you and nobody has any business criticizing you for taking it.

What this really amounts to is not "the ends justify the means", of course, but the horrible view that "good intentions justify actions that turn out to be harmful."

Ambiguity on "ends" probably does some damage here: it originally referred to results but also came to have the sense of purposes, goals, or aims, in other words, intentions.

3Viliam_Bur11y
Thanks for making this obvious. Yes, this is the typical consequentialism strawman, but I did not notice how exactly the meanings of the words are redefined.
1Muhd11y
I'm confused. Consequentialists do not have access to the actual outcomes when they are making their decisions, so using that as a guideline for making moral decisions is completely unhelpful.

It also seems that your statement that good intentions don't justify the means is false. Consider this counterexample: I have 2 choices, A and B. Option A produces 2 utilons with 50% probability and -1 utilon the rest of the time. Option B is just 0 utilons with 100% probability. My expected utility for option A is +0.5 utilons, which is greater than the 0 utilons for option B, so I choose option A. Let's say I happen to get the -1 utilon in the coin flip. At this point, it seems the following three things are true:

1. I had good intentions.
2. I had a bad actual outcome, compared to what would have happened if I had chosen option (or "means") B.
3. I made the right choice, given the information I had at the time.

Therefore, from this example it certainly seems like good intentions do justify the means. But perhaps you meant something else by "good intentions"? Certainly it is bad to delude yourself, but it seems like that is just a case of having the wrong intentions... that is, you should have intended to prevent yourself from becoming deluded in the first place, proceeding carefully and realizing that as a human you are likely to make errors in judgement.
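Spelling out the expected-utility arithmetic in that counterexample (same numbers as above): $EU(A) = 0.5 \times 2 + 0.5 \times (-1) = +0.5$ and $EU(B) = 1.0 \times 0 = 0$, so $EU(A) > EU(B)$, and choosing A is correct ex ante even on the runs where the coin lands on the $-1$ branch.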
0ikrase11y
Plus, people often are unaware of moral injunctions, or even consequentialist heuristics, and tend to think of highly disruptive, ruthlessly optimizing, Grindelwald-style consequentialism, or else of a variety of failure modes that seem consequentialistic but not actually good, such as wireheading, euthanasia, schemes involving extreme inequality, etc.
-1Eugine_Nier11y
It doesn't help that consequentialists care about expected utility, which can be a lot closer to intentions than to results.
1[anonymous]11y
I rewrote that section to clarify. Does it help?

I wish we had a communal list of specific phrases that cause a problem with verbiage substitutes that appear to work. I have been struggling to avoid phrasing with landmine-speak for... I don't know how long. It's extremely difficult. I would be happy to contribute my lang-mines and replacements to a communal list if you were to create it. I'd go make the post right now, but I don't want to steal karma away from you, Nyan, so I leave you with the opportunity to choose to do it first and structure it the way you see fit.

6[anonymous]11y
First off, it's in main, not discussion. That would be why it's not in the discussion list. Secondly, do-ocracy. If you do it, I will accept your legitimacy as supreme dictator of it, and contribute what I can. I'm working on the next one (and enjoying some quality 4chan time), so you're not stepping on my toes. Also, step on more toes. It's not like anyone can fire you.
4Epiphany11y
Oh. Fixed. *Grumbles something about getting more sleep.* You are informing me that this is the social norm at LW, or is this your personal idea of the way we should behave? Oh, I do, but I usually choose my battles. Thanks though. *Starts to consider the best way to structure this list.*
[-][anonymous]11y190

Secondly, do-ocracy...

You are informing me that this is the social norm at LW, or is this your personal idea of the way we should behave?

I dunno about LW, but that's how legitimacy works in most technical volunteer communities (hackerspaces, open source, etc). I also think it's a good idea.

0beoShaffer11y
As is common with social systems, there are a lot of hidden complications, but yes, LW is do-cratic.
[-][anonymous]11y40

This is good personal advice but a bad norm for a community to hold. Can you see why?

4Larks11y
You might be walking around a landmine in order to get to the nuclear bomb. So long as you're defusing something, I think you're ok. But I agree that we don't want to become too habitual at avoiding such issues.

A third prong could be playing "philosophical bomb squad", which means permanently defusing landmines by supplying satisfactory nonconfusing explanations of things without causing too many explosions in the process. Needless to say, this is quite hard. I think we do a pretty good job of it here at LW, but for topics and people not yet defused, avoid and abort.

I suspect this is best done before the "landmine" (which I read as being similar to a factional disagreement; correct me if I'm wrong) gets set off... that is, try to explain away the landmine before it's been triggered or before the person you're talking to has expressed any kind of opinion.

4[anonymous]11y
Yes. I was actually thinking of this as a project on its own. Separate from any given operation, it's a good idea to clear the field of landmines. Trying to do it as part of some other conversation is just begging to set something off and destroy the conversation, not just the minesweeping attempt.
3Viliam_Bur11y
For me, reading LW is like removing the landmines. Many articles are essentially: "if you read or hear this, don't automatically react this way, because that's wrong". The problem is, when there are hundreds of landmines, you need to remove hundreds of them. Even if you know exactly in advance where your path will go, and you only clean landmines in the path, there is still at least a dozen of them. And all this explaining takes time. So in order to get some explanation safely across, you first need to educate the person for a while. Which feels like manipulation. And cannot be done when time is short.

The discussion is now about Consequentialism, the Capitalized Moral Theory, instead of the simple idea of thinking through consequences as an everyday heuristic.

WTF!? Nearly every consequentialist I know is in favor of the Capitalized Moral Theory while admitting that virtue ethics and deontology might work better as everyday heuristics.

5[anonymous]11y
I find that small-c consequentialism is a huge step up from operating on impulse, but "deontological injunctions" are good to special-case expensive and tricky situations. However, most of the good deontological injunctions that I know of are phrased like "don't do X, because Z will happen instead of the naive Y", which is explicitly about consequences and expectations. Likewise for "don't do X, because the conclusion that X is right is most likely caused by an error". The difference is more between caching and recomputing than between deontology and consequentialism. Actually, I don't even know what the moral theory "Consequentialism" means...
2TheOtherDave11y
Those don't seem like deontological injunctions to me at all. Can you clarify why you call them that?
0[anonymous]11y
I've heard people refer to something that does approximately that job as deontological injunctions. I don't much like the name either. Do you have a real example of deontology outperforming consequentialism IRL? Bonus: do you have one that isn't just hardcoding a result computed by consequentialism? (Not intended to be a challenge, BTW; I'm actually curious, because I've heard a lot of people imply it, but can't find any examples.)

Do you have a real example of deontology outperforming consequentialism IRL?

I'm not sure what that would look like. If consequentialism and deontology shared a common set of performance metrics, they would not be different value systems in the first place.

For example, I would say "Don't torture people, no matter what the benefits of doing so are!" is a fine example of a deontological injunction. My intuition is that people raised with such an injunction are less likely to torture people than those raised with the consequentialist equivalent ("Don't torture people unless it does more good than harm!"), but as far as I know the study has never been done.

Supposing it is true, though, it's still not clear to me what is outperforming what in that case. Is that a point for deontological injunctions, because they more effectively constrain behavior independent of the situation? Or a point for consequentialism, because it more effectively allows situation-dependent judgments?

0elspood11y
At least one performance metric that allows for the two systems to be different is: "How difficult is the value system for humans to implement?"
-1whowhowho11y
They can be different metaethically, but the same at the object level.
-5Viliam_Bur11y
6novalis11y
If we look at it from a consequentialist perspective, any success for another meta-ethical theory will boil down to caching a complicated consequentialist calculation. It's just that even if you have all the time in the world, it's still hard to correctly perform that consequentialist calculation, because you've got all of these biases in the way. Take the classic ticking time-bomb scenario, for example: everyone would like to believe that they've got the power to save the world in their hands; that the guy they have caught is the right guy; that they have caught him just in the nick of time. But nobody can produce an example where this has ever been true. So the deontological rule turns out to be equivalent to the outside-view consequentialist argument for ticking time-bomb scenarios. Naturally, other cases of torture would require more analysis to show the equivalence (or non-equivalence; I have not yet done the research to know which way it would come out).
1whowhowho11y
I don't see how you could perform a meaningful calculation without presuming which system is actually right. Who wants an efficient programme that yields the wrong results?
0BillyOblivion11y
That very much depends on who benefits from those wrong results.
-2whowhowho11y
So I can say moral system A outperforms moral system B just where A serves my selfish purposes better than B. Hmm. Isn't that a rather amoral way of looking at morality?
0BillyOblivion11y
No. It's an honest assessment of the state of the world. I'm not agreeing with that position, I'm just saying that there are folks who would prefer an efficient program that yielded the wrong results if it benefited them, and would engage in all manner of philosophicalish circumlocutions to justify it to themselves.
0whowhowho11y
That's not very relevant to the benefits or otherwise of consequentialism and deontology.
1whowhowho11y
Are there any non-tricky situations? Stepping on a butterfly could have vast consequences.
3simplicio11y
Consequentialism works really well as an everyday heuristic also. Particularly when it comes to public policy.
5Eugine_Nier11y
That would require being able to predict the results of public policy decisions with a reasonable degree of accuracy.
3findis11y
Wouldn't a rational consequentialist estimate the odds that the policy will have unpredictable and harmful consequences, and take this into consideration? Regardless of how well it works, consequentialism essentially underlies public policy analysis and I'm not sure how one would do it otherwise. (I'm talking about economists calculating deadweight loss triangles and so on, not politicians arguing that "X is wrong!!!")
-2Eugine_Nier11y
The discussion was about consequentialist heuristics, not hypothetical perfectly rational agents.
2whowhowho11y
I think the central trick is that you don't aim at the ultimate good in public policy, just at things like fairness, aggregated life-years, and so on. You can decide that spending a certain amount of money will save X lives in road safety, or Y lives in medicine, and so on, without worrying that you might be saving the life of the next Hitler.
-1Eugine_Nier11y
My point still stands.
1whowhowho11y
Maybe, but it doesn't reflect back on the usefulness of c-ism as a fully fledged moral theory.
1Eugine_Nier11y
This discussion was about consequentialism as an everyday heuristic.
5Jayson_Virissimo11y
I don't know about you, but I don't do public policy on a day to day basis.
0Nornagest11y
Few of us do public policy on a daily basis, but many of us have opinions on it, and of those that don't, most of us have friends that do. Not that having correct policy judgment buys you much in that context, consequentially speaking; I think the hedonic implications would probably end up being decided by how much you need to keep beliefs on large-scale policy coherent with beliefs on small-scale matters that you can actually affect nontrivially.
0Jayson_Virissimo11y
People that have little chance of ever actually doing public policy have virtually no incentive to have true beliefs about it and even if they did, they likely wouldn't get enough feedback to know if their heuristics about it are accurate.
3Eugine_Nier11y
Unfortunately, the same frequently applies to the people who actually do do public policy, especially since they are frequently not affected by their own decisions.
0Jayson_Virissimo11y
True enough.

So when you see a moral theory, think heuristic, not dogma.

The more I think about this the more I realize I need a term that means "Even though the people who are now jumping all over me about some philosophical subject are talking about a memetic war that's currently raging, and even though they're not mindless zombies saying completely useless things in support of their points, they've chosen to hyper-focus on some piece of my conversation not essential to the main point, and have derailed the conversation."

I want to use "philosophical landmines" for that, but now that I re-read your article,... (read more)

5[anonymous]11y
Seems to be the same kind of landmine in a different yard. Being on team consequentialism probably isn't much different from being on team green in terms of the us-vs-them psychological mechanisms invoked.

This can be a substantial problem in psychology, even when the original "memetic war" has real scientific content and consequences. Debates between behaviorism and cognitive psychology (for example) are often very attractive and useless tangents for anyone discussing the validity of clinical treatments from cognitive and/or behavioral traditions, when the empirical outcomes of treatment are the original topic at hand.

Yes, I invented the idea of "philosophical hurdles" ages ago. For every "topic" there is some set of philosophical hurdles you have to overcome to even start approaching the right position. These hurdles can be surprisingly obscure. Perhaps you don't have quite enough math exposure and your concepts of infinity are a bit off, or you aren't as solidly reductionistic as others and therefore your concept of identity is marginally weakened - anything like these can dramatically alter your entire life, goals, and decisions - and it can t... (read more)

4Richard_Kennaway11y
Invented for yourself, but such hurdles are very old.
4hankx778711y
Oh, excellent. I'm one of today's lucky 10,000!

I call it something like correctly identifying the distinction(s) relevant to our contextually salient principles. That's more general and the emphasis is different, but it involves the same kind of zeroing in on the semantic content that you intend to communicate.

Sure, you can have principles like consequentialism, but why bring them up unless the principles themselves are relevant to what you're saying at this moment? Principles float in the conversational background even when you're not talking about them directly. Discussing the principle of the thing ... (read more)

[This comment is no longer endorsed by its author]
[-][anonymous]11y00

My current strategy is that I just don't talk to most people about confusing things in general. If I don't expect that they'll actually do anything differently as a result of our having talked, all that's left is signaling, and I don't have a strong need to signal about confusing things right now.

[This comment is no longer endorsed by its author]

Another excellent post, Ryan! (sorry, I blame the autocorrect)

I like your list of the common landmines (actually, each one is a minefield in itself).

Avoiding landmines is your job. If it is a predictable consequence that something you could say will put people in mindless slogan-playback-mode, don't say it.

Right, modeling other people is a part of accurately mapping the territory (see how I cleverly used the local accessible meme, even though I hate it?), which is the first task of rationality.

If it happens, which it does, as far as I can tell, my on

... (read more)
2[anonymous]11y
Lulz. This is the last time that joke will be funny. Are you being especially ornery today? It involves the first one, and is maybe related to it. Ontologies aren't fundamental. Think of it in whatever way is convenient to you.
1shminux11y
Ornery, not lighthearted? Geez, talk about miscommunication and landmines. I guess I'll avoid commenting further, at least for today.
7Luke_A_Somers11y
I didn't know how to read that, actually. I came up with some mildly amusing guesses as to what tone that might have been and what might have brought it on, but they were all over the map*.

*Not the territory. They were all in my head.
0Dorikka11y
This post is currently reading at -3, but it's not hidden by default for me. My Preferences page indicates that things below -2 should be hidden. I've checked another comment (at -6), which is correctly hidden. Anyone know why this is occurring? Apologies for the meta.
0Eugine_Nier11y
A while ago Eliezer decreed that comments below -3 are to be hidden; apparently this overrides all preferences.