To me, charitable reading and steelmanning are rather different, though related.
To read charitably is to skip over, rather than use for your own rhetorical advantage, things in your interlocutor's words like ambiguity, awkwardness, slips of the tongue, and inessential mistakes. On the freeway of discussion, charitable reading is the great smoother-over of the cracks and bumps of "I didn't mean it like that" and "that's not what it says". It is always a way towards a meeting of the minds, towards understanding better What That Person Really Wanted To Say - but nothing beyond that. If you're not sure whether something is a charitable reading, ask yourself if the interlocutor would agree - or would have agreed, when you're arguing with a text whose author is absent or dead - that this is what they really meant to say.
I prefer "charitable reading" and not "the principle of charity" because the latter might be applied very broadly. We might assume all kinds of things about the interlocutor's words acting out of what we perceive as charity. For example, "let's pretend you never said that" in response to a really stupid or vile statement might strike ...
...Steelmanning is not a courtesy or a service to my interlocutor. It is a service to me. It is my attempt to build the strongest case I can against my position, so I can shatter it or see it survive the challenge. The interlocutor might not agree, if I were to ask them, that my steelmanned argument is really stronger than theirs; that's no matter. I'm not doing it for them, I'm doing it for myself.
Steelmanning is always done for your own sake. It always says something new that the original owner of the argument didn't think of or at least didn't say. When put back into the discussion, it should be introduced explicitly as your words.
Remember, the steelmanned argument is your creation and is meant for you. You owe it to yourself to test your beliefs with it, but not necessarily in the context of this conversation. Not because concealing it is an easier way to victory, but rather because what's steelmanned for you might not be steelmanned or even interesting to your interlocutor. Their argument said A, and you may have found a way to strengthen it further to say B, but they might not want to claim B, to defend B, to agree that B is stronger than A. That said, if you do think the steel...
I'm reminded of Bret Victor's recent comment on reading Latour:
It’s tempting to judge what you read: "I agree with these statements, and I disagree with those." However, a great thinker who has spent decades on an unusual line of thought cannot induce their context into your head in a few pages. It’s almost certainly the case that you don’t fully understand their statements. Instead, you can say: "I have now learned that there exists a worldview in which all of these statements are consistent." And if it feels worthwhile, you can make a genuine effort to understand that entire worldview. You don't have to adopt it. Just make it available to yourself, so you can make connections to it when it's needed.
That, to me, is a principle of charity well applied. I wouldn't at all say that steelmanning is a stronger form of that -- a rationalist trying to steelman Latour would be like your Roman trying to steelman progressivism. Steelmanning is about constructing what you see as stronger versions of an argument, while the principle of charity is about trying to get into your interlocutor's head under the assumption that whatever they're saying or doing seems reasonable...
This is my favorite quote in several months :). You should add it to the Rationality Quotes thread.
I no longer try to steelman BETA-MEALR [Ban Everything That Anyone Might Experience And Later Regret] arguments as utilitarian. When I do, I just end up yelling at my interlocutor, asking how she could possibly get her calculations so wrong, only for her to reasonably protest that she wasn't making any calculations and what am I even talking about?
I consider steelmanning to be a safeguard against reversing stupidity, especially in political contexts. If my opponent says X, I am likely to say non-X for many bad reasons such as: my opponent defends X using arguments I disagree with.
But if you show me that X can also be defended using arguments I would agree with, then I will be less likely to automatically throw X away, and I will be more able to consider X on its own merits.
Steelmanning is good for understanding "X can also be defended by good arguments", and is dangerous because it provides a bad model of my opponent. (Unless my original model was so bad that the steelmanning didn't make it worse; which wouldn't be completely unexpected in a political debate.)
In your example, the time-travelling Roman would get a completely wrong idea about why Obama wants to increase unemployment benefits. But he would get a useful insight about why he might want to support increasing unemployment benefits. It's bad for modelling Obama, but it's good for thinking about the possible consequences of the unemployment benefits. (If you were an Annona-worshipping Roman, you would want to realize she would be happy about the unemployment benefits...
The Wikipedia formulation is "write for your enemy", i.e. state their position well enough that they would accept you have stated it fairly. This is useful in that they are generally present and will let you know in no uncertain terms if you've failed to achieve this. It is only a guideline, as unreasonable opponents do exist, and the point of the stretch is to help you write a better, more neutral article.
But why didn't A just frame her argument in objective, consequentialist terms? Do we assume that what she wrote was sort of a telephone-game approximation of what was originally a highly logical consequentialist argument? If so where can I find that argument? And if not, why are we assuming that A is a crypto-consequentialist when she probably isn't?
I never thought that steelmanning implied necessarily assuming that A would agree with the steelmanned version. If A says something that seems to have a reasonable point behind it but is expressed badly, then yes, in that case the steelmanned version can be something that they'd agree with. But they might also say something that was obviously wrong and not worth engaging with - but which nonetheless sparked an idea about something that was more reasonable, and which might be interesting to discuss.
In either case, we've replaced a bad argument with a better one that seems worth considering and discussing. Whether or not A really intended the argument to be understood like that doesn't matter that much.
To take a more concrete example, in What Data Generated That Thought?, I wrote:
...All outcomes are correlated with causes; most stateme...
There's another way it can go wrong:
"You claim X, which sounds pretty bizarre to me so I'll charitably assume you meant a weaker version X' that fits in my worldview, and I'll forget that you originally claimed an argument for X."
Excellent points. I've never been a huge fan of steelmanning. A couple more:
People talk as if steelmanning is inherently a virtue, but in practice they're selective about what they steelman. You won't see many steelmannings of Young Earth Creationism around these parts--or even plain vanilla theism. If people are going to steelman, it would be nice for them to be more up-front about why they chose to steelman this particular argument (or, when they're telling someone else "hey, why aren't you steelmanning the person you're attacking," to be up-front about why that particular argument deserves steelmanning).
If you choose which arguments to steelman more or less at random, or for bad reasons, it seems like an instance of privileging the hypothesis.
Steelmanning is optimal * when looking for true beliefs about the world **, as long as you're aware that the source of the argument only provided a weaker form of the argument ***.
* In an environment without any resource constraints, which is unfortunately never the case. Still, if you've got time on your hands and nothing else to do ...
** Arguments in their maximally persuasive form have more potential to shift your beliefs in the correct direction. Neglecting a potential strong form of an argument is tantamount to ignoring evidence.
*** So steelmanning the cold fusion crackpot's argument may have brought you to firmly believe in cold fusion; that's fine as long as you don't forget that the crackpot still believes in the right conclusion for the wrong reasons (the weak form of the argument), and is as such still a crackpot.
I endorse all of these problems as real. Too much steelmanning blinds you to the reality of the rather incompetent civilization and malfunctioning species we live in.
Issue 1 above has nothing to do with losing sight of how incompetent our civilization and most of its individuals are. It's about almost the opposite problem: trying to be charitable to someone else by adjusting their position to be more like your own, at the risk of messing it up in the process.
[EDITED because I wrote "Issue 2" where I meant "Issue 1", and also to fix up a minor inconsistency arising from the fact that an earlier draft had had "Issues 1 and 2".]
"The dole was there because it made the emperor more popular" and that is the same reason it exists today. Charitable social policies exist primarily to buy votes. Take Head Start as one of many, many examples of failed programs: http://nypost.com/2010/01/28/head-start-a-tragic-waste-of-money/. $166 billion wasted on a program that is demonstrably no help. It seems to be a dismal failure, but continues to exist, because it sounds good and gets votes. The reason why there are so many seemingly failed government programs, is because those programs ...
Great article; I hadn't heard this argument before, but I think it's a good point. I'll also mention that I think the Ideological Turing Test does a good job of combating some of your worries here, although it of course has its own dangers.
I suspect there's a difference between steelmanning as in removing unnecessary assumptions or context, and steelmanning as in completely changing the logical foundation of the argument, just retaining the bottom line proposition, as our Roman seems to be doing. They are both valid, with the second one more vulnerable to the problems you mention, especially #1.
Also, either way, it would be a mistake to take the steelmanned argument and attribute it back to the source of the original argument. This seems to be the cause of your problems #2 and #4, maybe also...
I think it would be useful to identify subcategories of what people mean by steelmanning and then see if we can approve some of those.
Very good, although I have heard similar arguments (though less elaborated) in conversation. The principle of charity (or steelmanning - I'd never heard that term before) certainly is important, but sometimes it just goes too far. At one seminar I used to attend, the seminar leader used to "re-interpret" the most confused and illogical arguments, saying "did you mean so and so", to which the interpretee of course invariably and gratefully responded yes (though of course he had never in his life come up with such an interesting argument). The wh...
Taking arguments more seriously than you possibly should. I feel like I see all the time on rationalist communities people say stuff like "this argument by A sort of makes sense, you just need to frame it in objective, consequentialist terms like blah blah blah blah blah" and then follow with what looks to me like a completely original thought that I've never seen before.
Rather than - or at least in addition to - being a bug, this strikes me as one of charity's features. Most arguments are, indeed, neither original nor very good. Inasmuch as you can replace them with more original and/or coherent claims, so much the better, I say.
As someone who steelmans and interprets other people charitably a lot, I hadn't thought of the problems this could cause. I've managed to change my mind about a lot of things in the past few years; I wonder how much of this is because I didn't have any beliefs I held very strongly before, and don't hold many of my current beliefs all that strongly either.
Original thoughts arriving in the form of difficult-to-express intuitions is a common phenomenon. Early analytic philosophers were struggling with the right way to express the intuition that "greater(7, 5) = true" and "lesser(5, 7) = true" represent the same fact. Now we know that the correct answer is to derive both as consequences of the same abstract model of the relevant entities (such as natural numbers) whose existence is to an extent independent of the language used to describe it. The function of the model is to take the way a l...
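To make the point concrete, here is a minimal sketch (my own illustration, not from the comment above) of deriving both surface statements from a single underlying model; the names strictly_precedes, lesser, and greater are hypothetical, chosen only for this example:

```python
# A toy model of the natural-number ordering: one abstract fact,
# two linguistic forms derived from it.

def strictly_precedes(a: int, b: int) -> bool:
    """The single underlying fact: a comes before b in the natural-number order."""
    return a < b

def lesser(a: int, b: int) -> bool:
    # "lesser(5, 7) = true" is one way of reporting the underlying fact.
    return strictly_precedes(a, b)

def greater(a: int, b: int) -> bool:
    # "greater(7, 5) = true" reports the same fact in a different surface form.
    return strictly_precedes(b, a)

assert greater(7, 5) and lesser(5, 7)  # both reduce to strictly_precedes(5, 7)
```

The two sentences differ only in wording; both are consequences of the same model, which is the sense in which they "represent the same fact".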
It seems to me there are two separate issues.
1) Do you act like other people actually SAID the better argument (or interpretation of that argument) that you can put in their mouth?
2) Do you suggest the better alternative in debates and discussions of the idea before arguing against it?
Item 2 is certainly a good idea, while all the problems come from item 1. Indeed, I would suggest that both parties do best when everyone ACTS LIKE OTHER PEOPLE SAID WHATEVER YOU JUDGE TO BE MOST LIKELY THEY ACTUALLY INTENDED TO SAY. So you don't penalize people for misspeaking, nor do ...
Indeed, I think a huge reason for the lack of useful progress in philosophy is too much charity.
People charitably assume that if they don't fully understand something (and aren't themselves experts in the area), the person advancing the notion is likely contributing something of value that they just don't understand yet.
This is much of the reason for the continued existence of continental philosophy drivel like claims that set theory entails morality or the deeply confused erudite crap in Being and Time. Anyone who isn't actually an expert in this kind of...
Not sure if I properly understood the original post - apologies if I'm just restating points already made, but I see it like this.
Whatever it consists of, it's pretty much the definition of rationality that it increases expected utility. Assuming that the intermediate objective of a rationalist technique like steelmanning is to bring us closer to the truth, there are two trivial cases where steelmanning is not rational:
(1) When the truth has low utility. (If a lion starts chasing me, I will temporarily abandon my attempt to find periodicity in the digit...
This is a very nice post that highlights an important issue that I hadn't previously been fully conscious of.
Thank you for writing that.
To me steelmanning sometimes feels like rationalizing, only instead of rationalizing your own position, you rationalize your opponent's. It might still be useful, though.
As far as I can tell, most people around these parts consider the principle of charity and its super saiyan form, steelmanning, to be Very Good Rationalist Virtues. I basically agree and I in fact operate under these principles more or less automatically now. HOWEVER, no matter how good the rule is, there are always exceptions, which I have found myself increasingly concerned about.
This blog post that I found in the responses to Yvain's anti-reactionary FAQ argues that even though the ancient Romans had welfare, this policy was motivated not by concern for the poor or by a desire for equality, like our modern welfare policies, but instead "the Roman dole was wrapped up in discourses about a) the might and wealth of Rome and b) goddess worship... The dole was there because it made the emperor more popular and demonstrated the wealth of Rome to the people. What’s more, the dole was personified as Annona, a goddess to be worshiped and thanked."
So let's assume this guy is right, and imagine that an ancient Roman travels through time to the present day. He reads an article by some progressive arguing (using the rationale one would typically use) that Obama should increase unemployment benefits. "This makes no sense," the Roman thinks to himself. "Why would you give money to someone who doesn't work for it? Why would you reward lack of virtue? Also, what's this about equality? Isn't it right that an upper class exists to rule over a lower class?" Etc.
But fortunately, between when he hopped out of the time machine and when he found this article, a rationalist found him and explained to him steelmanning and the principle of charity. "Ah, yes," he thinks. "Now I remember what the rationalist said. I was not being so charitable. I now realize that this position kind of makes sense, if you read between the lines. Giving more unemployment benefits would, now that I think about it, demonstrate the power of America to the people, and certainly Annona would approve. I don't know why whoever wrote this article didn't just come out and say that, though. Maybe they were confused".
Hopefully you can see what I'm getting at. When you regularly use the principle of charity and steelmanning, you run the risk of:
1. Sticking rigidly to a certain worldview/paradigm/established belief set, even as you find yourself willing to consider more and more concrete propositions. The Roman would have done better to really read what the modern progressive's logic was, think about it, and try to see where he was coming from than to automatically filter it through his own worldview. If he consistently does this he will never find himself considering alternative ways of seeing the world that might be better.
2. Falsely developing the sense that your worldview/paradigm/established belief set is more popular than it is. Pretty much no one today holds the same values that an ancient Roman does, but if the Roman goes around being charitable all the time then he will probably see his own beliefs reflected back at him a fair amount.
3. Taking arguments more seriously than you possibly should. I feel like I see all the time on rationalist communities people say stuff like "this argument by A sort of makes sense, you just need to frame it in objective, consequentialist terms like blah blah blah blah blah" and then follow with what looks to me like a completely original thought that I've never seen before. But why didn't A just frame her argument in objective, consequentialist terms? Do we assume that what she wrote was sort of a telephone-game approximation of what was originally a highly logical consequentialist argument? If so where can I find that argument? And if not, why are we assuming that A is a crypto-consequentialist when she probably isn't? And if we're sure that objective, consequentialist logic is The Way To Go, then shouldn't we be very skeptical of arguments that seem like their basis is in some other reasoning system entirely?
4. Just having a poor model of people's beliefs in general, which could lead to problems.
Hopefully this made sense, and I'm sorry if this is something that's been pointed out before.