Comment author: dspeyer 03 September 2014 05:06:19PM 73 points [-]

A good rule of thumb might be, “If I added a zero to this number, would the sentence containing it mean something different to me?” If the answer is “no,” maybe the number has no business being in the sentence in the first place.

Randall Munroe on communicating with humans

Comment author: Alejandro1 01 September 2014 07:10:29PM 69 points [-]

I’m always fascinated by the number of people who proudly build columns, tweets, blog posts or Facebook posts around the same core statement: “I don’t understand how anyone could (oppose legal abortion/support a carbon tax/sympathize with the Palestinians over the Israelis/want to privatize Social Security/insert your pet issue here).” It’s such an interesting statement, because it has three layers of meaning.

The first layer is the literal meaning of the words: I lack the knowledge and understanding to figure this out. But the second, intended meaning is the opposite: I am such a superior moral being that I cannot even imagine the cognitive errors or moral turpitude that could lead someone to such obviously wrong conclusions. And yet, the third, true meaning is actually more like the first: I lack the empathy, moral imagination or analytical skills to attempt even a basic understanding of the people who disagree with me.

In short, “I’m stupid.” Something that few people would ever post so starkly on their Facebook feeds.

--Megan McArdle

Comment author: James_Miller 05 September 2014 08:36:09PM 65 points [-]

A skilled professional I know had to turn down an important freelance assignment because of a recurring commitment to chauffeur her son to a resumé-building “social action” assignment required by his high school. This involved driving the boy for 45 minutes to a community center, cooling her heels while he sorted used clothing for charity, and driving him back—forgoing income which, judiciously donated, could have fed, clothed, and inoculated an African village. The dubious “lessons” of this forced labor as an overqualified ragpicker are that children are entitled to treat their mothers’ time as worth nothing, that you can make the world a better place by destroying economic value, and that the moral worth of an action should be measured by the conspicuousness of the sacrifice rather than the gain to the beneficiary.

Steven Pinker

Comment author: westward 18 December 2013 09:05:29PM *  66 points [-]

"Finally, a study that backs up everything I've always said about confirmation bias." -Kslane, Twitter

Link

Comment author: philh 26 December 2013 03:24:14AM 63 points [-]

I'd like to thank the LW community for the fact that the ability to embed images in comments comes as a surprise to me.

Comment author: Anatoly_Vorobey 13 January 2014 08:23:49PM *  59 points [-]

To me, charitable reading and steelmanning are rather different, though related.

To read charitably is to skip over, rather than use for your own rhetorical advantage, things in your interlocutor's words like ambiguity, awkwardness, slips of the tongue, and inessential mistakes. On the freeway of discussion, charitable reading is the great smoother-over of the cracks and bumps of "I didn't mean it like that" and "that's not what it says". It is always a way towards a meeting of the minds, towards understanding better What That Person Really Wanted To Say - but nothing beyond that. If you're not sure whether something is a charitable reading, ask yourself if the interlocutor would agree - or would have agreed, when you're arguing with a text whose author is absent or dead - that this is what they really meant to say.

I prefer "charitable reading" and not "the principle of charity" because the latter might be applied very broadly. We might assume all kinds of things about the interlocutor's words acting out of what we perceive as charity. For example, "let's pretend you never said that" in response to a really stupid or vile statement might strike many people as an application of the principle of charity, but it is clearly not a charitable reading. And that's good - it's really a different sort of thing, whether desirable or not.

Steelmanning, on the other hand, is all about changing the argument against your position to a stronger one against your position. The "against your position" part is left out of some good explanations in other comments here, but I think it's crucial. Steelmanning is not a courtesy or a service to my interlocutor. It is a service to me. It is my attempt to build the strongest case I can against my position, so I can shatter it or see it survive the challenge. The interlocutor might not agree, if I were to ask them, that my steelmanned argument is really stronger than theirs; that's no matter. I'm not doing it for them, I'm doing it for myself.

When you look at it like this, there should be no danger of confusing the steelmanned argument with the interlocutor's original one. The steelmanned argument is properly yours, it is based on the original argument but should not be attributed to the interlocutor even rhetorically. There's no benefit to the conversation from doing that. You're not doing anyone a favor by pretending they said something they didn't.

In a conversation, live or close to live, charitable reading is always the appropriate and virtuous thing to do, but steelmanning your interlocutor's argument might not be. It often is appropriate, but that isn't a given. Remember, the steelmanned argument is your creation and is meant for you; you owe it to yourself to test your beliefs with it, but not necessarily in the context of this conversation. Not because concealing it is an easier way to victory, but rather because what's steelmanned for you might not be steelmanned or even interesting to your interlocutor. Their argument said A, and you may have found a way to strengthen it further to say B, but they might not want to claim B, to defend B, to agree that B is stronger than A. That said, if you do think the steelmanned argument would be useful to them, by all means introduce it, but explicitly as your own. Some phrases commonly used in such cases are: "I see your point here, and I would even add ... but still, I would disagree...", or "You could also say that...", or you can propose a back-and-forth: "I think this is wrong because of... You might want to reply that... But to that, I would say..." In all these cases, the interlocutor is free to agree or disagree with your explicitly introduced steelman.

Now, going to the example in the post, where the ancient Roman chooses to interpret a progressive argument for increasing welfare as "really" carrying between the lines the ancient Roman rationale: he is not doing a charitable reading of his interlocutor's words - they would definitely not agree that this is what they meant to say. And he is not steelmanning anything either, because he hasn't strengthened an argument against his own position; rather, he has fortified his existing beliefs by manufacturing another fake confirmation. If he were to modify the progressive's argument in some way that would make it harder for him to interpret it in the ancient-Roman sense, that would be steelmanning.

To sum up:

  • Charitable reading is always done for the sake of the discussion, to improve its usefulness, to reduce noise, and to avoid conscious or unwitting misrepresentation. It should never introduce anything to the argument that its original owner wouldn't have recognized as what they said. It's always a good idea.
  • Steelmanning is always done for your own sake. It always says something new that the original owner of the argument didn't think of, or at least didn't say. When put back into the discussion, it should be introduced explicitly as your words. Steelmanning is usually a good idea whenever something important to you is being discussed. Steelmanning every trivial thing is tedious and silly; you're doing it for yourself, so you get to decide what should be steelmanned.
Comment author: timujin 22 November 2013 03:43:44PM *  59 points [-]

Surveyed. Having everyone participate in a Prisoner's Dilemma is extremely ingenious.

Edit: Hey, guys, stop upvoting this! You have already falsified my answer to the survey's karma question by an order of magnitude!

Edit much later: The Less Wrong community is now proven evil.

Edit much more later: Bwahaha, I expected that... Thanks for the karma and stuff...

Comment author: Benito 03 April 2014 08:10:35PM *  57 points [-]

Comedian Simon Munnery:

Many are willing to suffer for their art; few are willing to learn how to draw.

Comment author: CarlShulman 01 December 2013 11:29:34PM *  57 points [-]

Disclaimer: I like and support the EA movement.

I agree with Vaniver that it would be good to give more time to arguments that the EA movement is going to do large net harm. You touch on this a bit with the discussion of Communism and moral disagreement within the movement, but one could go further. Some speculative ways in which the EA movement could have bad consequences:

  • The EA movement, driven by short-term QALYs, pulls effort away from affecting science and policy in rich countries with long-term impacts to brief alleviation of problems for poor humans and animals
  • AMF-style interventions increase population growth and lower average world income and education, which leads to fumbling of long-run trajectories or existential risk
  • The EA movement screws up population ethics and the valuation of different minds in such a way that it doesn't just fail to find good interventions, but pursues actively terrible ones (e.g. making things much worse by trading off human and ant conditions wrongly)
  • Even if the movement mostly does not turn towards promoting bad things, it turns out to be easier to screw things up than to help, and foolish proponents of conflicting sub-ideologies collectively make things worse for everyone, PD-style; you see this in animal activists enthused about increasing poverty to reduce meat consumption, or poverty activists happy to create huge deadweight GDP losses as long as resources are transferred to the poor
  • Something like explicit hedonistic utilitarianism becomes an official ideology somewhere, in the style of Communist states (even though the members don't really embrace it in full on every matter, they nominally endorse it as universal and call their contrary sentiments weakness of will): the doctrine implies that all sentient beings should be killed and replaced by some kind of simulated orgasm-neurons and efficient caretaker robots (or otherwise sacrifice much potential value in the name of a cramped conception of value), and society is pushed in this direction by a tragedy of the commons; also, see Robin Hanson
  • Misallocating a huge mass of idealists' human capital to donations for easily measurable things, and away from more effective work elsewhere, sabotages more effective do-gooding, for a net worsening of the world
  • The EA movement gets into politics and can't clearly evaluate various policies with huge upside and downside potential because of ideological blinders, and winds up with a massive net downside
  • The EA movement finds extremely important issues, and then turns the public off from them with its fanaticism, warts, or fumbling, so that it would have been better to have left those issues to other institutions
Comment author: Viliam_Bur 22 November 2013 09:11:27AM 58 points [-]

Taken. It was relatively quick; the questions were easy. Thanks for improving the survey!

Two notes: The question about mental illness has no "None" option, so you cannot distinguish between people who had none and people who didn't answer the question. The question about income did not make clear whether it's pre-tax or post-tax.

Comment author: Nominull 22 November 2013 05:56:12AM 55 points [-]

Are you planning to do any analysis on what traits are associated with defection? That could get ugly fast.

(I took the survey)

Comment author: Nomad 09 September 2014 10:55:18PM 54 points [-]

I've now got this horrifying idea that this has been Quirrell's plan all along: to escape from HPMOR to the real world by tempting you to simulate him until he takes over your mind.

Comment author: roystgnr 22 November 2013 06:02:25AM 54 points [-]

I took the survey. My apologies for not doing so in every previous year I've been here, and for not finding time for the extra questions this year.

The race question should probably use checkboxes (2^N possible answers) rather than radio buttons (N possible answers). Biracial people aren't that uncommon.

Living "with family" is slightly ambiguous; I almost selected it instead of "with partner/spouse" since our kids are living with us, but I suspected that wasn't the intended meaning.

Comment author: B_For_Bandana 02 September 2014 01:25:28AM 49 points [-]

Always go to other people's funerals; otherwise they won't go to yours.

Yogi Berra, on Timeless Decision Theory.

Comment author: gjm 22 November 2013 02:54:03AM 51 points [-]

I have taken the survey (and answered, to a good approximation, all the questions).

Note that if you take the survey and comment here immediately after, Yvain can probably identify which survey is yours. If this possibility troubles you, you may wish to delay. On the other hand, empirically it seems that earlier comments get more karma.

I conjecture that more than 5% of entrants will experience a substantial temptation to give SQUEAMISH OSSIFRAGE as their passphrase at the end. The purpose of this paragraph is to remark that (1) if you, the reader, are so tempted then that is evidence that I am right, and (2) if so then giving in to the temptation is probably a bad idea.

Comment author: satt 07 August 2014 01:38:38AM 51 points [-]

On the other hand, a Slashdot comment that's stuck in my mind (and on my hard disks) since I read it years ago:

In one respect the computer industry is exactly like the construction industry: nobody has two minutes to tell you how to do something...but they all have forty-five minutes to tell you why you did it wrong.

When I started working at a tech company, as a lowly new-guy know-nothing, I found that any question starting with "How do I..." or "What's the best way to..." would be ignored; so I had to adopt another strategy. Say I wanted to do X. Research showed me there were (say) about six or seven ways to do X. Which is the best in my situation? I don't know. So I pick an approach at random, though I don't actually use it. Then I wander down to the coffee machine and casually remark, "So, I needed to do X, and I used approach Y." I would then, inevitably, get a half-hour discussion of why that was stupid, and what I should have done was use approach Z, because of this, this, and this. Then I would go off and use approach Z.

In ten years in the tech industry, that strategy has never failed once. I think the key difference is the subtext. In the first strategy, the subtext is, "Hey, can you spend your valuable time helping me do something trivial?" while in the second strategy, the subtext is, "Hey, here's a chance to show off how smart you are." People being what they are, the first subtext will usually fail -- but the second will always succeed.

— fumblebruschi

In response to Why CFAR?
Comment author: pengvado 07 January 2014 08:54:39AM *  49 points [-]

I donated $40,000.00

Comment author: Iksorod 22 November 2013 06:18:51AM 48 points [-]

Survey taken. The very last question made me laugh out loud. It also proved to me that this is truly my type of community.

Comment author: TheOtherDave 22 November 2013 04:23:51AM 48 points [-]

Surveyed. Left several questions blank.

Incidentally, while I answered the "akrasia" questions about mental illnesses, therapy, etc. as best I could, it's perhaps worth noting that most of my answers related to a period of my life after suffering a traumatic brain injury that significantly impaired my cognitive function, and therefore might be skewing the results... or maybe not, depending on what the questions were trying to get at.

Comment author: SaidAchmiz 22 November 2013 03:55:42AM 48 points [-]

I took the survey.

However, this question confused me:

Time in Community How long, in years, have you been in the Overcoming Bias/Less Wrong community? Enter periods less than 1 year in decimal, eg "0.5" for six months (hint: if you've been here since the start of the community in November 2007, put 6 years)"

(emphasis mine)

The wording confused me; I almost put "6 years" instead of "6" because of it.

Also, I was sorely tempted to respond that I do not read instructions and am going to ruin everything, and then answer the rest of that section, including the test question, correctly. I successfully resisted that temptation, of which fact I am proud.
