Related To: Are Your Enemies Innately Evil?, Talking Snakes: A Cautionary Tale, How to Not Lose an Argument
Eliezer's excellent article Are Your Enemies Innately Evil? points to the fact that when two people have a strong disagreement, it's often the case that each person sincerely believes that he or she is on the right side. Yvain's excellent article Talking Snakes: A Cautionary Tale highlights the fact that, to each such person, without knowledge of the larger context that the other person's beliefs fit into, the other person's beliefs can appear to be absurd. This phenomenon occurs frequently enough that it's important for each participant in an argument to make a strong effort to understand where the other person is coming from and to frame one's own ideas with the other person's perspective in mind.
Last month I made a sequence of posts [1], [2], [3], [4] raising concerns about the fruitfulness of SIAI's approach to reducing existential risk. My concerns were sincere and I made my sequence of postings in good faith. All the same, there's a sense in which my sequence of postings was a failure. In the first of these I argued that the SIAI staff should place greater emphasis on public relations. Ironically, in my subsequent postings I myself should have placed greater emphasis on public relations. I made mistakes which damaged my credibility and barred me from serious consideration by some of those whom I hoped to influence.
In the present posting I catalog these mistakes and describe the related lessons that I've learned about communication.
Mistake #1: Starting during the Singularity Summit
I started my string of posts during the Singularity Summit. This was interpreted by some to be underhanded and overly aggressive. In fact, the coincidence of my string of posts with the Singularity Summit was influenced more by the appearance of XiXiDu's Should I Believe What the SIAI claims? than anything else, but it's understandable that some SIAI supporters would construe the timing of my posts as premeditated and hostile in nature. Moreover, the timing of my posts did not give the SIAI staff a fair chance to respond in real time. I should have avoided posting during a period of time when I knew that the SIAI staff would be occupied, waiting until a week after the Singularity Summit to begin my sequence of posts.
Mistake #2: Failing to balance criticism with praise
As Robin Hanson says in Against Disclaimers:
I don't agree with Hanson that people are wrong to presume this - I think that statistically speaking, the above presumption is correct.
For this reason, it's important to balance criticism of a group which one does not oppose with praise. I think that a number of things that SIAI staff have done have had favorable expected impacts on existential risk, even if I think other things they have done have negative expected impact. By failing to make this point salient, I misled Airedale and others to believe that I have an agenda against SIAI.
Mistake #3: Letting my emotions get the better of me
My first pair of postings attracted considerable criticism, most of which appeared to me to be ungrounded. I unreasonably assumed that these criticisms were made in bad faith, failing to take to heart the message of Talking Snakes: A Cautionary Tale that one's positions can appear to be absurd to those who have access to a different set of contextual data from one's own. As Gandhi said:
We're wired to generalize from one example and erroneously assume that others have the same access to the same context that we do. As such, it's natural for us to assume that when others strongly disagree with us it's because they're unreasonable people. While this is understandable, it's conducive to emotional agitation which, when left unchecked, typically leads to further misunderstanding.
I should have waited until I had returned to emotional equilibrium before continuing my string of postings beyond the first two. Because I did not wait until returning to emotional equilibrium, my final pair of postings was less effectiveness-oriented than it should have been and more about satisfying my immediate need for self-expression. I wholeheartedly agree with a relevant quote by Eliezer from Circular Altruism:
Mistake #4: Getting personal with insufficient justification
As Eliezer has said in Politics is the Mind-Killer, it's best to avoid touching on emotionally charged topics when possible. One LW poster who's really great at this and who I look to as a role model in this regard is Yvain.
In my posting on The Importance of Self-Doubt I leveled personal criticisms which many LW commentators felt uncomfortable with [1], [2], [3], [4]. It was wrong for me to make such personal criticisms without having thoroughly explored alternate avenues for accomplishing my goals. At least initially, I could have spoken in more general terms as prase did in a comment on my post - this may have sufficed to accomplish my goals without the need to discuss the sensitive subject matter that I did.
Mistake #5: Failing to share my posts with an SIAI supporter before posting
It's best to share one's proposed writings with a member of a given group before offering public criticisms of the activities of members of the said group. This gives him or her an opportunity to respond and provide context which one may be unaware of. After I made my sequence of postings, I had extensive dialogue with SIAI Visiting Fellow Carl Shulman. In the course of this dialogue I realized that I had crucial misconceptions about some of SIAI's activities. I had been unaware of some of the activities which SIAI staff have been engaging in; activities which I judge to have significant positive expected value. I had also misinterpreted some of SIAI's policies in ways that made them look worse than they now appear to me to be.
Sharing my posts with Carl before posting would have given me the opportunity to offer a more evenhanded account of SIAI's activities and would have given me the feedback needed to avoid being misinterpreted.
Mistake #6: Expressing apparently absurd views before contextualizing them
In a comment on one of my postings, I expressed very low confidence in the success of Eliezer's project. In line with Talking Snakes: A Cautionary Tale, I imagine that a staunch atheist would perceive a fundamentalist Christian's probability estimate of the truth of Christianity to be absurd and that on the flip side a fundamentalist Christian would perceive a staunch atheist's probability estimate of the truth of Christianity to be absurd. In the absence of further context, the beliefs of somebody coming from a very different worldview inevitably seem absurd, independently of whether or not they're well grounded.
There are two problems with beginning a conversation on a topic by expressing wildly different positions from those of one's conversation partners. One is that this tends to damage one's own credibility in one's conversation partner's eyes. The other is that doing so often carries an implicit suggestion that one's conversation partners are very irrational. As Robin Hanson says in Disagreement is Disrespect:
Extreme disagreement can come across as extreme disrespect. In line with what Yvain says in How to Not Lose an Argument, expressing extreme disagreement usually has the effect of putting one's conversation partners on the defensive and is detrimental to their ability to Leave a Line of Retreat.
In a comment on my Existential Risk and Public Relations posting Vladimir_Nesov said
I disagree with Vladimir_Nesov that changing one's apparent level of confidence is equivalent to lying. There are many possible orders in which one can state one's beliefs about the world. At least initially, presenting the factors that lead one to one's conclusion before presenting one's conclusion projects a lower level of confidence in one's conclusion than presenting one's conclusions before presenting the factors that lead one to these conclusions. Altering one's order of presentation in this fashion is not equivalent to lying and moreover is actually conducive to rational discourse.
As Hugh Ristik said in response to Reason is not the only means of overcoming bias,
I should have preceded my expression of very low confidence in the success of Eliezer's project with a careful and systematic discussion of the factors that led me to my conclusion.
Aside from my failure to give proper background for my conclusion, I also failed to be sufficiently precise in stating my conclusion. One LW poster interpreted my reference to "Eliezer's Friendly AI project" to be "the totality of Eliezer's efforts to lead to the creation of a Friendly AI." This is not the interpretation that I intended - in particular I was not including Eliezer's networking and advocacy efforts (which may be positive and highly significant) under the umbrella of "Eliezer's Friendly AI project." By "Eliezer's Friendly AI project" I meant "Eliezer's attempt to unilaterally build a Friendly AI that will go FOOM in collaboration with a group of a dozen or fewer people." I should have made a sharper claim to avoid the appearance of overconfidence.
Mistake #7: Failing to give sufficient context for my remarks on transparency and accountability
After I made my Transparency and Accountability posting, Yvain commented
In my own mind it was clear what I meant by transparency and accountability, but my perspective is sufficiently exotic that it's understandable that readers like Yvain would find my remarks puzzling or even incoherent. One aspect of the situation is that I share GiveWell's skeptical Bayesian prior. In A conflict of Bayesian priors? Holden Karnofsky says:
I share GiveWell's skeptical prior when it comes to the areas that GiveWell has studied most and feel that it's justified when applied to the cause of existential risk reduction to an even greater extent for the reason given by prase:
Because my own attitude toward the viability of philanthropic endeavors in general is so different from that of many LW posters, when I suggested that SIAI is insufficiently transparent and accountable, many LW posters felt that I was unfairly singling out SIAI. Statements originating from a skeptical Bayesian prior toward philanthropy are easily misinterpreted in this fashion. As Holden says:
I should have been more explicit about my Bayesian prior before suggesting that SIAI should be more transparent and accountable. This would have made it clearer that I was not singling SIAI out. Now, in the body of my original post I did attempt to allude to my skeptical Bayesian prior when I said:
but this statement was itself prone to misinterpretation. In particular, some LW posters interpreted it literally when I had intended "assume the worst" to be a shorthand figure of speech for "assume that things are considerably worse than they superficially appear to be." Eliezer responded by saying
I totally agree with Eliezer that literally assuming the worst is not rational. I thought that my intended meaning would be clear (because the literal meaning is obviously false), but in light of contextual cues that made it appear as though I had an agenda against SIAI my shorthand was prone to misinterpretation. I should have been precise about what my prior assumption is about charities that are not transparent and accountable, saying: "my prior assumption is that funding a given charity which is not transparent and accountable has slight positive expected value which is dwarfed by the positive expected value of funding the best transparent and accountable charities."
As Eliezer suggested, I also should have made it more clear what I consider to be an appropriate level of transparency and accountability for an existential risk reduction charity. After I read Yvain's comment referenced above, I made an attempt to explain what I had in mind by transparency and accountability in a pair of responses to him [1], [2], but I should have done this in the body of my main post before posting. Moreover, I should have preempted his remark:
by citing Holden's tentative list of questions for existential risk reduction charities.
Mistake #8: Mentioning developing world aid charities in juxtaposition with existential risk reduction
In the original version of my Transparency and Accountability posting I said
In fact, I meant precisely what I said and no more, but as Hanson says in Against Disclaimers, people presume that:
Because I did not add a disclaimer, Airedale understood me to be advocating in favor of VillageReach and StopTB over all other available options. Those who know me well know that over the past six months I've been in the process of grappling with the question of which forms of philanthropy are most effective from a utilitarian perspective and that I've been searching for a good donation opportunity which is more connected with the long-term future of humanity than VillageReach's mission is. But it was unreasonable for me to assume that my readers would know where I was coming from.
In a comment on the first of my sequence of postings orthonormal said:
From the point of view of the typical LW poster it would have been natural for me to address orthonormal's remark in my brief discussion of the relative merits of charities for those who take astronomical waste seriously, but I did not do so. This led some [1], [2], [3] to question my seriousness of purpose and further contributed to the appearance that I have an agenda against SIAI. Shortly after I made my post Carl Shulman commented saying:
After reading over his comment and others and thinking about them, I edited my post to avoid the appearance of favoring developing world aid over existential risk reduction, but the damage had already been done. Based on the original text of my posting and my track record of donating exclusively to VillageReach, many LW posters have persistently understood me to have an agenda in favor of developing world aid and against existential risk reduction charities.
The original phrasing of my post made sense from my own point of view. I believe supporting GiveWell's recommended charities has high expected value because I believe that doing so strengthens a culture of effective philanthropy and that in the long run this will meaningfully lower existential risk. But my thinking here is highly non-obvious and it was unreasonable for me to expect that it would be evident to readers. It's easy to forget that others can't read our minds. I damaged my credibility by mentioning developing world aid charities in juxtaposition with existential risk reduction without offering careful explanation for why I was doing so.
My reference to developing world aid charities was also not effectiveness-oriented. As far as I know, most SIAI donors are not considering donating to developing world aid charities. As described under the heading "Mistake #3" above, I slipped up and let my desire for personal expression take precedence over actually getting things done. As I described in Missed Opportunities For Doing Well By Doing Good I personally had a great experience with discovering GiveWell and giving to VillageReach. Instead of carefully taking the time to get to know my audience, I simple-mindedly generalized from one example and erroneously assumed that my readers would be coming from a perspective similar to my own.
Conclusion:
My recent experience has given me heightened respect for the careful writing style of LW posters like Yvain and Carl Shulman. Writing in this style requires hard work and the ability to delay gratification, but the cost is often well worth it in the end. When one is writing for an audience that one doesn't know very well there's a substantial risk of being misinterpreted because one's readers do not have enough context to understand what one is driving at. This risk can be mitigated by taking the time to provide detailed background for one's readers and by taking great care to avoid making claims (whether explicit or implicit) that are too strong. In principle one can always qualify one's remarks later on, but it's important to remember that as komponisto said
so that it's preferable to avoid being misunderstood the first time around. On the flip side it's important to remember that one may be misled by one's own first impressions. There are LW posters whom I now understand to be acting in good faith but whom I initially misunderstood to have a hostile agenda against me.
This was my first experience writing about a controversial subject in public, and it has been a substantive learning experience for me. I would like to thank the Less Wrong community for giving me this opportunity. I'm especially grateful to posters CarlShulman, Airedale, steven0461, Jordan, Komponisto, Yvain, orthonormal, Unknowns, Wei_Dai, Will_Newsome, Mitchell_Porter, rhollerith_dot_com, Eneasz, Jasen and PeerInfinity for their willingness to engage with me and help me understand why some of what I said and did was subject to misinterpretation. I look forward to incorporating the lessons that I've learned into my future communication practices.