Existential Risk and Public Relations

Post author: multifoliaterose 15 August 2010 07:16AM 36 points

[Added 02/24/14: Some time after writing this post, I discovered that it was based on a somewhat unrepresentative picture of SIAI. I still think that the concerns therein were legitimate, but they had less relative significance than I had thought at the time. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

A common trope on Less Wrong is the idea that governments and the academic establishment have neglected to consider, study and work against existential risk on account of their shortsightedness. This idea is undoubtedly true in large measure. In my opinion and in the opinion of many Less Wrong posters, it would be very desirable to get more people thinking seriously about existential risk. The question then arises: is it possible to get more people thinking seriously about existential risk? A first approximation to an answer is "yes, by talking about it." But this answer requires substantial qualification: if the speaker or the speaker's claims have low credibility in the eyes of the audience, the speaker will be almost entirely unsuccessful in persuading the audience to think seriously about existential risk. Worse, speakers who have low credibility in the eyes of an audience member actively decrease that audience member's receptiveness to thinking about existential risk. Rather perversely, speakers who have low credibility in the eyes of a sufficiently large fraction of their audience systematically raise existential risk by decreasing people's inclination to think about it. This is true whether or not the speakers' claims are valid.

As Yvain has discussed in his excellent article titled The Trouble with "Good":

To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote "cats" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote "Palestinians" a few points. Richard Dawkins just said something especially witty, so you up-vote "atheism". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.

When Person X makes a claim which an audience member finds uncredible, the audience member's brain (semiconsciously) makes a mental note of the form "Boo for Person X's claims!"  If the audience member also knows that Person X is an advocate of existential risk reduction, the audience member's brain may (semiconsciously) make a mental note of the type "Boo for existential risk reduction!"

The negative reaction to Person X's claims is especially strong if the audience member perceives Person X's claims as arising from a (possibly subconscious) attempt on Person X's part to attract attention and gain higher status, or even simply to feel as though he or she has high status. As Yvain says in his excellent article titled That other kind of status:

But many, maybe most human actions are counterproductive at moving up the status ladder. 9-11 Conspiracy Theories are a case in point. They're a quick and easy way to have most of society think you're stupid and crazy. So is serious interest in the paranormal or any extremist political or religious belief. So why do these stay popular?

[...]

a person trying to estimate zir social status must balance two conflicting goals. First, ze must try to get as accurate an assessment of status as possible in order to plan a social life and predict others' reactions. Second, ze must construct a narrative that allows them to present zir social status as as high as possible, in order to reap the benefits of appearing high status.

[...]

In this model, people aren't just seeking status, they're (also? instead?) seeking a state of affairs that allows them to believe they have status. Genuinely having high status lets them assign themselves high status, but so do lots of other things. Being a 9-11 Truther works for exactly the reason mentioned in the original quote: they've figured out a deep and important secret that the rest of the world is too complacent to realize.

I'm presently a graduate student in pure mathematics. During graduate school I've met many smart people who I wish would take existential risk more seriously. Most such people who have heard of Eliezer do not find his claims credible. My understanding is that the reason for this is that Eliezer has made some claims which they perceive as falling under the above rubric, and the strength of their negative reaction to these has tarnished their mental image of all of Eliezer's claims. Since Eliezer supports existential risk reduction, I believe that this has made them less inclined to think about existential risk than they were before they heard of Eliezer.

There is also a social effect which compounds the issue just described: even people who are not directly put off by such claims become less likely to think seriously about existential risk, because they want to avoid being perceived as associated with claims that others find uncredible.

I'm very disappointed that Eliezer has made statements such as:

If I got hit by a meteorite now, what would happen is that Michael Vassar would take over sort of taking responsibility for seeing the planet through to safety...Marcello Herreshoff would be the one tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don't know of any other person who could do that...

which are easily construed as claims that his work has higher expected value to humanity than the work of virtually all other humans in existence. Even if such claims are true, people do not have the information they need to verify them, and so virtually everybody who could be helping to reduce existential risk finds such claims uncredible. Many such people have an especially negative reaction to such claims because the claims can be viewed as arising from a tendency toward status grubbing, and humans are very strongly wired to be suspicious of those who they suspect of vying for inappropriately high status. I believe that people who come into contact with statements of Eliezer's like the one quoted above are statistically less likely to work to reduce existential risk than they were before coming into contact with such statements. I therefore believe that by making such claims, Eliezer has increased existential risk.

I would go further than that: I presently believe that donating to SIAI has negative expected impact on existential risk reduction, because SIAI staff are making uncredible claims which are poisoning the existential risk reduction meme. This is a matter on which reasonable people can disagree. In a recent comment, Carl Shulman expressed the view that though SIAI has had some negative impact on the existential risk reduction meme, its net impact on the meme is positive. In any case, there's definitely room for improvement on this point.

Last July I made a comment raising this issue and Vladimir_Nesov suggested that I contact SIAI. Since then I have corresponded with Michael Vassar about this matter. My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk. I may have misunderstood Michael's position and encourage him to make a public statement clarifying his position on this matter. If I have correctly understood his position, I do not find Michael Vassar's position on this matter credible.

I believe that if Carl Shulman is right, then donating to SIAI has positive expected impact on existential risk reduction. I believe that even if this is the case, a higher expected value strategy is to withhold donations from SIAI and to inform SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible. I suggest that those who share my concerns adopt the latter policy until their concerns have been resolved.

Before I close, I should emphasize that my post should not be construed as an attack on Eliezer. I view Eliezer as an admirable person and don't think that he would ever knowingly do something that raises existential risk. Roko's Aspergers Poll suggests a strong possibility that the Less Wrong community exhibits an unusually high abundance of the traits associated with Aspergers Syndrome. It would not be at all surprising if the founders of Less Wrong have a similar unusual abundance of the traits associated with Aspergers Syndrome. I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.

Comments (613)

Sort By: Controversial
Comment author: Emile 15 August 2010 04:52:22PM 0 points [-]

My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk.

Seems like a reasonable position to me.

An important part of existential risk reduction is making sure that people who are likely to work on AI, or fund it, have read the sequences, and are at least aware of how most possible minds are not minds we would want, and of how dangerous recursive self-improvement could be.

Comment author: Thomas 15 August 2010 09:49:00AM 1 point [-]

Just take the best of anybody and discard the rest. Yudkowsky has some very good points (about 80% of his writings, in my view) - take them and say thank you.

When he or SIAI misses the point, to put it mildly, you know better anyway, don't you?

Comment author: multifoliaterose 15 August 2010 10:36:49AM 7 points [-]

I agree that Yudkowsky has some very good points.

My purpose in making the top level post is as stated: to work against poisoning the meme.

Comment author: Thomas 15 August 2010 10:48:21AM 0 points [-]

My point is that you can "poison the meme" only for the fools. A wise individual can see for zerself what to pick and what not to pick.

Comment author: Jonathan_Graehl 16 August 2010 10:06:00PM -1 points [-]

Admirable pluck.

But expressing strength doesn't make people strong, or strength free.

Comment author: rabidchicken 17 August 2010 06:17:31AM *  0 points [-]

Come on... Who does not love being a social outcast? I made a decision when I was about 12 that rather than trying to conform to other people's expectations of me, I was going to do / express support for exactly what I thought made sense, even if something I supported was related to something I could not support, and then get to know people who seemed to be making similar decisions. It's arrogant and has numerous flaws, but it has generally worked for me. Social status and popularity are overrated, compared to the benefits of meeting a large number of people you can interact with freely.

Comment author: KrisC 17 August 2010 06:30:07AM 1 point [-]

This works fine as long as you don't find yourself operating within a hierarchy.

Comment author: timtyler 15 August 2010 07:35:45AM *  3 points [-]

The one "uncredible" claim mentioned - about Eliezer being "hit by a meteorite" - sounds as though it is the kind of thing he might plausibly think. Not too much of a big deal, IMO.

As with many charities, it is easy to think the SIAI might be having a negative effect - simply because it occupies the niche of another organisation that could be doing a better job - but what to do? Things could be worse as well - probably much worse.

Comment author: multifoliaterose 15 August 2010 08:06:45AM *  6 points [-]

I suggested what to do about this problem in my post: withhold funding from SIAI, make it clear to them why you're withholding it, and promise to fund them if the issue is satisfactorily resolved, so as to give them an incentive to improve.

Comment author: timtyler 15 August 2010 11:36:55AM *  2 points [-]

I suggested what to do about this problem in my post: withhold funding from SIAI.

Right - but that's only advice for those who are already donating. Others would presumably seek reform or replacement. The decision there seems non-trivial.

Comment author: JRMayne 15 August 2010 04:25:58PM 6 points [-]

Solid, bold post.

Eliezer's comments on his personal importance to humanity remind me of the Total Perspective Vortex from Hitchhiker's. Everyone who gets perspective from the TPV goes mad; Zaphod Beeblebrox goes in and finds out he's the most important person in human history.

Eliezer's saying he's Zaphod Beeblebrox. Maybe he is, but I'm betting heavily against that for the reasons outlined in the post. I expect AI progress of all sorts to come from people who are able to dedicate long, high-productivity hours to the cause, and who don't believe that they and only they can accomplish the task.

I also don't care if the statements are social naivete or not; I think the statements that indicate that he is the most important person in human history - and that seems to me to be what he's saying - are so seriously mistaken, and made with such a high confidence level, as to massively reduce my estimated likelihood that SIAI is going to be productive at all.

And that's a good thing. Throwing money into a seriously suboptimal project is a bad idea. SIAI may be good at getting out the word about existential risk (and I do think existential risk is serious, under-discussed business), but the indicators are that it's not going to solve it. I won't give to SIAI even if Eliezer stops saying these things, because it appears he'll still be thinking them.

I expect AI progress to come incrementally, BTW - I don't expect the Foomination. And I expect it to come from Google or someone similar; a large group of really smart, really hard-working people.

I could be wrong.

--JRM

Comment author: Eliezer_Yudkowsky 18 August 2010 03:03:07PM 0 points [-]

And saddened once again at how people seem unable to distinguish "multi claims that something Eliezer said could be construed as claim X" and "Eliezer claimed X!"

Please note that for the next time you're worried about damaging an important cause's PR, multi.

Comment author: JRMayne 18 August 2010 04:52:19PM 11 points [-]

Um, I wasn't basing my conclusion on multifoliaterose's statements. I had made the Zaphod Beeblebrox analogy due to the statements you personally have made. I had considered doing an open thread comment on this very thing.

Which of these statements do you reject?:

  1. FAI is the most important project on earth, right now, and probably ever.

  2. FAI may be the difference between doom and survival for a multiverse of [very large number] of sentient beings. No project in human history is of greater importance.

  3. You are the most likely person - and SIAI the most likely agency, because of you - to accomplish saving the multiverse.

Number 4 is unnecessary for your being the most important person on earth, but:

  4. People who disagree with you are either stupid or ignorant. If only they had read the sequences, then they would agree with you. Unless they were stupid.

And then you've blamed multi for this. He is trying to help an important cause; both multifoliaterose and XiXiDu are, in my opinion, acting in a manner they believe will help the existential risk cause.

And your final statement, that multifoliaterose is damaging an important cause's PR, appears entirely deaf to multi's post. He's trying to help the cause - he and XiXiDu are orders of magnitude more sympathetic to the cause of non-war existential risk than just about anyone. You appear to have conflated "Eliezer Yudkowsky" with "AI existential risk."

Again.

I might be wrong about my interpretation - but I don't think I am. If I am wrong, other very smart people who want to view you favorably have done similar things. Maybe the flaw isn't the collective ignorance and stupidity of other people. Just a thought.

--JRM

Comment author: JGWeissman 18 August 2010 06:39:40PM 7 points [-]

Which of those statements do you reject?

Comment author: multifoliaterose 18 August 2010 04:08:02PM 9 points [-]

My understanding of JRMayne's remark is that he himself construes your statements in the way that I mentioned in my post.

If JRMayne has misunderstood you, you can effectively deal with the situation by making a public statement about what you meant to convey.

Note that you have not made a disclaimer ruling out the interpretation that you believe you're the most important person in human history. I encourage you to make such a disclaimer if JRMayne has misunderstood you.

Comment author: XiXiDu 18 August 2010 03:23:21PM *  8 points [-]

I have to disagree based on the following evidence:

Q: The only two legitimate occupations for an intelligent person in our current world? (Answer)

and

"At present I do not know of any other person who could do that." (Reference)

This makes it reasonable to state that you think you might be the most important person in the world.

Comment author: Eliezer_Yudkowsky 18 August 2010 03:26:54PM 0 points [-]

I love that "makes it reasonable" part. Especially in a discussion on what you shouldn't say in public.

Now we're to avoid stating any premises from which any absurd conclusions seem reasonable to infer?

This would be a reductio of the original post if the average audience member consistently applied this sort of reasoning; but of course it is motivated on XiXiDu's part, not necessarily something the average audience member would do.

Note that saying "But you must therefore argue X..." where the said person has not actually uttered X, but it would be a soldier against them if they did say X, is a sign of political argument gone wrong.

Comment author: XiXiDu 18 August 2010 03:32:24PM 3 points [-]

I'm too dumb to grasp what you just said in its full complexity. But I believe you are indeed one of the most important people in the world. Further, (1) I don't see what is wrong with that; (2) it is positive for public relations, as it attracts people to donate money (evidence: Jesus); (3) it won't hurt academic relations, as you are always able to claim that you were misunderstood.

Comment author: JRMayne 18 August 2010 04:59:12PM 8 points [-]

Gosh, I find this all quite cryptic.

Suppose I, as Lord Chief Prosecutor of the Heathens say:

  1. All heathens should be jailed.

  2. Mentally handicapped Joe is a heathen; he barely understands that there are people, much less the One True God.

One of my opponents says I want Joe jailed. I have not actually uttered that I want Joe jailed, and it would be a soldier against me if I had, because that's an unpopular position. This is a mark of a political argument gone wrong?

I'm trying to find another logical conclusion to XiXiDu's cited statements (or a raft of others in the same vein.) Is there one I don't see? Is it just that you're probably the most important entity in history, but, you know, maybe not? Is it that there's only a 5% chance that you're the most important person in human history?

I have not argued that you should not say these things, BTW. I have argued that you probably should not think them, because they are very unlikely to be true.

Comment author: JGWeissman 18 August 2010 06:45:20PM 2 points [-]

In this case I would ask you if you really want Joe jailed, or if, when you said that "All heathens should be jailed", you were using the word "heathen" in the strong sense of explicitly rejecting the "One True God", rather than the weak sense in which Joe is a "heathen" for not understanding the concept.

And if you answer that you meant only that strong heathens should be jailed, I would still condemn you for that policy.

Comment author: nhamann 15 August 2010 05:33:46PM 6 points [-]

I expect AI progress to come incrementally, BTW - I don't expect the Foomination. And I expect it to come from Google or someone similar; a large group of really smart, really hard-working people.

I'd like to point out that it's not either/or: it's possible (likely?) that it will take decades of hard work and incremental progress by lots of really smart people to advance AI science to a point where an AI could FOOM.

Comment author: [deleted] 15 August 2010 02:46:09PM 13 points [-]

I am one of those who haven't been convinced by the SIAI line. I have two main objections.

First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies. (This is also the basis of my skepticism about cryonics.) If you're going to say "Technology X is likely to be developed" then I'd like to see your prediction mechanism and whether it's worked in the past.

Second, shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.

I think multifoliaterose is right that there's a PR problem, but it's not just a PR problem. It seems, unfortunately, to be a problem with having enough justification for claims, and a problem with connecting to the world of professional science. I think the PR problems arise from being too disconnected from the demands placed on other scientific or science policy organizations. People who study other risks, say epidemic disease, have to get peer-reviewed, they have to get government funding -- their ideas need to pass a round of rigorous criticism. Their PR is better by necessity.

Comment author: orthonormal 15 August 2010 03:46:57PM 10 points [-]

First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies.

As was mentioned in other threads, SIAI's main arguments rely on disjunctions and antipredictions more than conjunctions and predictions. That is, if several technology scenarios lead to the same broad outcome, that's a much stronger claim than one very detailed scenario.

For instance, the claim that AI presents a special category of existential risk is supported by such a disjunction. There are several technologies today which we know would be very dangerous with the right clever 'recipe' - we can make simple molecular nanotech machines, we can engineer custom viruses, we can hack into some very sensitive or essential computer systems, etc. What these all imply is that a much smarter agent with a lot of computing power is a severe existential threat if it chooses to be.
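
To make the structure of that argument concrete, here is a minimal numeric sketch (the probabilities are invented purely for illustration; they are not anyone's actual estimates). A single detailed scenario is a conjunction of steps and is weaker than its weakest step, while the broad claim only needs one of several independent routes to hold:

    # Illustrative probabilities only -- invented for this sketch.
    p_routes = [0.10, 0.15, 0.20]   # independent routes by which a smarter agent could be dangerous

    # Conjunction: one detailed scenario that needs every step to hold.
    p_conjunction = 1.0
    for p in p_routes:
        p_conjunction *= p          # 0.003 -- weaker than any single step

    # Disjunction: the broad outcome holds if *any* route works out.
    p_none = 1.0
    for p in p_routes:
        p_none *= (1 - p)
    p_disjunction = 1 - p_none      # ~0.39 -- stronger than any single route

    print(p_conjunction, p_disjunction)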

Comment author: Jonathan_Graehl 16 August 2010 09:51:13PM 1 point [-]

shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.

Yes. It's hardly urgent, since AI researchers are nowhere near a runaway intelligence. But on the other hand, control of AI is going to be crucial+difficult eventually, and it would be good for researchers to be aware of it, if they aren't.

Comment author: LucasSloan 19 August 2010 05:09:59AM 3 points [-]

It's hardly urgent, since AI researchers are nowhere near a runaway intelligence.

Sadly, there's no guarantee of that.

Comment author: Jonathan_Graehl 19 August 2010 08:21:44AM *  2 points [-]

Right, it's just (in my and most other AI researchers'[*] opinion) overwhelmingly likely that we are in fact nowhere near (the capability of) it. Although it's interesting to me that I don't feel there's that much difference in probability of "(good enough to) run away improving itself quickly past human level AI" in the next year, and in the next 10 years - both extremely close to 0 is the most specific I can be at this point. That suggests I haven't really quantified my beliefs exactly yet.

[*] I actually only work on natural language processing using really dumb machine learning, i.e. not general AI.

Comment author: ciphergoth 16 August 2010 06:14:01PM *  5 points [-]

There needs to be an article on this point. In the absence of a really good way of deciding what technologies are likely to be developed, you are still making a decision. You haven't signed up yet; whether you like it or not, that is a decision. And it's a decision that only makes sense if you think technology X is unlikely to be developed, so I'd like to see your prediction mechanism and whether it's worked in the past. In the absence of really good information, we sometimes have to decide on the information we have.

EDIT: I was thinking about cryonics when I wrote this, though the argument generalizes.

Comment author: [deleted] 16 August 2010 11:29:49PM 0 points [-]

My point, with this, is that everybody is risk-averse and everybody has a time preference. The less is known about the prospects of a future technology, the less willing people are to invest resources into ventures that depend on the future development of that technology. (Whether to take advantage of the technology -- as in cryonics -- or to mitigate its dangers -- as in FAI.) Also, the farther in the future the technology is, the less people care about it; we're not willing to spend much to achieve benefits or forestall risks in the far future.

I don't think it's reasonable to expect people to change these ordinary features of economic preference. If you're going to ask people to chip in to your cause, and the time horizon is too far, or the uncertainty too high, they're not going to want to spend their resources that way. And they'll be justified.

Note: yes, there ought to be some magnitude of benefit or cost that overcomes both risk aversion and time preference. Maybe you're going to argue that existential risk and cryonics are issues of such great magnitude that they outweigh both risk aversion and time preference.

But: first of all, the importance of the benefit or cost is also an unknown (and indeed subjective.) How much do you value being alive? And, second of all, nobody says our risk and time preferences are well-behaved. There may be a date so far in the future that I don't care about anything that happens then, no matter how good or how bad. There may be loss aversion -- an amount of money that I'm not willing to risk losing, no matter how good the upside. I've seen some experimental evidence that this is common.

Comment author: wedrifid 17 August 2010 05:06:43AM 3 points [-]

My point, with this, is that everybody is risk-averse and everybody has a time preference.

From what I understand this applies to most people but not everyone, especially outside of contrived laboratory circumstances. Overconfidence and ambition essentially amount to risk-loving choices for some major life choices.

Comment author: timtyler 16 August 2010 06:22:36PM *  0 points [-]

you haven't signed up yet; whether you like it or not, that is a decision. And it's a decision that only makes sense if you think technology X is unlikely to be developed

What is it that is making you think that whatever SarahC hasn't "signed up" to is having a positive effect - and that she can't do something better with her resources?

Comment author: John_Maxwell_IV 16 August 2010 12:24:34AM *  5 points [-]

First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies. (This is also the basis of my skepticism about cryonics.) If you're going to say "Technology X is likely to be developed" then I'd like to see your prediction mechanism and whether it's worked in the past.

Let's keep in mind that your estimated probabilities of various technological advancements occurring and your level of confidence in those estimates are completely distinct... In particular, here you seem to express low estimated probabilities of various advancements occurring, and you justify this by saying "we really have no idea". This seems like a complete non sequitur. Maybe you have a correct argument in your mind, but you're not giving us all the pieces.

Comment author: [deleted] 16 August 2010 12:37:09AM 6 points [-]
  1. Technology X is likely to be developed in a few decades.
  2. Technology X is risky.
  3. We must take steps to mitigate the risk.

If you haven't demonstrated 1 -- if it's still unknown -- you can't expect me to believe 3. The burden of proof is on whoever's asking for money for a new risk-mitigating venture, to give strong evidence that the risk is real.

Comment author: Aleksei_Riikonen 16 August 2010 01:35:58AM *  4 points [-]

So you think a danger needs to likely arrive in a few decades for it to merit attention?

I think that is quite irresponsible. No law of physics states that all problems can certainly be solved very well in a few decades (the solutions for some problems might even necessarily involve political components, btw), so starting preparations earlier can be necessary.

Comment author: John_Maxwell_IV 16 August 2010 01:03:42AM *  2 points [-]

I see "burden of proof" as a misconcept in the same way that someone "deserving" something is. A better way of thinking about this: "You seem to be making a strong claim. Mind sharing the evidence for your claim for me? ...I disagree that the evidence you present justifies your claim."

For what it's worth, I also see "must _" as a misconcept--although "must _ to _" is not. It's an understandable usage if the "to _" clause is implicit, but that doesn't seem true in this case. So to fix up SIAI's argument, you could say that these are the statements whose probabilities are being contested:

  1. If SarahC takes action Y before the development of Technology X and Technology X is developed, the expected value of her action will exceed its cost.
  2. Technology X will be developed.

And depending on their probabilities, the following may or may not be true:

  • SarahC wants to take action Y.

Pretty much anything you say that's not relevant to one of statements 1 or 2 (including statements that certain people haven't been "responsible" enough in supporting their claims) is completely irrelevant to the question of whether you want to take action Y. You already have (or ought to be able to construct) probability estimates for each of 1 and 2.

Comment author: Perplexed 16 August 2010 01:53:42AM 1 point [-]

Your grasp of decision theory is rather weak if you are suggesting that when Technology X is developed is irrelevant to SarahC's decision. Similarly, you seem to suggest that the ratio of value to cost is irrelevant and that all that matters is which is bigger. Wrong again.

But your real point was not to set up a correct decision problem, but rather to suggest that her questions about whether "certain people" have been "responsible" are irrelevant. Well, I have to disagree. If action Y is giving money to "certain people", then their level of "responsibility" is very relevant.

I did enjoy your observations regarding "burden of proof" and "must", though probably not as much as you did.

Comment author: John_Maxwell_IV 16 August 2010 02:33:08AM 1 point [-]

Your grasp of decision theory is rather weak if you are suggesting that when Technology X is developed is irrelevant to SarahC's decision.

Of course that is important. I didn't want to include a lot of qualifiers.

I'm not trying to make a bulletproof argument so much as concisely give you an idea of why I think SarahC's argument is malformed. My thinking is that this should be enough for intellectually honest readers, as I don't have important insights to offer beyond the concise summary. If you think I ought to write longer posts with more qualifications for readers who aren't good at taking ideas seriously, feel free to say that.

Similarly, you seem to suggest that the ratio of value to cost is irrelevant and that all that matters is which is bigger. Wrong again.

Really? So in some circumstances it is rational to take an action for which the expected cost is greater than the expected value? Or it is irrational to take an action for which the expected value exceeds the expected cost? (I'm using "rational" to mean "expected utility maximizing", "cost" to refer to negative utility, and "value" to refer to positive utility--hopefully at this point my thought process is transparent.)

If action Y is giving money to "certain people", then their level of "responsibility" is very relevant.

It would be a well-formed argument to say that because SIAI folks make strong claims without justifying them, they won't use money SarahC donates well. As far as I can tell, SarahC has not explicitly made that argument. (Recall I said that she might have a correct argument in her mind but she isn't giving us all the pieces.)

I did enjoy your observations regarding "burden of proof" and "must", though probably not as much as you did.

Please no insults - this isn't you versus me, is it?

Comment author: Perplexed 16 August 2010 02:53:27AM 1 point [-]

Similarly, you seem to suggest that the ratio of value to cost is irrelevant and that all that matters is which is bigger. Wrong again.

Really? So in some circumstances it is rational to take an action for which the expected cost is greater than the expected value?

No, your error was in the other direction. If you look back carefully, you will notice that the ratio is being calculated conditionally on Technology X being developed. Given that the cost is sunk regardless of whether the technology appears, it is possible that SarahC should not act even though the (conditionally) expected return exceeds the cost.
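
To make Perplexed's point concrete, here is a minimal worked example (all numbers are hypothetical, chosen only to show the structure of the decision). The return conditional on Technology X being developed can exceed the cost even while the unconditional expected value of acting is negative, because the cost is paid now whether or not X ever appears:

    # All numbers are hypothetical, chosen only to illustrate the structure.
    p_x        = 0.2    # probability that Technology X is ever developed
    value_if_x = 400.0  # value of action Y, conditional on X being developed
    cost       = 100.0  # sunk now, whether or not X ever appears

    conditional_return = value_if_x                # 400 > 100: looks worthwhile given X
    unconditional_ev   = p_x * value_if_x - cost   # 0.2 * 400 - 100 = -20: a bad bet overall

    print(conditional_return > cost, unconditional_ev)   # True -20.0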

Please no insults - this isn't you versus me, is it?

Shouldn't be. Nor you against her. I was catty only because I imagined that you were being catty. If you were not, then I surely apologize.

Comment author: NancyLebovitz 15 August 2010 11:17:33PM 5 points [-]

Prediction is hard, especially about the future.

One thing that intrigues me is snags. Did anyone predict how hard it would be to improve batteries, especially batteries big enough for cars?

Comment author: nhamann 15 August 2010 06:22:22PM *  6 points [-]

Second, shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.

I'm not sure what you refer to by "actual AI." There is a sub-field of academic computer science which calls itself "Artificial Intelligence," but it's not clear that this is anything more than a label, or that this field does anything more than use clever machine learning techniques to make computer programs accomplish things that once seemed to require intelligence (like playing chess, driving a car, etc.)

I'm not sure why it is a requirement that an organization concerned with the behavior of hypothetical future engineered minds would need to be in contact with these researchers.

Comment author: [deleted] 15 August 2010 06:59:28PM 3 points [-]

Yes, the subfield of computer science is what I'm referring to.

I'm not sure that the difference between "clever machine learning techniques" and "minds" is as hard and fast as you make it. A machine that drives a car is doing one of the things a human mind does; it may, in some cases, do it through a process that's structurally similar to the way the human mind does it. It seems to me that machines that can do these simple cognitive tasks are the best source of evidence we have today about hypothetical future thinking machines.

Comment author: nhamann 15 August 2010 08:10:18PM 5 points [-]

I'm not sure that the difference between "clever machine learning techniques" and "minds" is as hard and fast as you make it.

I gave the wrong impression here. I actually think that machine learning might be a good framework for thinking about how parts of the brain work, and I am very interested in studying machine learning. But I am skeptical that more than a small minority of projects where machine learning techniques have been applied to solve some concrete problem have shed any light on how (human) intelligence works.

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

Comment author: JoshuaZ 15 August 2010 10:31:10PM *  4 points [-]

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

One obvious piece of evidence is that many forms of narrow learning are mathematically incapable of doing much. There are, for example, a whole host of theorems about what different classes of neural networks can actually recognize, and the results aren't very impressive. Similarly, support vector machines have a lot of trouble learning anything that isn't a very simple statistical model, and even then humans need to decide which stats are relevant. Other linear classifiers run into similar problems.
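
As one concrete instance of the kind of hard limit those theorems describe, the toy sketch below (plain Python, illustrative only) trains a single-layer perceptron - the simplest linear classifier - on XOR. No amount of training lets it classify all four points, because XOR is not linearly separable:

    # Toy example: a single-layer perceptron cannot represent XOR.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR truth table
    w, b = [0.0, 0.0], 0.0

    for _ in range(1000):                      # far more epochs than convergence would need
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred                # classic perceptron update rule
            w[0] += err * x1
            w[1] += err * x2
            b += err

    correct = sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
                  for (x1, x2), t in data)
    print(correct, "of 4 correct")             # never reaches 4 of 4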

Comment author: Simulation_Brain 18 August 2010 06:20:49AM 3 points [-]

I work in this field, and was under approximately the opposite impression; that voice and visual recognition are rapidly approaching human levels. If I'm wrong and there are sharp limits, I'd like to know. Thanks!

Comment author: timtyler 18 August 2010 06:31:35AM *  2 points [-]

Machine intelligence has surpassed "human level" in a number of narrow domains. Already, humans can't manipulate enough data to do anything remotely like what a search engine or a stockbot can do.

The claim seems to be that in narrow domains there are often domain-specific "tricks" - that wind up not having much to do with general intelligence - e.g. see chess and go. This seems true - but narrow projects often broaden out. Search engines and stockbots really need to read and understand the web. The pressure to develop general intelligence in those domains seems pretty strong.

Those who make a big deal about the distinction between their projects and "mere" expert systems are probably mostly trying to market their projects before they are really experts at anything.

One of my videos discusses the issue of whether the path to superintelligent machines will be "broad" or "narrow":

http://alife.co.uk/essays/on_general_machine_intelligence_strategies/

Comment author: komponisto 15 August 2010 10:19:50PM 11 points [-]

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

Although one should be very, very careful not to confuse the opinions of someone like Goertzel with those of the people (currently) at SIAI, I think it's fair to say that most of them (including, in particular, Eliezer) hold a view similar to this. And this is the location -- pretty much the only important one -- of my disagreement with those folks. (Or, rather, I should say my differing impression from those folks -- to make an important distinction brought to my attention by one of the folks in question, Anna Salamon.) Most of Eliezer's claims about the importance of FAI research seem obviously true to me (to the point where I marvel at the fuss that is regularly made about them), but the one that I have not quite been able to swallow is the notion that AGI is only decades away, as opposed to a century or two. And the reason is essentially disagreement on the above point.

At first glance this may seem puzzling, since, given how much more attention is given to narrow AI by researchers, you might think that someone who believes AGI is "fundamentally different" from narrow AI might be more pessimistic about the prospect of AGI coming soon than someone (like me) who is inclined to suspect that the difference is essentially quantitative. The explanation, however, is that (from what I can tell) the former belief leads Eliezer and others at SIAI to assign (relatively) large amounts of probability mass to the scenario of a small set of people having some "insight" which allows them to suddenly invent AGI in a basement. In other words, they tend to view AGI as something like an unsolved math problem, like those on the Clay Millennium list, whereas it seems to me like a daunting engineering task analogous to colonizing Mars (or maybe Pluto).

This -- much more than all the business about fragility of value and recursive self-improvement leading to hard takeoff, which frankly always struck me as pretty obvious, though maybe there is hindsight involved here -- is the area of Eliezer's belief map that, in my opinion, could really use more public, explicit justification.

Comment author: Vladimir_Nesov 15 August 2010 10:38:55PM 1 point [-]

Note that allowing for a possibility of sudden breakthrough is also an antiprediction, not a claim for a particular way things are. You can't know that no such thing is possible without already having an understanding of the solution at hand; hence you must accept the risk. It's also possible that it'll take a long time.

Comment author: jacob_cannell 25 August 2010 02:56:41AM 1 point [-]

I'm reading through and catching up on this thread, and rather strongly agreed with your statement:

Eliezer and others at SIAI to assign (relatively) large amounts of probability mass to the scenario of a small set of people having some "insight" which allows them to suddenly invent AGI in a basement. In other words, they tend to view AGI as something like an unsolved math problem, like those on the Clay Millennium list, whereas it seems to me like a daunting engineering task analogous to colonizing Mars (or maybe Pluto).

However, pondering it again, I realize there is an epistemological spectrum ranging from math on the one side to engineering on the other. Key insights into new algorithms can undoubtedly speed up progress, and such new insights often can be expressed as pure math, but at the end of the day it is a grand engineering (or reverse engineering) challenge.

However, I'm somewhat taken aback when you say, "the notion that AGI is only decades away, as opposed to a century or two."

A century or two?

Comment author: nhamann 16 August 2010 03:48:56AM *  3 points [-]

I don't think AGI in a few decades is very farfetched at all. There's a heckuvalot of neuroscience being done right now (the Society for Neuroscience has 40,000 members), and while it's probably true that much of that research is concerned most directly with mere biological "implementation details" and not with "underlying algorithms" of intelligence, it is difficult for me to imagine that there will still be no significant insights into the AGI problem after 3 or 4 more decades of this amount of neuroscience research.

Comment author: komponisto 16 August 2010 04:53:11AM *  3 points [-]

Of course there will be significant insights into the AGI problem over the coming decades -- probably many of them. My point was that I don't see AGI as hard because of a lack of insights; I see it as hard because it will require vast amounts of "ordinary" intellectual labor.

Comment author: timtyler 16 August 2010 06:28:37AM 2 points [-]

...but you don't really know - right?

You can't say with much confidence that there's no AIXI-shaped magic bullet.

Comment author: jacob_cannell 25 August 2010 03:14:50AM *  -1 points [-]

AIXI-shaped magic bullet?

AIXI's contribution is more philosophical than practical. I find a depressing over-emphasis here on Bayesian probability theory as the 'math' of choice, versus computational complexity theory, which is the proper domain.

The most likely outcome of a math breakthrough will be some rough lower and/or upper bounds on the shape of the intelligence over space/time complexity function. And right now the most likely bet seems to be that the brain is pretty well optimized at the circuit level, and that the best we can do is reverse engineer it.

EY and the math folk here reach a very different conclusion, but I have yet to find his well-considered justification. I suspect that the major reason the mainstream AI community doesn't subscribe to SIAI's math magic bullet theory is that they hold the same position outlined above: i.e., that when we get the math theorems, all they will show is what we already suspect: human level intelligence requires X memory bits and Y bit ops/second, where X and Y are roughly close to brain levels.

This, if true, kills the entirety of the software recursive self-improvement theory. The best that software can do is approach the theoretical optimum complexity class for the problem, and then after that point all one can do is fix it into hardware for a further large constant gain.

I explore this a little more here.

Comment author: timtyler 25 August 2010 05:51:55AM 0 points [-]

right now the most likely bet seems to be that the brain is pretty well optimized at the circuit level, and that the best we can do is reverse engineer it.

That seems like crazy talk to me. The brain is not optimal - not its hardware or software - and not by a looooong way! Computers have already steam-rollered its memory and arithmetic units - and that happened before we even had nanotechnology computing components. The rest of the brain seems likely to follow.

Comment author: komponisto 16 August 2010 07:38:22AM *  2 points [-]

That's right; I'm not an expert in AI. Hence I am describing my impressions, not my fully Aumannized Bayesian beliefs.

Comment author: nhamann 16 August 2010 06:10:36AM 9 points [-]

I'm having trouble understanding how exactly you think the AGI problem is different from any really hard math problem. Take P != NP, for instance, and the attempted proof that's been making the rounds on various blogs. If you've skimmed any of the discussion you can see that even this attempted proof piggybacks on "vast amounts of 'ordinary' intellectual labor," largely consisting of mapping out various complexity classes and their properties and relations. There's probably been at least 30 years of complexity theory research required to make that proof attempt even possible.

I think you might be able to argue that even if we had an excellent theoretical model of an AGI, that the engineering effort required to actually implement it might be substantial and require several decades of work (e.g. Von Neumann architecture isn't suitable for AGI implementation, so a great deal of computer engineering has to be done).

If this is your position, I think you might have a point, but I still don't see how the effort is going to take 1 or 2 centuries. A century is a loooong time. A century ago humans barely had powered flight.

Comment author: komponisto 16 August 2010 07:35:04AM 7 points [-]

Take P != NP, for instance, and the attempted proof that's been making the rounds on various blogs. If you've skimmed any of the discussion you can see that even this attempted proof piggybacks on "vast amounts of 'ordinary' intellectual labor,"

By no means do I want to downplay the difficulty of P vs NP; all the same, I think we have different meanings of "vast" in mind.

The way I think about it is: think of all the intermediate levels of technological development that exist between what we have now and outright Singularity. I would only be half-joking if I said that we ought to have flying cars before we have AGI. There are of course more important examples of technologies that seem easier than AGI, but which themselves seem decades away. Repair of spinal cord injuries; artificial vision; useful quantum computers (or an understanding of their impossibility); cures for the numerous cancers; revival of cryonics patients; weather control. (Some of these, such as vision, are arguably sub-problems of AGI: problems that would have to be solved in the course of solving AGI.)

Actually, think of math problems if you like. Surely there are conjectures in existence now -- probably some of them already famous -- that will take mathematicians more than a century from now to prove (assuming no Singularity or intelligence enhancement before then). Is AGI significantly easier than the hardest math problems around now? This isn't my impression -- indeed, it looks to me more analogous to problems that are considered "hopeless", like the "problem" of classifying all groups, say.

Comment author: Eliezer_Yudkowsky 18 August 2010 02:36:25PM 10 points [-]

By no means do I want to downplay the difficulty of P vs NP; all the same, I think we have different meanings of "vast" in mind.

I hate to go all existence proofy on you, but we have an existence proof of a general intelligence - accidentally sneezed out by natural selection, no less, which has severe trouble building freely rotating wheels - and no existence proof of a proof of P != NP. I don't know much about the field, but from what I've heard, I wouldn't be too surprised if proving P != NP is harder than building FAI for the unaided human mind. I wonder if Scott Aaronson would agree with me on that, even though neither of us understand the other's field? (I just wrote him an email and asked, actually; and this time remembered not to say my opinion before asking for his.)

Comment author: Daniel_Burfoot 18 August 2010 06:03:12PM 4 points [-]

but I still don't see how the effort is going to take 1 or 2 centuries. A century is a loooong time.

I think the following quote is illustrative of the problems facing the field:

After [David Marr] joined us, our team became the most famous vision group in the world, but the one with the fewest results. His idea was a disaster. The edge finders they have now using his theories, as far as I can see, are slightly worse than the ones we had just before taking him on. We've lost twenty years.

-Marvin Minsky, quoted in "AI" by Daniel Crevier.

Some notes and interpretation of this comment:

  • Most vision researchers, if asked who is the most important contributor to their field, would probably answer "David Marr". He set the direction for subsequent research in the field; students in introductory vision classes read his papers first.
  • Edge detection is a tiny part of vision, and vision is a tiny part of intelligence, but at least in Minsky's view, no progress (or reverse progress) was achieved in twenty years of research by the leading lights of the field.
  • There is no standard method for evaluating edge detector algorithms, so it is essentially impossible to measure progress in any rigorous way.

I think this kind of observation justifies AI-timeframes on the order of centuries.

Comment author: Eliezer_Yudkowsky 18 August 2010 02:31:02PM 3 points [-]

I'm not sure why it is a requirement that an organization concerned with the behavior of hypothetical future engineered minds would need to be in contact with these researchers.

You have to know some of their math (some of it is interesting, some not), but this does not require getting on the phone with them and asking them to explain their math, in response to which of course they would tell you to RTFM instead of calling them.

Comment author: pnrjulius 12 June 2012 02:58:57AM 0 points [-]

Basically, we need a PR campaign. It needs to be tightly focused: Just existential risk, don't try to sell the whole worldview at once (keep inferential distance in mind). Maybe it shouldn't even be through SIAI; maybe we should create a separate foundation called The Foundation to Reduce Existential Risk (or something). ("What do you do?" "We try to make sure the human race is still here in 1000 years. Can we interest you in our monthly donation plan?")

And if our PR campaign even slightly reduces the chances of a nuclear war or an unfriendly AI, it could be one of the most important things anyone has ever done.

Who do we know who has the resources to make such a campaign?

Comment author: Jonathan_Graehl 16 August 2010 11:12:05PM 0 points [-]

The discussion reassures me that EY is not, for anyone here, a cult leader.

I haven't evaluated SIAI carefully yet, but they do open themselves up to this sort of attack when they advocate concentrating charitable giving on the marginally most efficient utility generator (up to $1M).

Comment author: rabidchicken 17 August 2010 06:22:31AM 0 points [-]

EY is not a cult leader, he is a Lolcat herder.

Comment author: wedrifid 17 August 2010 06:37:50AM 0 points [-]

You have not behaved like a troll thus far, some of your contributions have been useful. Please don't go down that path now.

Comment author: rabidchicken 17 August 2010 09:41:28PM 1 point [-]

That was a useless and stupid thing to say even if I am a troll. My apologies.

Comment author: jsalvatier 20 August 2010 08:15:26PM 1 point [-]

I am confused: his comment reads like a joke - how is that trollish? I smiled.

Comment author: wedrifid 17 August 2010 05:11:22AM 7 points [-]

I haven't evaluated SIAI carefully yet, but they do open themselves up to this sort of attack when they advocate concentrating charitable giving on the marginally most efficient utility generator (up to $1M).

To not advocate that would seem to set them up for attacks on their understanding of economics.

I suggest "and the SIAI is the marginally most efficient utility generator" is the one that opens them up to attacks. (I'm not saying that they shouldn't make that claim.)

Comment author: ciphergoth 17 August 2010 07:28:17AM 7 points [-]

In a saner world every charity would claim this. Running a charity that you think generates utility less efficiently than some existing charity would be madness.

Comment author: Eliezer_Yudkowsky 18 August 2010 03:16:39PM 4 points [-]

In a sane world where everyone had the same altruistic component of their values, the marginal EU of all charities would roughly balance, up to the cost of discriminating them more closely. I'd have to think about what would happen if everyone had different altruistic components of their values; but if large groups of people had the same values, then there would exist some class of charities that was marginally balanced with respect to those values, and people from that group would expend the cost to pick out a member of that class but then not look too much harder. If everyone who works for a charity is optimistic and claims that their charity alone is the most marginally efficient in the group, that raises the cost of discriminating among them and they will become more marginally unbalanced.
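
A minimal sketch of that balancing claim, with made-up numbers (a toy model, not anything from the comment above): two charities with diminishing returns, where each small increment of a fixed budget goes to whichever charity currently has the higher marginal EU. The allocation ends with the two marginal EUs roughly equal, even though one charity is far more effective per early dollar:

    # Made-up numbers: two charities with diminishing returns u_i(x) = a_i * ln(1 + x),
    # so the marginal EU of the next dollar to charity i is a_i / (1 + alloc_i).
    a = [3.0, 1.0]                   # charity 0 is far more effective per early dollar
    alloc = [0.0, 0.0]
    step = 0.01

    def marginal(i):
        return a[i] / (1 + alloc[i])

    for _ in range(10000):           # allocate a budget of 100 in increments of 0.01
        i = 0 if marginal(0) >= marginal(1) else 1
        alloc[i] += step

    print([round(x, 1) for x in alloc],                # roughly [75.5, 24.5]
          [round(marginal(i), 3) for i in range(2)])   # marginal EUs roughly equal (~0.039 each)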

Comment author: ciphergoth 18 August 2010 03:40:41PM 3 points [-]

This more detailed analysis doesn't, I think, detract from my main point: in broad terms, it's not weird that SIAI claims to be the most efficient way to spend altruistically; it's weird that all charities don't claim this.

Comment author: Eliezer_Yudkowsky 18 August 2010 04:26:34PM 1 point [-]

I agree with your main point and was refining it.

Comment author: Vladimir_Nesov 17 August 2010 07:59:32PM *  5 points [-]

Running a charity that you think generates utility less efficiently than some existing charity would be madness.

Many charities could have close marginal worth, and rational allocation of resources would keep them that way. A charity that is less efficient could still perform a useful function, merely needing a decrease in funding rather than disbanding.

And you can't have statically super-efficient charities either, because marginal worth decreases with more funding. For example, a baseline SIAI yearly budget of a hundred million dollars might drive the marginal efficiency of a dollar donated to it below that of other causes.

Comment author: Larks 17 August 2010 06:22:19AM *  3 points [-]

If they/we didn't think SIAI was the most efficient utility generator and didn't disband & work for Givewell or whatever, they'd be guilty of failing to act as utility maximisers.

The belief that SIAI is the best utility generator may be incorrect, but you can't criticise someone from SIAI for making it beyond criticising them for being at SIAI, a criticism that no-one seems to make.

Comment author: wedrifid 17 August 2010 06:32:06AM 3 points [-]

If they/we didn't think SIAI was the most efficient utility generator and didn't disband & work for Givewell or whatever, they'd be guilty of failing to act as utility maximisers.

Technically not true. SIAI could actually be the optimal way for them specifically to generate utility while at the same time not being the optimal place for people to donate. For example, they could use their efforts to divert charitable donations from even worse sources to themselves and then pass them on to Givewell.

Comment author: Larks 17 August 2010 06:43:40AM 1 point [-]

I think that would be illegal, though I'm not as familiar with US rules with regard to this as UK ones. More importantly, that argument seems to rely on an unfairly expansive interpretation of what it is to work for SIAI: diverting money away from SIAI doesn't count.

Comment author: Jonathan_Graehl 17 August 2010 05:33:52AM *  1 point [-]

Sure; that's more or less what I meant. Even calling these bids by SIAI competitors to in fact offer better marginal-utility efficiency "attacks" was a little over-dramatic on my part.

I have only one objection to the economic argument: "assume there is already sufficient diversification in improving or maintaining human progress; then you should only give to SIAI" is a simplification that only works if the majority aren't convinced by that argument. I guess there's practically speaking no danger of that happening.

In other words, SIAI's claim can only be plausible if they promise to adjust their allocation of effort to ensure some diversity, in the unlikely event that they end up receiving humongous amounts of money (and I'm sure they'll say that they will).

By the way, I don't mean to say that an individual diversifying their charitable spending, or global diversity in charitable spending, is an end in itself. I just feel comforted that some of it is the kind that reduces overall risk (the risk that the perceived-most-efficient group turns out, in retrospect, to have a blind spot due to politics, group-think, laziness, or any number of human weaknesses).

Comment author: Mitchell_Porter 15 August 2010 11:51:14AM 10 points [-]

But what if you're increasing existential risk, because encouraging SIAI staff to censor themselves will make them neurotic and therefore less effective thinkers? We must all withhold karma from multifoliaterose until this undermining stops! :-)

Comment author: MaoShan 15 August 2010 08:26:59PM 3 points [-]

Aside from the body of the article, which is just "common" sense given the author's opinion against the current policies of SIAI, I found the final paragraph interesting because I also exhibit "an unusually high abundance of the traits associated with Aspergers Syndrome." Perhaps possessing that group of traits gives one a predilection to seriously consider existential risk reduction by being socially detached enough to see the bigger picture. Perhaps LW is somewhat homogeneously populated with this "certain kind" of people. So, how do we gain credibility with normal people?

Comment author: JoshuaZ 15 August 2010 04:28:32PM 7 points [-]

I disagree strongly with this post. In general, it is a bad idea to refrain from making claims that one believes are true simply because those claims will make people less likely to listen to other claims. In that direction lies the downward spiral of emotional manipulation, rhetoric, and other things not conducive to rational discourse.

Would one under this logic encourage the SIAI to make statements that are commonly accepted but wrong in order to make people more likely to listen to the SIAI? If not, what is the difference?

Comment author: timtyler 15 August 2010 04:32:54PM 2 points [-]

It seems as though the latter strategy could backfire - if the false statements were exposed. Keeping your mouth shut about controversial issues seems safer.

Comment author: multifoliaterose 15 August 2010 05:26:31PM *  6 points [-]

I believe that there are contexts in which the right thing to do is to speak what one believes to be true even if doing so damages public relations.

These things need to be decided on a case-by-case basis. There's no royal road to instrumental rationality.

As I say here, in the present context, a very relevant issue in my mind is that Eliezer & co. have not substantiated their most controversial claims with detailed evidence.

It's clichéd to say so, but extraordinary claims require extraordinary evidence. A claim of the type "I'm the most important person alive" is statistically many orders of magnitude more likely to be made by a poser than by somebody for whom the claim is true. Casual observers are rational to believe that Eliezer is a poser. The halo effect problem is irrational, yes, but human irrationality must be acknowledged; it's not the sort of thing that goes away if you pretend that it's not there.

I don't believe that Eliezer's outlandish and unjustified claims contribute to rational discourse. I believe that Eliezer's outlandish and unjustified claims lower the sanity waterline.

To summarize, I believe that in this particular case the costs that you allude to are outweighed by the benefits.

Comment author: timtyler 15 August 2010 06:03:07PM *  3 points [-]

Come on - he never actually claimed that.

Besides, many people have inflated views of their own importance. Humans are built that way. For one thing, it helps them get hired, if they claim that they can do the job. It is sometimes funny - but surely not a big deal.

Comment author: Larks 17 August 2010 10:11:48PM *  6 points [-]

This page is now the 8th result for a Google search for 'existential risk' and the 4th result for 'singularity existential risk'.

Regardless of the effect SIAI may have had on the public image of existential risk reduction, it seems this is unlikely to be helpful.

Edit: it is now 7th and first, respectively. This is plusungood.

Comment author: multifoliaterose 17 August 2010 10:15:46PM 1 point [-]

I disagree. I think that my post does a good job of highlighting the fact that public aversion to thinking about existential risk reduction is irrational.

Comment author: Larks 17 August 2010 10:25:23PM *  5 points [-]

The post (as I parse it) has two points:

  • The public are irrational with respect to existential risk
  • Donating to SIAI has negative expected impact on existential risk reduction

The former is fine, but the latter seems more likely to damage SIAI and existential risk reduction. It's not desirable that when someone does their initial Google search, one of the first things they find is infighting and attacks on SIAI as essentially untrustworthy. Rather, they should find actual articles about the singularity, the dangers it poses, and the work being done.

As you so accurately quote Yvain, for the average reader this is not an intelligent critique of the public relations of SingInst. This is 'boo Eliezer!'

Comment author: multifoliaterose 17 August 2010 10:36:21PM 1 point [-]

The former is fine, but the latter seems more likely to damage SIAI and existential risk reduction. It's not desirable that when someone does their initial Google search, one of the first things they find is infighting and attacks on SIAI as essentially untrustworthy. Rather, they should find actual articles about the singularity, the dangers it poses, and the work being done.

I agree that this article is not one of the first that should appear when people Google the singularity or existential risk. I'm somewhat perplexed as to how this happened.

Despite this issue, I think that the benefits of my posting on this topic outweigh the costs. I believe that ultimately whether or not humans avoid global catastrophic risk depends much more on people's willingness to think about the topic than it does on SIAI's reputation. I don't believe that my post will lower readers' interest in thinking about existential risk.

Comment author: jimrandomh 18 August 2010 03:32:02AM 8 points [-]

This is partially because Google gives a ranking boost to things it sees as recent, so it may not stay that well ranked.

Comment author: multifoliaterose 18 August 2010 04:09:40AM 0 points [-]

Yes, good point.

Comment author: wedrifid 15 August 2010 09:05:06AM *  12 points [-]

I like your post. I wouldn't go quite so far as to ascribe outright negative utility to SIAI donations - I believe you underestimate just how much potential social influence money provides. I suspect my conclusion there would approximately mirror Vassar's.

It would not be at all surprising if the founders of Less Wrong have a similar unusual abundance of the associated with Aspergers Syndrome. I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.

(Typo: I think you meant to include 'traits' or similar in there.)

While Eliezer occasionally takes actions that seem clearly detrimental to his cause, I do suggest that Eliezer is at least in principle aware of the dynamics you discuss. His alter ego "Harry Potter" has had similar discussions with his Draco in his fanfiction.

Also note that appearing too sophisticated would be extremely dangerous. If Eliezer or SIAI gains the sort of status and credibility you would like them to seek, they open themselves to threats from governments and paramilitary organisations. If you are trying to take over the world it is far better to be seen as an idealistic do-gooder who writes fanfic than as a political power player. You don't want the <TLA of choice> to raid your basement, kill you, and run your near-complete FAI with the values of the TLA. Obviously there is some sort of balance to be reached here...

Comment author: timtyler 15 August 2010 09:28:13AM *  1 point [-]

If you are trying to take over the world it is far better to be seen as an idealistic do-gooder who writes fanfic than as a political power player.

They took down the "SIAI will not enter any partnership that compromises our values" "commitment" from their web site. Maybe they are more up for partnerships these days.

Comment deleted 15 August 2010 09:14:06AM [-]
Comment author: Eliezer_Yudkowsky 18 August 2010 02:28:13PM 17 points [-]

I don't mean to dismiss the points of this post, but all of those points do need to be reinterpreted in light of the fact that I'd rather have a few really good rationalists as allies than a lot of mediocre rationalists who think "oh, cool" and don't do anything about it. Consider me as being systematically concerned with the top 5% rather than the average case. However, I do still care about things like propagation velocities because that affects what population size the top 5% is 5% of, for example.

Comment author: pnrjulius 12 June 2012 03:05:19AM -1 points [-]

We live in a democracy! How can you not be concerned with 95% of the population? They rule you.

If we lived in some sort of meritocratic aristocracy, perhaps then we could focus our efforts on only the smartest 5%.

As it is, it's the 95% who decide what happens in our elections, and it's our elections that decide what rules get made and what projects get funded. The President of the United States could unleash nuclear war at any time. He's not likely to, but he could. And if he did push that button, it's over, for all of us. So we need to be very concerned about who is in charge of that button, and that means we need to be very concerned about the people who elect him.

Right now, 46% of them think the Earth is 6000 years old. This worldview comes with a lot of other anti-rationalist baggage like faith and the Rapture. And it runs our country. Is it just me, or does this seem like a serious problem, one that we should probably be working to fix?

Comment author: ChristianKl 18 August 2010 03:09:18PM 2 points [-]

If we think existential risk reduction is important, then we should care about whether politicians think that existential risk reduction is a good idea. I don't think that a substantial number of US congressmen are what you consider to be good rationalists.

Comment author: Eliezer_Yudkowsky 18 August 2010 04:28:20PM 12 points [-]

For Congress to implement good policy in this area would be performance vastly exceeding what we've previously seen from them. They called prediction markets terror markets. I expect more of the same, and expect to have little effect on them.

Comment author: Psy-Kosh 18 August 2010 08:58:00PM 9 points [-]

The flipside though is if we can frame the issue in a way that there's no obvious Democrat or Republican position, then we can, as Robin Hanson puts it, "pull the rope sideways".

The very fact that much of the existential risk stuff is "strange sounding" relative to what most people are used to really thinking about in the context of political arguments might thus act as a positive.

Comment author: multifoliaterose 18 August 2010 04:17:02PM *  6 points [-]

Agree with the points of both ChristianKl and XiXiDu.

As for really good rationalists, I have the impression that even when it comes to them you inadvertently alienate them with higher than usual frequency on account of saying things that sound quite strange.

I think (but am not sure) that you would benefit from spending more time understanding what goes on in neurotypical people's minds. This would carry not only social benefits (which you may no longer need very much at this point) but also epistemological benefits.

However, I do still care about things like propagation velocities because that affects what population size the top 5% is 5% of, for example.

I'm encouraged by this remark.

Comment author: XiXiDu 18 August 2010 02:58:32PM 9 points [-]

Somewhere you said that you are really happy to be finally able to concentrate directly on the matters you deem important and don't have to raise money anymore. This obviously worked, so you won't have to change anything. But if you ever need to raise more money for a certain project, my question is how much of the money you already get comes from people you would consider mediocre rationalists?

I'm not sure if you expect to ever need a lot of money for a SIAI project, but if you solely rely on those few really good rationalists then you might have a hard time in that case.

People like me will probably always stay on your side, whether or not you tell them they are idiots. But I'm not sure that would be enough in a scenario where donations are important.

Comment author: Perplexed 15 August 2010 06:22:57PM 10 points [-]

I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.

I find it impossible to believe that the author of Harry Potter and the Methods of Rationality is oblivious to the first impression he creates. However, I can well believe that he imagines it to be a minor handicap which will fade in importance with continued exposure to his brilliance (as was the fictional case with HP). The unacknowledged problem in the non-fictional case, of course, is in maintaining that continued exposure.

I am personally currently skeptical that the singularity represents existential risk. But having watched Eliezer completely confuse and irritate Robert Wright, and having read half of the "debate" with Hanson, I am quite willing to hypothesize that the explanation of what the singularity is (and why we should be nervous about it) ought to come from anybody but Eliezer. He speaks and writes clearly on many subjects, but not that one.

Perhaps he would communicate more successfully on this topic if he tried a dialog format. But it would have to be one in which his constructed interlocutors are convincing opponents, rather than straw men.

Comment author: timtyler 15 August 2010 06:31:09PM *  -1 points [-]

It depends on exactly what you mean by "existential risk". Development will likely - IMO - create genetic and phenotypic takeovers in due course - as the bioverse becomes engineered. That will mean no more "wild" humans.

That is something which some people seem to wail and wave their hands about - talking about the end of the human race.

The end of earth-originating civilisation seems highly unlikely to me too - which is not to say that the small chance of it is not significant enough to discuss.

Eliezer's main case for that appears to be on http://lesswrong.com/lw/y3/value_is_fragile/

I think that document is incoherent.

Comment author: Vladimir_M 15 August 2010 06:58:39PM *  46 points [-]

I am a relative newbie commenter here, and my interest in this site has so far been limited to using it as a fun forum where it's possible to discuss all kinds of sundry topics with exceptionally smart people. However, I have read a large part of the background sequences, and I'm familiar with the main issues of concern here, so even though it might sound impertinent coming from someone without any status in this community, I can't resist commenting on this article.

To put it bluntly, I think the main point of the article is, if anything, an understatement. Let me speak from personal experience. From the perspective of this community, I am a sort of person who should be exceptionally easy to get interested and won over to its cause, considering both my intellectual background and my extreme openness to contrarian viewpoints and skepticism towards official academic respectability as a criterion of truth and intellectual soundness. Yet, to be honest, even though I find a lot of the writing and discussion here extremely interesting, and the writings of Yudkowsky (in addition to others such as Bostrom, Hanson, etc.) have convinced me that technology-related existential risks should be taken much more seriously than they presently are, I still keep encountering things in this community that set off various red flags, which are undoubtedly taken by many people as a sign of weirdness and crackpottery, and thus alienate a huge portion of the potential quality audience.

Probably the worst such example I've seen was the recent disturbance in which Roko was subjected to abuse that made him leave. When I read the subsequent discussions, it surprised me that virtually nobody here appears to be aware what an extreme PR disaster it was. Honestly, for someone unfamiliar with this website who has read about that episode, it would be irrational not to conclude that there's some loony cult thing going on here, unless he's also presented with enormous amounts of evidence to the contrary in the form of a selection of the best stuff that this site has to offer. After these events, I myself wondered whether I want to be associated with an outlet where such things happen, even just as an occasional commenter. (And not to even mention that Roko's departure is an enormous PR loss in its own right, in that he was one of the few people here who know how to write in a way that's interesting and appealing to people who aren't hard-core insiders.)

Even besides this major PR fail, I see many statements and arguments here that may be true, or at least not outright unreasonable, but should definitely be worded more cautiously and diplomatically if they're given openly for the whole world to see. I'm not going to get into details of concrete examples -- in particular, I do not concur unconditionally with any of the specific complaints from the above article -- but I really can't help but conclude that lots of people here, including some of the most prominent individuals, seem oblivious as to how broader audiences, even all kinds of very smart, knowledgeable, and open-minded people, will perceive what they write and say. If you want to have a closed inner circle where specific background knowledge and attitudes can be presumed, that's fine -- but if you set up a large website attracting lots of visitors and participants to propagate your ideas, you have to follow sound PR principles, or otherwise its effect may well end up being counter-productive.

Comment author: Will_Newsome 16 August 2010 09:48:34AM 2 points [-]

Looking at my own posts I see a lot of this problem; that is, the problem of addressing far too small an audience. Thank you for pointing it out.

Comment author: Kevin 16 August 2010 10:01:47AM *  9 points [-]

What are the scenarios where someone unfamiliar with this website would hear about Roko's deleted post?

I suppose it could be written about dramatically (because it was dramatic!) but I don't think anyone is going to publish such an account. It was bad from the perspective of most LWers -- a heuristic against censorship is a good heuristic.

This whole thing is ultimately a meta discussion about moderation policy. Why should this discussion about banned topics be that much more interesting than a post on Hacker News that is marked as dead? Hacker News generally doesn't allow discussion of why stories were marked dead. The moderators are anonymous and have unquestioned authority.

If Less Wrong had a mark as dead function (on HN unregistered users don't see dead stories, but registered users can opt-in to see them), I suspect Eliezer would have killed Roko's post instead of deleting it to avoid the concerns of censorship, but no one has written that LW feature yet.

As a solid example of what a not-PR disaster it was, I doubt that anyone at the Singularity Summit who isn't a regular Less Wrong reader (the majority of attendees) has heard that Eliezer deleted a post. It's just not the kind of thing that actually makes a PR disaster... honestly, if this were a PR issue it might be a net positive, because it would lead some people to hear of LW who otherwise would never have heard of it. Please don't take that as a reason to make this a PR issue.

Eliezer succeeded in the sense that it is very unlikely that people in the future on Less Wrong are going to make stupid emotionally abhorrent posts about weird decision theory torture scenarios. He failed in that he could have handled the situation better.

If anyone would like to continue talking about Less Wrong moderation policy, the place to talk about it is the Meta Thread (though you'd probably want to make a new one (good for +[20,50] karma!) instead of discussing it in an out of season thread)

Comment author: homunq 31 August 2010 03:37:26PM 6 points [-]

As someone who had over 20 points of karma obliterated for reasons I don't fully understand, for having posted something which apparently strayed too close to a Roko post which I never read in its full version, I can attest that further and broader discussion of the moderation policy would be beneficial. I still don't really know what happened. Of course I have vague theories, and I've received a terse and unhelpful response from EY (a link to a horror story about a "riddle" which kills - a good story which I simply don't accept as a useful parable of reality), but nothing clear. I do not think that I have anything of outstanding value to offer this community, but I suspect that Roko, little I, and the half-dozen others like us who probably exist are a net loss to the community if driven away, especially if not being seen as cultlike is valuable.

Comment author: Airedale 31 August 2010 05:49:37PM *  3 points [-]

As someone who had over 20 points of karma obliterated for reasons I don't fully understand, for having posted something which apparently strayed too close to a Roko post which I never read in its full version, I can attest that further and broader discussion of the moderation policy would be beneficial.

I believe you lost 20 karma because you had 2 net downvotes on your post at the time it was deleted (and those votes still affect your total karma, although the post cannot be further upvoted or downvoted). The loss of karma did not result directly from the deletion of the post, except for the fact that the deletion froze the post’s karma at the level it was at when it was deleted.

I only looked briefly at your post, don't remember very much about it, and am only one reader here, but from what I recall, your post did not seem so obviously good that it would have recovered from those two downvotes. Indeed, my impression is that it's more probable that if the post had been left up longer, it would have been even more severely downvoted than it was at the time of deletion, as is the case with many people's first posts. I'm not very confident about that, but there certainly would have been that risk.

All that being said, I can understand if you would rather have taken the risk of an even greater hit to karma if it would have meant that people were able to read and comment on your post. I can also sympathize with your desire for a clearer moderation policy, although unless EY chose to participate in the discussion, I don’t think clearer standards would emerge, because it’s ultimately EY’s call whether to delete a post or comment. (I think there are a couple others with moderation powers, but it’s my understanding that they would not independently delete a non-troll/spam post).

Comment author: homunq 01 September 2010 12:58:19PM 3 points [-]

I think it was 30 karma points (3 net downvotes), though I'm not sure. And I believe that it is entirely possible that some of those downvotes (more than 3, because I had at least 3 upvotes) were for alleged danger, not for lack of quality. Most importantly, if the post hadn't been deleted, I could have read the comments which presumably would have given me some indication of the reason for those downvotes.
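For readers puzzled by the numbers in this exchange, here is a minimal sketch of the arithmetic apparently being assumed. The multiplier is inferred only from the figures quoted here (20 karma for 2 net downvotes, 30 for 3), not from any official documentation: net votes on top-level posts appear to count for ten karma apiece, while comment votes count for one.

    POST_VOTE_WEIGHT = 10     # inferred from this thread, not an official figure
    COMMENT_VOTE_WEIGHT = 1

    def karma_change(net_votes, is_top_level_post):
        # Karma gained or lost from the net votes on a single contribution.
        weight = POST_VOTE_WEIGHT if is_top_level_post else COMMENT_VOTE_WEIGHT
        return net_votes * weight

    print(karma_change(-2, True))   # -20, matching Airedale's estimate
    print(karma_change(-3, True))   # -30, matching homunq's recollection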

Comment author: [deleted] 15 August 2010 07:19:48PM 11 points [-]

Agreed.

One good sign here is that LW, unlike most other non-mainstream organizations, doesn't really function like a cult. Once one person starts being critical, critics start coming out of the woodwork. I have my doubts about this place sometimes too, but it has a high density of knowledgeable and open-minded people, and I think it has a better chance than anyone of actually acknowledging and benefiting from criticism.

I've tended to overlook the weirder stuff around here, like the Roko feud -- it got filed under "That's confusing and doesn't make sense" rather than "That's an outrage." But maybe it would be more constructive to change that attitude.

Comment author: timtyler 17 August 2010 05:52:40PM *  1 point [-]

Singularitarianism, transhumanism, cryonics, etc. probably qualify as cults under at least some of the meanings of the term (http://en.wikipedia.org/wiki/Cult). Cults do not necessarily lack critics.

Comment author: WrongBot 17 August 2010 06:37:37PM 2 points [-]

The Wikipedia page on Cult Checklists includes seven independent sets of criteria for cult classification, provided by anti-cult activists who have strong incentives to cast as wide a net as possible. Singularitarianism, transhumanism, and cryonics fit none of those lists. In most cases, it isn't even close.

Comment author: thomblake 17 August 2010 07:04:24PM *  12 points [-]

I disagree with your assessment. Let's just look at LW for starters.

Eileen Barker:

  1. It would be hard to make a case for this one; a tendency to congregate geographically (many people joining the SIAI visiting fellows, and having meetups) is hardly cutting oneself off from others; however, there is certainly some tendency to cut ourselves off socially - note for example the many instances of folks worrying they will not be able to find a sufficiently "rationalist" significant other.
  2. Huge portions of the views of reality of many people here have been shaped by this community, and Eliezer's posts in particular; many of those people cannot understand the math or argumentation involved but trust Eliezer's conclusions nonetheless.
  3. Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.
  4. Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.
  5. Nope. Though some would credit Eliezer with trying to become or create God.
  6. Obviously. Less Wrong is quite focused on rationality (though that should not be odd) and Eliezer is rather... driven in his own overarching goal.

Based on that, I think Eileen Barker's list would have us believe LW is a likely cult.

Shirley Harrison:

  1. I'm not sure if 'from above' qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.
  2. While 'revealed' is not necessarily accurate in some senses, the "Sequences" are quite long and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.
  3. Nope
  4. Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.
  5. This one is questionable. But surely Eliezer is trying the advanced technique of sharing part of his power so that we will begin to see the world the way he does.
  6. There is volunteer effort at LW, and posts on LW are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.
  7. No sign of this
  8. "Exclusivity - 'we are right and everyone else is wrong'". Very yes.

Based on that, I think Shirley Harrison's list would have us believe LW is a likely cult.

Similar analysis using the other lists is left as an exercise for the reader.

Comment author: ciphergoth 17 August 2010 07:34:39PM 2 points [-]

Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

I've not seen this happening - examples?

Comment author: JGWeissman 17 August 2010 07:43:08PM 7 points [-]

I think it would be more accurate to say that anyone who after reading the sequences still disagrees, but is unable to explain where they believe the sequences have gone wrong, is not worth arguing with.

With this qualification, it no longer seems like evidence of being a cult.

Comment author: WrongBot 17 August 2010 08:25:15PM *  14 points [-]

On Eileen Barker:

Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.

I believe that most LW posters are not signed up for cryonics (myself included), and there is substantial disagreement about whether it's a good idea. And that disagreement has been well received by the "cult", judging by the karma scores involved.

Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.

Theism has been discussed. It is wrong. But Robert Aumann's work is still considered very important; theists are hardly dismissed as "satanic," to use Barker's word.

Of Barker's criteria, 2-4 of 6 apply to the LessWrong community, and only one ("Leaders and movements who are unequivocally focused on achieving a certain goal") applies strongly.


On Shirley Harrison:

I'm not sure if 'from above' qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.

I can't speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.

While 'revealed' is not necessarily accurate in some senses, the "Sequences" are quite long and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

No. The sequences are not infallible, they have never been claimed as such, and intelligent disagreement is generally well received.

Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.

What you describe is a preposterous exaggeration, not "[t]otalitarianism and alienation of members from their families and/or friends."

There is volunteer effort at LW, and posts on LW are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.

Any person who promotes a charity at which they work is pushing a cult, by this interpretation. Eliezer isn't "lining his own pockets"; if someone digs up the numbers, I'll donate $50 to a charity of your choice if it turns out that SIAI pays him a salary disproportionally greater (2 sigmas?) than the average for researchers at comparable non-profits.

So that's 2-6 of Harrison's checklist items for LessWrong, none of them particularly strong.

My filters would drop LessWrong into the "probably not a cult" category, based on those two standards.

Comment author: Jack 18 November 2010 08:23:06PM 3 points [-]

I can't speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.

What exactly are Eliezer's qualifications supposed to be?

Comment author: jimrandomh 18 November 2010 08:38:20PM 2 points [-]

What exactly are Eliezer's qualifications supposed to be?

You mean, "What are Eliezer's qualifications?" Phrasing it that way makes it sound like a rhetorical attack rather than a question.

To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.

Comment author: XiXiDu 18 November 2010 09:03:27PM *  0 points [-]

To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.

How influential are his publications if they could not convince Ben Goertzel (SIAI/AGI researcher), someone who has read Yudkowsky's publications and all of the LW sequences? You could argue that he and other people don't have the smarts to grasp Yudkowsky's arguments, but who does? Either Yudkowsky is so smart that some academics are unable to appreciate his work or there is another problem. How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?

The problem here is that telling someone that Yudkowsky spent a lot of time thinking and writing about something is not a qualification. Further it does not guarantee that he would acknowledge and welcome the contributions of others who disagree.

Comment author: WrongBot 18 November 2010 10:58:01PM 3 points [-]

Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.

For what it's worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He's also responsible for coining "Seed AI".

Comment author: jimrandomh 18 November 2010 09:36:41PM *  5 points [-]

The motivated cognition here is pretty thick. Writing is influential when many people are influenced by it. It doesn't have to be free of people who disagree with it to be influential, and it doesn't even have to be correct.

How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?

Level up first. I can't evaluate physics research, so I just accept that I can't tell which of it is correct; I don't try to figure it out from the politics of physicists arguing with each other, because that doesn't work.

Comment author: Jack 18 November 2010 09:44:05PM *  7 points [-]

I'm definitely not trying to attack anyone (and you're right my comment could be read that way). But I'm also not just curious. I figured this was the answer. Lots of time spent thinking, writing and producing influential publications on FAI is about all the qualifications one can reasonably expect (producing a provable mathematical formalization of friendliness is the kind of thing no one is qualified to do before they do it and the AI field in general is relatively new and small). And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it. But the effort to address the friendliness issue seems way too focused on him and the people around him. You shouldn't expect any one person to solve a Hard problem. Insight isn't that predictable especially when no one in the field has solved comparable problems before. Maybe Einstein was the best bet to formulate a unified field theory but a) he never did and b) he had actually had comparable insights in the past. Part of the focus on Eliezer is just an institutional and financial thing, but he and a lot of people here seem to encourage this state of affairs.

No one looks at open problems in other fields this way.

Comment author: XiXiDu 19 November 2010 12:57:25PM 1 point [-]

...producing a provable mathematical formalization of friendliness [...] And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it.

I haven't seen any proof of his math skills that would justify this statement. By what evidence have you arrived at the conclusion that he can do it at all, or even approach it? The sequences and the SIAI publications certainly show that he was able to compile a bunch of existing ideas into a coherent framework of rationality, yet there is not much novelty to be found anywhere.

Comment author: Vladimir_Nesov 18 November 2010 10:09:41PM *  5 points [-]

No one looks at open problems in other fields this way.

Yes, the situation isn't normal or good. But this isn't a balanced comparison, since we don't currently have a field; too few people understand the problem or have seriously thought about it. This is gradually changing, and I expect it will be visibly less of a problem in another 10 years.

Comment author: gwern 18 November 2010 06:29:41PM *  6 points [-]

Eliezer was compensated $88,610 in 2008 according to the Form 990 filed with the IRS and which I downloaded from GuideStar.

Wikipedia tells me that the median 2009 income in Redwood where Eliezer lives is $69,000.

(If you are curious, Tyler Emerson in Sunnyvale (median income 88.2k) makes 60k; Susan Fonseca-Klein also in Redwood was paid 37k. Total employee expenses is 200k, but the three salaries are 185k; I don't know what accounts for the difference. The form doesn't seem to say.)

Comment author: Sniffnoy 18 August 2010 12:04:10AM 3 points [-]

No. The sequences are not infallible, they have never been claimed as such, and intelligent disagreement is generally well received.

In particular, there seems to be a lot of disagreement about the metaethics sequence, and to a lesser extent about timeless physics.

Comment author: Perplexed 18 November 2010 07:10:36PM *  2 points [-]

the "Sequences" are quite long and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

I have to disagree that this "smugness" even remotely reaches the level that is characteristic of a cult.

As someone who has frequently expressed disagreement with the "doctrine" here, I have occasionally encountered both reactions that you mention. But those sporadic reactions are not much of a barrier to criticism - any critic who persists here will eventually be engaged intelligently and respectfully, assuming that the critic tries to achieve a modicum of respect and intelligence on his own part. Furthermore, if the critic really engages with what his interlocutors here are saying, he will receive enough upvotes to more than repair the initial damage to his karma.

Comment author: David_Gerard 18 November 2010 09:16:46PM *  2 points [-]

Yes. LessWrong is not in fact hidebound by groupthink. I have lots of disagreement with the standard LessWrong belief cluster, but I get upvotes if I bother to write well, explain my objections clearly and show with my reference links that I have some understanding of what I'm objecting to. So the moderation system - "vote up things you want more of" - works really well, and I like the comments here.

This has also helped me control my unfortunate case of asshole personality disorder elsewhere when I see someone being wrong on the Internet. It's amazing what you can get away with if you show your references.

Comment author: Zvi 31 August 2010 09:01:09PM 5 points [-]

I found this amusing because by those standards, cults are everywhere. For example, I run a professional Magic: The Gathering team and am pretty sure I'm not a cult leader. Although that does sound kind of neat. Observe:

Eileen Barker:

  1. When events are close we spend a lot of time socially separate from others so as to develop and protect our research. On occasion 'Magic colonies' form for a few weeks. It's not substantially less isolating than what SIAI does. Check.
  2. I have imparted huge amounts of belief about a large subset of our world, albeit a smaller one than Eliezer is working on. Partial check.
  3. I make reasonably important decisions for my teammates - on the level of the cryonics decision, if cryonics isn't worthwhile - and do what I need to do to make sure they follow them far more than they would without me. Check.
  4. We identify other teams as 'them' reasonably often, and certain other groups are certainly viewed as the enemy. Check.
  5. Nope - an even fainter argument than for Eliezer.
  6. Again, yes, obviously.

Shirley Harrison:

  1. I claim a special mission that I am uniquely qualified to fulfill. Not as important a one, but still. Check.
  2. My writings count at least as much as the Sequences. Check.
  3. Not intentionally, but often new recruits have little idea what to expect. Check plus.
  4. Totalitarian rule structure, and those who game too much often alienate friends and family. I've seen it many times, and it's far less of a cheat than saying that you'll be alienated from them when they are all dead and you're not because you got frozen. Check.
  5. I make people believe what I want with the exact same techniques we use here. If anything, I'm willing to use slightly darker arts. Check.
  6. We make the lower-level people do the grunt work, sure. Check.
  7. Based on some of the deals I've made, someone looking to demonize could make a weak claim. Check plus.
  8. Exclusivity. In spades. Check.

I'd also note that the exercise left to the reader is much harder, because the other checklists are far harder to fudge.

Comment author: cousin_it 17 August 2010 07:55:41PM *  12 points [-]

That was... surprisingly surprising. Thank you.

For reasons like those you listed, and also out of some unverbalized frustration, in the last week I've been thinking pretty seriously whether I should leave LW and start hanging out somewhere else online. I'm not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.

What other places on the Net are there for someone like me? Hacker News and Reddit look like dumbed-down versions of LW, so let's not talk about those. I solved a good bit of Project Euler once; the place is tremendously enjoyable but quite narrow-focused. The n-Category Cafe is, sadly, coming to a halt. Math Overflow looks wonderful and this question by Scott Aaronson nearly convinced me to drop everything and move there permanently. The Polymath blog is another fascinating place that is so high above LW that I feel completely underqualified to join. Unfortunately, none of these are really conducive to posting new results, and moving into academia IRL is not something I'd like to do (I've been there, thanks).

Any other links? Any advice? And please, please, nobody take this comment as a denigration of LW or a foot-stomping threat. I love you all.

Comment author: David_Gerard 18 November 2010 09:20:45PM 1 point [-]

I love your posts, so having seen this comment I'm going to try to write up my nascent sequence on memetic colds, aka sucker shoots, just for you. (And everyone.)

Comment author: cousin_it 18 November 2010 11:24:41PM 1 point [-]

Thanks!

Comment author: DanielVarga 21 August 2010 08:22:55PM 1 point [-]

I'm not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.

Same for me. My interests are more similar to your interests than to classic LW themes. There are probably many others here in the same situation. But I hope that the list of classic LW themes is not set in stone. I think people like us should try to broaden the spectrum of LW. If this attempt fails, please send me the address of the new place where you hang out online. :) But I am optimistic.

Comment author: [deleted] 21 August 2010 06:59:24PM 1 point [-]

"Leaving" LW is rather strong. Would that mean not posting? Not reading the posts, or the comments? Or just reading at a low enough frequency that you decouple your sense of identity from LW?

I've been trying to decide how best to pump new life into The Octagon section of the webcomic collective forum Koala Wallop. The Octagon started off when Dresden Codak was there, and became the place for intellectual discussion and debate. The density of math and computer theoretic enthusiasts is an order of magnitude lower than here or the other places you mentioned, and those who know such stuff well are LW lurkers or posters too. There was an overkill of politics on The Octagon, the levels of expertise on subjects are all over the spectrum, and it's been slowing down for a while, but I think a good push will revive it. The main thing is that it lives inside of a larger forum, which is a silly, fun sort of community. The subforum simply has a life of its own.

Not that I claim any ownership over it, but:

I'm going to try to more clearly brand it as "A friendly place to analytically discuss fantastic, strange or bizarre ideas."

Comment author: Kevin 19 August 2010 08:08:18AM 4 points [-]

Make a top level post about the kind of thing you want to talk about. It doesn't have to be an essay, it could just be a question ("Ask Less Wrong") or a suggested topic of conversation.

Comment author: John_Baez 19 August 2010 07:58:44AM *  15 points [-]

My new blog "Azimuth" may not be mathy enough for you, but if you like the n-Category Cafe, it's possible you may like this one too. It's more focused on technology, environmental issues, and the future. Someday soon you'll see an interview with Eliezer! And at some point we'll probably get into decision theory as applied to real-world problems. We haven't yet.

(I don't think the n-Category Cafe is "coming to a halt", just slowing down - my change in interests means I'm posting a lot less there, and Urs Schreiber is spending most of his time developing the nLab.)

Comment author: Vladimir_Nesov 19 August 2010 04:23:32PM 2 points [-]
Comment author: cousin_it 19 August 2010 08:38:29AM *  3 points [-]

Wow.

Hello.

I didn't expect that. It feels like summoning Gauss, or something.

Thank you a lot for twf!

Comment author: JGWeissman 17 August 2010 07:34:06PM *  2 points [-]

This would be easier to parse if you quoted the individual criteria you are evaluating right before the evaluation, eg:

1.

A movement that separates itself from society, either geographically or socially;

It would be hard to make a case for this one; a tendency to congregate geographically (many people joining the SIAI visiting fellows, and having meetups) is hardly cutting oneself off from others; however, there is certainly some tendency to cut ourselves off socially - note for example the many instances of folks worrying they will not be able to find a sufficiently "rationalist" significant other.

Comment author: timtyler 17 August 2010 06:52:49PM *  -2 points [-]

That's the pejorative usage. There is also:

"Cult also commonly refers to highly devoted groups, as in:

  • Cult, a cohesive group of people devoted to beliefs or practices that the surrounding culture or society considers to be outside the mainstream

    • Cult of personality, a political leader and his following, voluntary or otherwise
    • Destructive cult, a group which exploits and destroys its members or even non-members
    • Suicide cult, a group which practices mass self-destruction, as occurred at Jonestown
    • Political cult, a political group which shows cult-like features"

http://en.wikipedia.org/wiki/Cults_of_personality

http://en.wikipedia.org/wiki/Cult_following

http://en.wikipedia.org/wiki/Cult_%28religious_practice%29

Comment author: prase 16 August 2010 04:01:47PM 21 points [-]

I agree completely. I still read LessWrong because I am a relatively long-time reader, and thus I know that most of the people here are sane. Otherwise, I would conclude that there is some cranky process going on here. Still, the Roko affair caused me to significantly lower my probabilities assigned to SIAI success and forced me to seriously consider the hypothesis that Eliezer Yudkowsky went crazy.

By the way, I have a slightly disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality, as the blog's header proudly states, while instead the posts often discuss a relatively narrow list of topics which are only tangentially related to rationality. E.g. cryonics, AI stuff, evolutionary psychology, Newcomb-like scenarios.

Comment author: Morendil 16 August 2010 04:26:50PM 4 points [-]

By the way, I have a slightly disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality

Part of that mission is to help people overcome the absurdity heuristic, and to help them think carefully about topics that normally trigger a knee-jerk reflex of dismissal on spurious grounds; it is in this sense that cryonics and the like are more than tangentially related to rationality.

I do agree with you that too much of the newer material keeps returning to those few habitual topics that are "superstimuli" for the heuristic. This perhaps prevents us from reaching out to newer people as effectively as we could. (Then again, as LW regulars we are biased in that we mostly look at what gets posted, when what may matter more for attracting and keeping new readers is what gets promoted.)

A site like YouAreNotSoSmart may be more effective in introducing these ideas to newcomers, to the extent that it mostly deals with run-of-the-mill topics. What makes LW valuable, and what YANSS lacks, is constructive advice for becoming less wrong.

Comment author: prase 16 August 2010 05:15:48PM 1 point [-]

Thanks for the link; I hadn't known about YANSS.

As for overcoming the absurdity heuristic, it would be more helpful to illustrate its inappropriateness (is this a real word?) on thoughts which are seemingly absurd while having a lot of data proving them right, rather than on predictions like the Singularity, which are mostly based on ... just different heuristics.

Comment author: Rain 15 August 2010 02:28:19PM *  14 points [-]

Have we seen any results (or even progress) come from the SIAI Challenge Grants, which included a Comprehensive Singularity FAQ and many academic papers dealing directly with the topics of concern? These should hopefully be less easy to ridicule and provide an authoritative foundation after the peer review process.

Edit: And if they fail to come to fruition, then we have some strong evidence to doubt SIAI's effectiveness.

Comment author: Eneasz 24 August 2010 05:46:22PM 30 points [-]

informing SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible

I believe you are completely ignoring the status-demolishing effects of hypocrisy and insincerity.

When I first started watching Blogging Heads discussions featuring Eliezer I would often have moments where I held my breath thinking "Oh god, he can't address that directly without sounding nuts, here comes the abhorrent back-pedaling and waffling". Instead he met it head on with complete honesty and did so in a way I've never seen other people able to pull off - without sounding nuts at all. In fact, sounding very reasonable. I've since updated enough that I no longer wince and hold my breath; I smile and await the triumph.

If he had waffled, as most people (and nearly all politicians) do, and presented an argument that he doesn't honestly hold but that is more publicly acceptable, I'd feel disappointed and a bit sickened, and I'd tune out the rest of what he has to say.

Hypocrisy is transparent. People (including neurotypical people) very easily see when others are making claims they don't personally believe, and they universally despise such actions. Politicians and lawyers are among the most hated groups in modern societies, in large part because of this hypocrisy. They are only tolerated because they are seen as a necessary evil.

Right now, People Working To Reduce Existential Risk are not seen as necessary. So it's highly unlikely that hypocrisy among them would be tolerated. They would repel anyone currently inclined to help, and their hypocrisy wouldn't draw in any new support. The answer isn't to try to deceive others about your true beliefs, it is to help make those beliefs more credible among the incredulous.

I feel that anyone advocating for public hypocrisy among the SIAI staff is working to disintegrate the organization (even if unintentionally).

Comment author: pnrjulius 12 June 2012 02:59:28AM 2 points [-]

On the other hand... people say they hate politicians and then vote for them anyway.

So hypocrisy does have upsides, and maybe we shouldn't dismiss it so easily.

Comment author: Carinthium 23 November 2010 09:19:06AM *  0 points [-]

The above is a good comment, but 26 karma? How did it deserve that?

Comment author: wnoise 24 November 2010 02:06:58AM 1 point [-]

Karma (despite the name) has very little to do with "deserve". All it really means is that 26 (now 25) more people desire more content like this than desire less content like this.

Comment author: Carinthium 24 November 2010 02:26:46AM -1 points [-]

On the other hand, it is a good thing to shift the karma system to better resemble a system based on merit - i.e. they should vote down the comment up to a point, because although it is a good one it doesn't deserve its very high score.

Comment author: wnoise 24 November 2010 05:19:53PM *  6 points [-]

Why should something that is mildly liked by many not have a higher score than something that is highly liked by fewer?

In any case, it's rather hard to do. How do you propose to make your standards for a good comment the ones other people use? Each individual sets their own level at which they will up- or down-vote a comment or post. They can indeed take into account the current score of a post, but that does rather poorly as others come by and change it. Should the first guy who up-voted that check back and see if it is now too highly rated? That seems hardly worth his time. And pretty much by definition, the guy who voted it from 25 to 26 was happier with the score at 26 than at 25, so at least one person does think it was worth 26.

And what happens as norms change as to what a "good score" is as more comments have more eyeballs and voters looking at them?

Or we could all just take karma beyond "net positive" and "net negative" a whole lot less seriously.

Complaining about a given score and the choices of others certainly isn't likely to go much of anywhere.
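(To make the sum-versus-intensity point concrete, here is a minimal, purely illustrative Python sketch. It is not Less Wrong's actual karma implementation; the vote_if_score_seems_wrong heuristic and all of the numbers are invented for the example.)

```python
# Hypothetical sketch of a sum-of-votes karma system. Each vote is capped at
# +/-1, so the total reflects how many people liked something, not how much.

def karma(votes):
    """Net karma is simply the sum of +1 / -1 votes."""
    return sum(votes)

# A comment mildly liked by many outscores one strongly liked by a few.
mildly_liked_by_many = [+1] * 26        # 26 readers each give a single upvote
strongly_liked_by_few = [+1] * 5        # 5 enthusiastic readers can still only give +1 each

print(karma(mildly_liked_by_many))      # 26
print(karma(strongly_liked_by_few))     # 5

def vote_if_score_seems_wrong(current_score, target):
    """A voter who conditions on the current score: -1 if it looks too high,
    +1 if too low, abstain otherwise."""
    if current_score > target:
        return -1
    if current_score < target:
        return +1
    return 0

# Voters with different "deserved score" targets arrive one after another.
# No single voter's threshold controls the total; it drifts with the crowd.
score = 26
for target in [10, 30, 10, 30]:
    score += vote_if_score_seems_wrong(score, target)
print(score)                            # still 26: each correction is undone by the next voter
```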

Comment author: Eliezer_Yudkowsky 24 August 2010 06:30:57PM 17 points [-]

When I first started watching Bloggingheads discussions featuring Eliezer I would often have moments where I held my breath thinking "Oh god, he can't address that directly without sounding nuts, here comes the abhorrent back-pedaling and waffling". Instead he met it head on with complete honesty

I am so glad that someone notices and appreciates this.

I feel that anyone advocating for public hypocrisy among the SIAI staff is working to disintegrate the organization (even if unintentionally).

Agreed.

Comment author: komponisto 16 August 2010 12:38:38AM *  32 points [-]

I'll state my own experience and perception, since it seems to be different from that of others, as evidenced in both the post and the comments. Take it for what it's worth; maybe it's rare enough to be disregarded.

The first time I heard about SIAI -- which was possibly the first time I had heard the word "singularity" in the technological sense -- was whenever I first looked at the "About" page on Overcoming Bias, sometime in late 2006 or early 2007, where it was listed as Eliezer Yudkowsky's employer. To make this story short, the whole reason I became interested in this topic in the first place was because I was impressed by EY -- specifically his writings on rationality on OB (now known as the Sequences here on LW). Now of course most of those ideas were hardly original with him (indeed many times I had the feeling he was stating the obvious, albeit in a refreshing, enjoyable way) but the fact that he was able to write them down in such a clear, systematic, and readable fashion showed that he understood them thoroughly. This was clearly somebody who knew how to think.

Now, when someone has made that kind of demonstration of rationality, I just don't have much problem listening to whatever they have to say, regardless of how "outlandish" it may seem in the context of most human discourse. Maybe I'm exceptional in this respect, but I've never been under the impression that only "normal-sounding" things can be true or important. At any rate, I've certainly never been under that impression to such an extent that I would be willing to dismiss claims made by the author of The Simple Truth and A Technical Explanation of a Technical Explanation, someone who understands things like the gene-centered view of evolution and why MWI exemplifies rather than violates Occam's Razor, in the context of his own professional vocation!

I really don't understand what the difference is between me and the "smart people" that you (and XiXiDu) know. In fact maybe they should be more inclined to listen to EY and SIAI; after all, they probably grew up reading science fiction, in households where mild existential risks like global warming were taken seriously. Are they just not as smart as me? Am I unusually susceptible to following leaders and joining cults? (Don't think so.) Do I simply have an unusual personality that makes me willing to listen to strange-sounding claims? (But why wouldn't they as well, if they're "smart"?)

Why can't they just read the darn sequences and pick up on the fact that these people are worth listening to?

Comment author: hegemonicon 17 August 2010 08:34:31PM *  30 points [-]

I STRONGLY suspect that there is an enormous gulf between finding out things on your own and being directed to them by a peer.

When you find something on your own (existential risk, cryonics, whatever), you get to bask in your own fortuitousness, and congratulate yourself on being smart enough to understand its value. You get a boost in (perceived) status, because not only do you know more than you did before, you know things other people don't know.

But when someone else has to direct you to it, it's much less positive. When you tell someone about existential risk or cryonics or whatever, the subtext is "look, you weren't able to figure this out by yourself, let me help you". No matter how nicely you phrase it, there's going to be resistance because it comes with a drop in status - which they can avoid by not accepting whatever you're selling. It actually might be WORSE with smart people who believe that they have most things "figured out".

Comment author: multifoliaterose 16 August 2010 09:18:21AM *  9 points [-]

Thanks for your thoughtful comment.

To make this story short, the whole reason I became interested in this topic in the first place was because I was impressed by EY -- specifically his writings on rationality on OB (now known as the Sequences here on LW). Now of course most of those ideas were hardly original with him (indeed many times I had the feeling he was stating the obvious, albeit in a refreshing, enjoyable way) but the fact that he was able to write them down in such a clear, systematic, and readable fashion showed that he understood them thoroughly. This was clearly somebody who knew how to think.

I know some people who have had this sort of experience. My claim is not that Eliezer has uniformly repelled people from thinking about existential risk. My claim is that on average Eliezer's outlandish claims repel people from thinking about existential risk.

Do I simply have an unusual personality that makes me willing to listen to strange-sounding claims?

My guess would be that this is it. I'm the same way.

(But why wouldn't they as well, if they're "smart"?)

It's not clear that willingness to listen to strange-sounding claims exhibits correlation with instrumental rationality, or what the sign of that correlation is. People who are willing to listen to strange-sounding claims statistically end up hanging out with UFO conspiracy theorists, New Age people, etc. more often than usual. Statistically, people who make strange-sounding claims are not worth listening to. Too much willingness to listen to strange-sounding claims can easily result in one wasting large portions of one's life.

Why can't they just read the darn sequences and pick up on the fact that these people are worth listening to?

See my remarks above.

Comment author: komponisto 16 August 2010 11:10:39AM 7 points [-]

Thank you for your thoughtful reply; although, as will be evident, I'm not quite sure I actually got the point across.

(But why wouldn't they as well, if they're "smart"?)

It's not clear that willingness to listen to strange-sounding claims exhibits correlation with instrumental rationality,

I didn't realize at all that by "smart" you meant "instrumentally rational"; I was thinking rather more literally in terms of IQ. And I would indeed expect IQ to correlate positively with what you might call openness. More precisely, although I would expect openness to be only weak evidence of high IQ, I would expect high IQ to be more significant evidence of openness.

People who are willing to listen to strange-sounding claims statistically end up hanging out with UFO conspiracy theorists, New Age people, etc...

Why can't they just read the darn sequences and pick up on the fact that these people are worth listening to?

See my remarks above.

The point of my comment was that reading his writings reveals a huge difference between Eliezer and UFO conspiracy theorists, a difference that should be more than noticeable to anyone with an IQ high enough to be in graduate school in mathematics. Yes, of course, if all you know about a person is that they make strange claims, then you should by default assume they're a UFO/New Age type. But I submit that the fact that Eliezer has written things like these decisively entitles him to a pass on that particular inference, and anyone who doesn't grant it to him just isn't very discriminating.

Comment author: multifoliaterose 16 August 2010 12:23:40PM 5 points [-]

One more point - though I could immediately recognize that there's something important to some of what Eliezer says, the fact that he makes outlandish claims did make me take longer to get around to thinking seriously about existential risk. This is because of a factor that I mention in my post which I quote below.

There is also a social effect which compounds the issue I just mentioned: even people who are not directly influenced by that issue become less likely to think seriously about existential risk, on account of their desire to avoid being perceived as associated with claims that people find uncredible.

I'm not proud that I'm so influenced, but I'm only human. I find it very plausible that there are others like me.

Comment author: multifoliaterose 16 August 2010 12:04:13PM *  11 points [-]

And I would indeed expect IQ to correlate positively with what you might call openness.

My own experience is that the correlation is not very high. Most of the people who I've met who are as smart as me (e.g. in the sense of having high IQ) are not nearly as open as I am.

I didn't realize at all that by "smart" you meant "instrumentally rational";

I did not intend to equate intelligence with instrumental rationality. The reason why I mentioned instrumental rationality is that ultimately what matters is to get people with high instrumental rationality (whether they're open minded or not) interested in existential risk.

My point is that people who are closed minded should not be barred from consideration as potentially useful existential risk researchers: although people are being irrational to dismiss Eliezer as fast as they do, that doesn't mean that they're holistically irrational. My own experience has been that my openness has both benefits and drawbacks.

The point of my comment was that reading his writings reveals a huge difference between Eliezer and UFO conspiracy theorists, a difference that should be more than noticeable to anyone with an IQ high enough to be in graduate school in mathematics.

Math grad students can see a huge difference between Eliezer and UFO conspiracy theorists - they recognize that Eliezer's intellectually sophisticated. They're still biased to dismiss him out of hand. See bentram's comment.

Edit: You might wonder where the bias to dismiss Eliezer comes from. I think it comes mostly from conformity, which is, sadly, very high even among very smart people.

Comment author: wedrifid 16 August 2010 12:52:27PM 0 points [-]

Edit: You might wonder where the bias to dismiss Eliezer comes from. I think it comes mostly from conformity, which is, sadly, very high even among very smart people.

I would perhaps expand 'conformity' to include neighbouring social factors - in-group/out-group, personal affiliation/alliances, territorialism, etc.

Comment author: komponisto 16 August 2010 12:33:25PM *  4 points [-]

My point is that people who are closed minded should not be barred from consideration as potentially useful existential risk researchers

You may be right about this; perhaps Eliezer should in fact work on his PR skills. At the same time, we shouldn't underestimate the difficulty of "recruiting" folks who are inclined to be conformists; unless there's a major change in the general sanity level of the population, x-risk talk is inevitably going to sound "weird".

Math grad students can see a huge difference between Eliezer and UFO conspiracy theorists - they recognize that Eliezer's intellectually sophisticated. They're still biased to dismiss him out of hand

This is a problem; no question about it.

Comment author: multifoliaterose 16 August 2010 12:39:14PM *  6 points [-]

At the same time we shouldn't underestimate the difficulty of "recruiting" folks who are inclined to be conformists; unless there's a major change in the general sanity level of the population, x-risk talk is inevitably going to sound "weird".

I agree with this. It's all a matter of degree. Maybe at present one has to be in the top 1% of the population in nonconformity to be interested in existential risk and with better PR one could reduce the level of nonconformity required to the top 5% level.

(I don't know whether these numbers are right, but this is the sort of thing that I have in mind - I find it very likely that there are people who are nonconformist enough to potentially be interested in existential risk but too conformist to take it seriously unless the people who are involved seem highly credible.)

Comment author: ciphergoth 16 August 2010 10:48:59AM 12 points [-]

For my part, I keep wondering how long it's going to be before someone throws his "If you don't sign up your kids for cryonics then you are a lousy parent" remark at me, to which I will only be able to say that even he says stupid things sometimes.

(Yes, I'd encourage anyone to sign their kids up for cryonics; but not doing so is an extremely poor predictor of whether or not you treat your kids well in other ways, which is what the term should mean by any reasonable standard).

Comment author: multifoliaterose 16 August 2010 11:47:35AM 4 points [-]

Yes, this is the sort of thing that I had in mind in making my cryonics post - as I said in the revised version of my post, I have a sense that a portion of the Less Wrong community has the attitude that cryonics is "moral" in some sort of comprehensive sense.

Comment author: James_Miller 18 August 2010 03:00:47PM 5 points [-]

If you believe that thousands of people die unnecessarily every single day then of course you think cryonics is a moral issue.

If people in the future come to believe that we should have known that cryonics would probably work, then they might well conclude that our failure to at least offer cryonics to terminally ill children was (and yes, I know what I'm about to write sounds extreme and will be off-putting to many) Nazi-level evil.

Comment author: multifoliaterose 18 August 2010 03:42:52PM 1 point [-]

I've thought carefully about this matter and believe that there's good reason to doubt your prediction. I will detail my thoughts on this matter in a later top level post.

Comment author: James_Miller 18 August 2010 02:55:30PM *  8 points [-]

Given Eliezer's belief about the probability of cryonics working and belief that others should understand that cryonics has a high probability of working, his statement that "If you don't sign up your kids for cryonics then you are a lousy parent" is not just correct but trivial.

One of the reasons I so enjoy reading Less Wrong is Eliezer's willingness to accept and announce the logical consequences of his beliefs.

Comment author: ciphergoth 18 August 2010 03:00:15PM *  4 points [-]

There is a huge gap between "you are doing your kids a great disservice" and "you are a lousy parent": "X is an act of a lousy parent" to me implies that it is a good predictor of other lousy parent acts.

EDIT: BTW I should make clear that I plan to try to persuade some of my friends to sign up themselves and both their kids for cryonics, so I do have skin in the game...

Comment author: FAWS 18 August 2010 03:04:41PM *  7 points [-]

I'm not completely sure I disagree with that, but do you have the same attitude towards parents who try to heal treatable cancer with prayer and nothing else, but are otherwise great parents?

Comment author: ciphergoth 18 August 2010 03:31:11PM 4 points [-]

I think that would be a more effective predictor of other forms of lousiness: it means you're happy to ignore the advice of scientific authority in favour of what your preacher or your own mad beliefs tell you, which can get you into trouble in lots of other ways.

That said, this is a good counter, and it does make me wonder if I'm drawing the right line. For one thing, what do you count as a single act? If you don't get cryonics for your first child, it's a good predictor that you won't for your second either, so does that count? So I think another aspect of it is that to count, something has to be unusually bad. If you don't get your kids vaccinated in the UK in 2010, that's lousy parenting, but if absolutely everyone you ever meet thinks that vaccines are the work of the devil, then "lousy" seems too strong a term for going along with it.

Comment author: katydee 16 August 2010 10:31:46AM 9 points [-]

Also, keep in mind that reading the sequences requires nontrivial effort-- effort which even moderately skeptical people might be unwilling to expend. Hopefully Eliezer's upcoming rationality book will solve some of that problem, though. After all, even if it contains largely the same content, people are generally much more willing to read one book rather than hundreds of articles.

Comment author: Oligopsony 15 August 2010 11:20:09AM 20 points [-]

I'm new to all this singularity stuff - and as an anecdotal data point, I'll say a lot of it does make my kook bells go off - but with an existential threat like uFAI, what does the awareness of the layperson count for? With global warming, even if most of any real solution involves the redesign of cities and development of more efficient energy sources, individuals can take some responsibility for their personal energy consumption or how they vote. uFAI is a problem to be solved by a clique of computer and cognitive scientists. Who needs to put thought into the possibility of misbuilding an AI other than people who will themselves engage in AI research? (This is not a rhetorical question - again, I'm new to this.)

There is, of course, the question of fundraising. ("This problem is too complicated for you to help with directly, but you can give us money..." sets off further alarm bells.) But from that perspective someone who thinks you're nuts is no worse than someone who hasn't heard of you. You can ramp up the variance of people's opinions and come out better financially.

Comment author: jacob_cannell 25 August 2010 03:57:50AM *  0 points [-]

Don't you realize the default scenario?

The default scenario is some startup or big company or some mix thereof develops strong AGI for commercialization, attempts to 'control it', fails, and inadvertently unleashes a god upon the earth. To a first approximation, the type of AGI we are discussing here could just be called a god. Nanotechnology is based on science, but it will seem like magic.

The question then is what kind of god do we want to unleash.

Comment author: ata 25 August 2010 04:10:39AM *  8 points [-]

While we're in a thread with "Public Relations" in its title, I'd like to point out that calling an AGI a "god", even metaphorically or by (some) definition, is probably a very bad idea. Calling anything a god will (obviously) tend to evoke religious feelings (an acute mind-killer), not to mention that sort of writing isn't going to help much in combating the singularity-as-religion pattern completion.

Comment author: jacob_cannell 25 August 2010 07:00:38AM *  -2 points [-]

Religions are worldviews. The Singularity is also a worldview, and one whose prediction of the future is quite different from the older, more standard linear atheist scientific worldview, where the future is unknown but probably like the past, AI has no role, etc.

I read the "by (some) definition" and I find it actually supports the cluster mapping utility of the god term as it applies to AI's. "Scary powerful optimization process" just doesn't instantly convey the proper power relation.

Nonetheless, I do consider your point about public image to be important. But I'm not convinced that one needs to hide fully behind the accepted confines of the scientific magisterium and avoid the unspoken words.

Science tells us how the world was, is, and can become. Religion/Mythology/Science Fiction tells us what people want the world to be.

Understanding the latter domain is important for creating good AI and CEV and all that.

Comment author: Pavitra 25 August 2010 07:23:23AM 2 points [-]

Calling an AGI a god too easily conjures up visions of a benevolent force. Even those who consider that it might not have our best interests at heart tend to think of dystopian science fiction.

I use the phrase "robot Cthulhu", because the Singularity will probably eat the world without particularly noticing or caring that there's someone living on it.

Comment author: kodos96 25 August 2010 08:56:03AM 2 points [-]

Calling an AGI a god too easily conjures up visions of a benevolent force

That really depends on how you feel about religion/god in the first place. To a guy like me, who is, as Hitchens is fond of describing himself, "not just an atheist, but an anti-theist", the uFAI/god connection makes me want to donate everything I have to SIAI to make sure it doesn't happen.

Maybe that's just me.

Comment author: complexmeme 19 August 2010 02:43:21AM *  1 point [-]

Huh, interesting. I wrote something very similar on my blog a while ago. (That was on cryonics, not existential risk reduction, and it goes on about cryonics specifically. But the point about rhetoric is much the same.)

Anyways, I agree. At the very least, some statements made by smart people (including Yudkowsky) have had the effect of increasing my blanket skepticism in some areas. On the other hand, such statements have me thinking more about the topics in question than I might have otherwise, so maybe that balances out. Then again, I'm more willing to wrestle with my skepticism than most, and I'm still probably a "mediocre rationalist" (to put it in Eliezer's terms).

Comment author: [deleted] 19 August 2010 02:35:43AM *  3 points [-]

It sounds to me like half of the perceived public image problem comes from apparently blurred lines between the SIAI and LessWrong, and between the SIAI and Eliezer himself. These could be real problems - I generally have difficulty explaining one of the three without mentioning the other two - but I'm not sure how significant it is.

The ideal situation would be that people would evaluate SIAI based on its publications, the justification of the research areas, and whether the current and proposed projects satisfy those goals best, are reasonably costed, and are making progress.

Whoever actually holds these as the points to be evaluated will find the list of achievements. Individual projects all have detailed proposals and a budget breakdown, since donors can choose to donate directly to one research project or another.

Finally, a large number of those projects are academic papers. If you dig a bit, you'll find that many of these papers are submitted to academic and industry conferences. Hosting the Singularity Summit doesn't hurt either.

It doesn't make sense to downplay a researcher's strange viewpoints if those viewpoints seem valid. Eliezer believes his viewpoint to be valid. LessWrong, a project of his, has a lot of people who agree with his ideas. There are also people who disagree with some of his ideas, but the point is that it shouldn't matter. LessWrong is a project of SIAI, not the organization itself. Support on this website of his ideas should have little to do with SIAI's support of his ideas.

Your points seem to be that claims made by Eliezer and upheld by the SIAI don't appear credible due to insufficient argument, and due to one person's personality. You can argue all you want about how he is viewed. You can debate the published papers' worth. But the two shouldn't be equated. This despite the fact that he's written half of the publications.

Here are the questions (that tie to your post) which I think are worth discussing on public relations, if not the contents of the publications:

  • Do people equate "The views of Eliezer Yudkowsky" with "The views of SIAI"? Do people view the research program or organization as "his" project?
  • Which people, and to what extent?
  • Is this good or bad, and how important is it?

The optimal answer to those questions is the one that leads the most AI researchers to evaluate the most publications with serious scrutiny and consideration.

I'll repeat that other people have published papers with the SIAI, that their proposals are spelled out, that some papers are presented at academic and industry conferences, and that the SIAI's Singularity Summit hosts speakers who do not agree with all of Eliezer's opinions, who nonetheless associate with the organization by attendance.

Comment author: nonhuman 21 August 2010 03:37:13AM 4 points [-]

I feel it's worth pointing out that just because something should be, doesn't mean it is. You state:

Your points seem to be that claims made by Eliezer and upheld by the SIAI don't appear credible due to insufficient argument, and due to one person's personality. You can argue all you want about how he is viewed. You can debate the published papers' worth. But the two shouldn't be equated.

I agree with the sentiment, but how practical is it? Just because it would be incorrect to equate Eliezer and the SIAI doesn't mean that people won't do it. Perhaps it would be reasonable to say that the people who fail to make the distinction are also the people on whom it's not worth expending the effort trying to explicate the situation, but I suspect that it is still the case that the majority of people are going to have a hard time not making that equation if they even try at all.

The point of this article, I would presume to say, is that public relations actually does serve a valid and useful purpose. It is not a wasted effort to ensure that the ideas that one considers true, or at least worthwhile, are presented in the sort of light that encourages people to take them seriously. This is something that I think many people of a more intellectual bent often fail to consider; though some of us might actually invest time and effort into determining for ourselves whether an idea is good or not, I would say the majority do not and instead rely on trusted sources to guide them (with often disastrous results).

Again, it may just be that we don't care about those people (and it's certainly tempting to go that way), but there may be times when quantity of supporters, in addition to quality, could be useful.

Comment author: [deleted] 21 August 2010 06:32:48PM 2 points [-]

We don't disagree on any point that I can see. I was contrasting an ideal way of looking at things (part of what you quoted) from how people might actually see things (my three bullet-point questions).

As much as I enjoy Eliezer's thoughts and respect his work, I'm also of the opinion that one of the tasks the SIAI must work on (and almost certainly is working on) is keeping his research going while making the distinction between the two entities more obvious. But to whom? The research community should be the first and primary target.

Coming back from the Summit, I feel that they're taking decent measures toward this. The most important thing to do is for the other SIAI names to be known. Michael Vassar's is the easiest to get people to hold because of the name of his role, and he was acting as the SIAI face more than Eliezer was. At this point, a dispute would make the SIAI look unstable - they need positive promotion of leadership and idea diversity, more public awareness of their interactions with academia, and that's about it.

Housing a clearly promoted second research program would solve this problem. If only there was enough money, and a second goal which didn't obviously conflict with the first, and the program still fit under the mission statement. I don't know if that is possible. Money aside, I think that it is possible. Decision theoretic research with respect to FAI is just one area of FAI research. Utterly essential, but probably not all there is to do.

Comment author: [deleted] 19 August 2010 02:46:19AM 6 points [-]

To top it off, the SIAI is responsible for getting James Randi's seal of approval on the Singularity being probable. That's not poisoning the meme, not one bit.

Comment author: [deleted] 18 August 2010 03:46:37PM *  8 points [-]

I think a largish fraction of the population have worries about human extinction / the end of the world. Very few associate this with the phrase "existential risk" -- I for one had never heard the term until after I had started reading about the technological singularity and related ideas. Perhaps rebranding of a sort would help you further the cause. Ditto for FAI - I think 'Ethical Artificial Intelligence' would get the idea across well enough and might sound less flaky to certain audiences.

Comment author: josh0 21 August 2010 03:53:41AM 6 points [-]

It may be true that many are worried about 'the end of the world', however consider how many of them think that it was predicted by the Mayan calendar to occur on Dec. 21, 2012, and how many actively want it to happen because they believe it will herald the coming of God's Kingdom on Earth, Olam Haba, or whatever.

We could rebrand 'existential risk' as 'end time' and gain vast numbers of followers. But I doubt that would actually be desirable.

I do think that Ethical Artificial Intelligence would strike a better chord with most than Friendly, though. 'Friendly' does sound a bit unserious.

Comment author: zemaj 19 August 2010 10:09:06AM 9 points [-]

"Ethical Artificial Intelligence" sounds great and makes sense without having to know the background of the technological singularity as "Friendly Artificial Intelligence" does. Every time I try to mention FAI to someone without any background on the topic I always have to take two steps back in the conversation and it becomes quickly confusing. I think I could mention Ethical AI and then continue on with whatever point I was making without any kind of background and it would still make the right connections.

I also expect it would appeal to a demographic likely to support the concept as well. People who worry about ethical food, business, healthcare, etc. would be likely to worry about existential risk on many levels.

In fact I think I'll just go ahead and start using Ethical AI from now on. I'm sure people in the FAI community would understand what I'm talking about.

Comment author: James_Miller 18 August 2010 03:30:54PM 7 points [-]

Given how superficially insane Eliezer's beliefs seem, he has done a fantastic job of attracting support for his views.

Eliezer is popularizing his beliefs, not directly through his own writings, but by attracting people (such as conference speakers and this comment writer who is currently writing a general-audience book) who promote understanding of issues such as intelligence explosion, unfriendly AI and cryonics.

Eliezer is obviously not neurotypical. The non-neurotypical have a tough time making arguments that emotionally connect. Given that Eliezer has a massive non-comparative advantage in making such arguments we shouldn't expect him to spend his time trying to become slightly better at doing so.

Eliezer might not have won the backing of people such as super-rationalist self-made tech billionaire Peter Thiel had Eliezer devoted less effort to rational arguments.

Comment author: FAWS 18 August 2010 03:37:27PM 1 point [-]

Given that Eliezer has a massive non-comparative advantage in making such arguments we shouldn't expect him to spend his time trying to become slightly better at doing so.

Do you mean comparative disadvantage? Otherwise I can't make sense of what you are trying to say. Not that I'd agree with that anyway; Eliezer is very good rhetorically, and I'm suspicious of psychological diagnoses performed over the internet.

Comment author: James_Miller 18 August 2010 03:44:55PM *  2 points [-]

By "massive non-comparative advantage" I meant he doesn't have a comparative advantage.

I have twice talked with Eliezer in person, seen in person a few of his talks, watched several videos of him talking and for family reasons I have read a huge amount about the non-neurotypical.

Comment author: FAWS 18 August 2010 04:07:54PM 2 points [-]

By "massive non-comparative advantage" I meant he doesn't have a comparative advantage.

??? So you mean he has a massive absolute advantage, but is also so hugely better at other things compared to normal people it's still not worth his time??? Or does that actually mean that he has an absolute advantage of unspecified size, that happens to be very much non-comparative? What someone only vaguely familiar with economic terminology like me might call a "massively non-comparative advantage"?

Comment author: Larks 17 August 2010 06:40:47AM 17 points [-]

It must be said that the reason no one from SingInst has commented here is that they're all busy running the Singularity Summit, a well-run conference full of AGI researchers, the one group that SingInst cares about impressing more than any other. Furthermore, Eliezer's speech was well received by those present.

I'm not sure whether attacking SingInst for poor public relations during the one week when everyone is busy with a massive public relations effort is very ironic or very Machiavellian.

Comment author: Jonathan_Graehl 16 August 2010 08:58:49PM 10 points [-]

When I'm talking to someone I respect (and want to admire me), I definitely feel an urge to distance myself from EY. I feel like I'm biting a social bullet in order to advocate for SIAI-like beliefs or action.

What's more, this casts a shadow over my actual beliefs.

This is in spite of the fact that I love EY's writing, and actually enjoy his fearless geeky humor ("hit by a meteorite" is indeed more fun than the conventional "hit by a bus").

The fear of being represented by EY is mostly due to what he's saying, not how he's saying it. That is, even if he were always dignified and measured, he'd catch nearly as much flak. If he'd avoided certain topics entirely, that would have made a significant difference, but on the other hand, he's effectively counter-signaled that he's fully honest and uncensored in public (of course he is probably not, exactly), which I think is also valuable.

I think EY can win by saying enough true things, convincingly, that smart people will be persuaded that he's credible. It's perhaps true that better PR will speed the process - by enough for it to be worth it? That's up to him.

The comments in this diavlog with Scott Aaronson - while some are by obvious axe-grinders - are critical of EY's manner. People appear to hate nothing more than (what they see as) undeserved confidence. Who knows how prevalent this adverse reaction to EY is, since the set of commenters is self-selecting.

People who are floundering in a debate with EY (e.g. Jaron Lanier) seem to think they can bank on a "you crazy low-status sci-fi nerd" rebuttal to EY. This can score huge with lazy or unintellectual people if it's allowed to succeed.

Comment author: Eneasz 24 August 2010 05:52:00PM 1 point [-]

This can score huge with lazy or unintellectual people if it's allowed to succeed.

What is the likelihood that lazy or unintellectual people would have ever done anything to reduce existential risk regardless of any particular advocate for/against?

Comment author: JoshuaZ 24 August 2010 06:00:52PM *  3 points [-]

They might give money to the people who will actually do the work of reducing existential risk. I'd also note that even for people who are generally intellectuals, or at least think of themselves as intellectuals, this sort of argument can, if phrased in the right way, still have an impact; sci-fi is still a very low-status association for many of those people.

Comment author: Jonathan_Graehl 24 August 2010 06:51:39PM 1 point [-]

I think Eneasz is right, but I agree with you that we should care about the support of ordinary people and those who choose to specialize elsewhere.

I was thinking also of the motivational effect of average people's (dis)approval on the gifted. Sure, many intellectual milestones were first reached by those who either needed less to be accepted, or drew their in-group/out-group boundary more tightly around themselves, but social pressure matters.

Comment author: thomblake 16 August 2010 02:17:13PM 3 points [-]

This post reminds me of the talk at this year's H+ summit by Robert Tercek. Amongst other things, he was pointing out how the PR battle over transhumanist issues was already lost in popular culture, and that the transhumanists were not helping matters by putting people with very freaky ideas in the spotlight.

I wonder if there are analogous concerns here.

Comment author: mranissimov 16 August 2010 07:24:11AM 3 points [-]

Just to check... have I said any "naughty" things analogous to the Eliezer quote above?

Comment author: wedrifid 16 August 2010 07:57:52AM 1 point [-]

Not to my knowledge... but Eliezer makes his words far more prominent than you do.

Comment author: mranissimov 04 September 2010 03:24:10AM 1 point [-]

Only on LessWrong. In the wider world, more people actually read my words!

Comment author: JamesAndrix 16 August 2010 03:02:28AM 12 points [-]

Warning: Shameless Self Promotion ahead

Perhaps part of the difficulty here is the attempt to spur a wide rationalist community on the same site frequented by rationalists with strong obscure positions on obscure topics.

Early in Less Wrong's history, discussion of FAI was discouraged so that it didn't just become a site about FAI and the singularity, but a forum about human rationality more generally.

I can't track down an article[s] from EY about how thinking about AI can be too absorbing, and how in order to properly create a community, you have to truly put aside the ulterior motive of advancing FAI research.

It might be wise for us to again deliberately shift our focus away from FAI and onto human rationality and how it can be applied more widely (say to science in general.)

Enter the SSP: For months now I've been brainstorming a community to educate people on the creation and use of 3D printers, with the eventual goal of making much better 3D printers. So this is a different big complicated problem with a potential high payoff, and it ties into many fields, provides tangible previews of the singularity, can benefit from the involvement of people with almost any skill set, and seems to be much safer than advancing AI, nanotech, or genetic engineering.

I had already intended to introduce rationality concepts where applicable and link a lot to Less Wrong. But if a few LWers were willing to help, it could become a standalone community of people committed to thinking clearly about complex technical and social problems, with a latent obsession with 3D printers.

Comment author: Jordan 15 August 2010 06:33:45PM 6 points [-]

Dammit! My smug self-assurance that I could postpone thinking about these issues seriously because I'm an SIAI donor... GONE! How am I supposed to get any work done now?

Seriously though, I do wish the SIAI toned down its self importance and incredible claims, however true they are. I realize, of course, that dulling some claims to appear more credible is approaching a Dark Side type strategy, but... well, no buts. I'm just confused.

Comment author: multifoliaterose 16 August 2010 09:08:17AM *  0 points [-]

Edit: I misunderstood what Jordan was trying to say - the previous version of this comment is irrelevant to the present discussion and so I've deleted it.

Comment author: Jordan 16 August 2010 01:47:32PM *  2 points [-]

Deciding that the truth unconditionally deserves top priority seems to me to be an overly convenient easy way out of confronting the challenges demanded by instrumental rationality.

No one is claiming that honesty deserves top priority. I would lie to save someone's life, or to make a few million dollars, etc. In the context of SIAI though, or any organization, being manipulative can severely discredit you.

I believe that when one takes into account unintended consequences, when Eliezer makes his most incredible claims he lowers overall levels of epistemic rationality rather than raising them.

If he were to go back on his incredible claims, or even only make more credible claims in the future, how would he reconcile the two when confronted? If someone new to Eliezer read his tame claims, then went back and read his older, more extreme claims, what would they think? To many people this would reinforce the idea that SIAI is a cult, and that they are refining their image to be more attractive.

All of that said, I do understand where you're coming from intuitively, and I'm not convinced that scaling back some of the SIAI claims would ever have a negative effect. Certainly, though, a public policy conversation about it would cast a pretty manipulative shade over SIAI. Hell, even this conversation could cast a nasty shade to some onlookers (to many people trying to judge SIAI, the two of us might be a sufficiently close proxy, even though we have no direct connections).

Comment author: multifoliaterose 16 August 2010 02:44:08PM 2 points [-]

Okay, I misunderstood where you were coming from earlier, I thought you were making a general statement about the importance of stating one's beliefs. Sorry about that.

In response to your present comments, I would say that though the phenomenon that you have in mind may be a PR issue, I think it would be less of a PR issue than what's going on right now.

One thing that I would say is that I think that Eliezer would come across as much more credible simply by accompanying his weird sounding statements with disclaimers of the type "I know that what I'm saying probably sounds pretty 'out there' and understand if you don't believe me, but I've thought about this hard, and I think..." See my remark here.

Comment author: Jordan 16 August 2010 05:57:54PM *  5 points [-]

I mostly agree, although I'm still mulling it and think the issue is more complicated than it appears. One nitpick:

"I know that what I'm saying probably sounds pretty 'out there' and understand if you don't believe me, but I've thought about this hard, and I think..."

Personally, these kinds of qualifiers rarely do anything to allay my doubts, and can easily increase them. I prefer to see incredulity. For instance, when a scientist has an amazing result, rather than seeing them fully believe it while recognizing it's difficult for me to believe, I'd rather see them doubtful of their own conclusion but standing by it nonetheless because of the strength of the evidence.

"I know it's hard to believe, but it's likely an AI will kill us all in the future."

could become

"It's hard for me to come to terms with, but there doesn't seem to be any natural safeguards preventing an AI from doing serious damage."

Comment author: multifoliaterose 16 August 2010 06:43:01PM *  1 point [-]

Personally, these kinds of qualifiers rarely do anything to allay my doubts, and can easily increase them. I prefer to see incredulity. For instance, when a scientist has an amazing result, rather than seeing them fully believe it while recognizing it's difficult for me to believe, I'd rather see them doubtful of their own conclusion but standing by it nonetheless because of the strength of the evidence.

Sure, I totally agree with this - I prefer your formulation to my own. My point was just that there ought to be some disclaimer - the one that I suggested is a weak example.

Edit: Well, okay, actually I prefer:

"It took me a long time to come to terms with, but there don't seem to be any natural safeguards preventing an AI from doing serious damage."

If one has actually become convinced of a position, it sounds disingenuous to say that it's hard for one to come to terms with at present, but any apparently absurd position should at some point have been hard to come to terms with.

Adding such a qualifier is a good caution against appearing to be placing oneself above the listener. It carries the message "I know how you must be feeling about these things, I've been there too."