XiXiDu comments on Should I believe what the SIAI claims? - Less Wrong

23 Post author: XiXiDu 12 August 2010 02:33PM



Comment author: XiXiDu 12 August 2010 06:30:10PM 0 points
  1. Maximal utility for everyone is a preference, but a secondary one. Above all, in whatever I support, my personal short- and long-term benefit is the priority.
  2. No
  3. Yes (Edit)
  4. Uncertain/Unable to judge.
  5. Maybe, but I don't know of one. That doesn't mean we shouldn't create one, if only because of the uncertainty about Eliezer Yudkowsky's possible unstated goals.
  6. Uncertain/Unable to judge. See 5.
Comment author: utilitymonster 12 August 2010 06:48:44PM 2 points

Given your answers to 1-3, you should spend all of your altruistic efforts on mitigating x-risk (unless you're just trying to feel good, entertain yourself, etc.).

For 4, I shouldn't have asked whether you "think" something beats negotiating a positive singularity in terms of x-risk reduction. A better question: is there some other fairly natural class of interventions (or list of potential examples) that, given your credences, has a higher expected value? What might such interventions be?

For 5-6, perhaps you should think about what such organizations might be. Those interested in convincing XiXiDu might try listing some alternative x-risk-mitigating groups and providing arguments that they don't do as well. As for me, my credences are highly unstable in this area, so I would appreciate such information too.