Rain comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong

Post author: Kaj_Sotala 26 December 2010 11:21AM

Comment author: XiXiDu 28 December 2010 03:58:11PM 5 points

Please inform me if anyone knows of a better charity.

As long as you presume that the SIAI saves a potential galactic civilization from extinction (i.e. from never being created), and assign a high enough probability to that outcome, nobody is going to be able to inform you of a charity with a higher payoff, at least as long as no other organization makes similar claims (implicitly or explicitly).

If you don't mind, I would like you to state some numerical probability estimates:

  1. The risk of human extinction by AI (irrespective of countermeasures).
  2. The probability of the SIAI succeeding in implementing an AI (see 3.) that takes care of any risks thereafter.
  3. The estimated trustworthiness of the SIAI (i.e., the probability that it is not merely signaling common good (friendly AI/CEV) while following selfish objectives (unfriendly AI)).

I'd also like you to tackle some problems I see regarding the SIAI in its current form:

Transparency

How do you know that they are trying to deliver what they are selling? If you believe the premise that AI will go FOOM, and that the SIAI is trying to implement a binding policy based on which the first AGI will FOOM, then you believe that the SIAI is an organisation involved in shaping the future of the universe. If the stakes are this high, there exists a lot of incentive for deception. Can you conclude, just because someone writes a lot of ethically sound articles and papers, that this output is reflective of their true goals?

Agenda and Progress

The current agenda seems very broad and vague. Can the SIAI make effective progress on such an agenda, compared to specialized charities and workshops focusing on narrower sub-goals?

  • How do you estimate their progress?
  • What are they working on right now?
  • Are there other organisations working on some of the sub-goals that make better progress?

As multifoliaterose implied here, the task of getting an AI to recognize humans as distinguished beings already seems too broad a problem to tackle directly. Might it be more effective, at this point, to concentrate on supporting other causes that lead toward the general goal of mitigating AI-associated existential risk?

Third Party Review

Not being an expert yourself, and in the absence of any peer review, how sure can you be about the given premises (AI going FOOM, etc.) and the effectiveness of their current agenda?

Also, what conclusion should one draw from the fact that at least three people who have worked for the SIAI, or have been in close contact with it, disagree with some of its stronger claims? Robin Hanson seems not to be convinced that donating to the SIAI is an effective way to mitigate risks from AI. Ben Goertzel does not believe in the "Scary Idea". And Katja Grace thinks AI is no big threat.


My own estimations

  • AI going FOOM: 0.1%
  • AI going FOOM being an x-risk: 5%
  • AI going FOOM as an x-risk being prevented by the SIAI: 0.01%
  • That the SIAI can be trusted to pursue the creation of the best possible world for all human beings: 60%

Therefore, the probability that a donation to the SIAI pays off: 0.0000003%
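
Spelled out as a quick sketch (this assumes the four estimates are independent and simply multiplied, which is the only reading that reproduces the figure above; the comment itself only gives the final number):

    # Rough sketch: multiply the four estimates above, treated as independent.
    p_foom = 0.001           # AI going FOOM: 0.1%
    p_xrisk = 0.05           # AI going FOOM being an x-risk: 5%
    p_prevented = 0.0001     # the x-risk being prevented by the SIAI: 0.01%
    p_trustworthy = 0.60     # the SIAI being trustworthy: 60%

    p_payoff = p_foom * p_xrisk * p_prevented * p_trustworthy
    print(f"P(donation pays off) = {p_payoff:.1e} = {p_payoff * 100:.7f}%")
    # -> P(donation pays off) = 3.0e-09 = 0.0000003%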

Comment author: Rain 28 December 2010 09:14:07PM 4 points

To restate my original question, is there anyone out there doing better than your estimated 0.0000003%? Even though the number is small, it could still be the highest.

Comment author: XiXiDu 29 December 2010 10:15:46AM 2 points

To restate my original question, is there anyone out there doing better than your estimated 0.0000003%?

None whose goal is to save humanity from an existential risk, although asteroid surveillance might come close; I'm not sure. It is not my intention to claim that donating to the SIAI is worthless; I believe that the world does indeed need an organisation that tackles the big picture. In other words, I am not saying that you shouldn't be donating to the SIAI. I am happy someone does (if only because of LW). But the fervor in this thread seemed to me completely unjustified. One should seriously consider whether there are other groups worthy of promotion, or whether there should be other groups doing the same as the SIAI or dealing with one of its sub-goals.

My main problem is how far I should go in neglecting other problems in favor of some high-impact, low-probability event. If your number of possible beings of human descent is high enough, and you assign each being enough utility, you can outweigh any low probability. You could probably calculate that you shouldn't help someone who is drowning, because (1) you'd risk your own life and all the money you could make to donate to the SIAI, and (2) in that time you could tell 5 people about existential risks from AI. I am exaggerating to highlight my problem. I'm just not educated enough yet; I have to learn more math, especially probability. Right now I feel that it is unreasonable to donate all my money (or a lot of it) to the SIAI.
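
As a toy sketch of that worry (the population and utility figures below are entirely made up for illustration, not taken from anyone's actual estimates):

    # Toy example: astronomical stakes can swamp an arbitrarily small probability.
    p_success = 3e-9         # the tiny joint probability estimated above
    future_beings = 1e30     # hypothetical count of possible beings of human descent
    utility_per_being = 1.0  # hypothetical utility assigned to each being

    expected_utility = p_success * future_beings * utility_per_being
    print(f"Expected utility of donating: {expected_utility:.1e}")
    # -> 3.0e+21: enormous, despite the 0.0000003% probability of success

This is the same structure as the drowning example: set the stakes high enough and the product dominates any everyday consideration.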

It really saddens me to see how often LW perceives any critique of the SIAI as ill-intentioned, as if people wanted to destroy the world. There are some morons out there, but most people really would like to save the world if possible. They just don't see the SIAI as a reasonable way to do so.

Comment author: Rain 29 December 2010 04:03:51PM 3 points

the fervor in this thread seemed to me completely unjustified. [...] My main problem is how far I should go in neglecting other problems in favor of some high-impact, low-probability event.

I agree with SIAI's goals. I don't see it as "fervor". I see it as: I can do something to make this world a better place (according to my own understanding, in a better way than any other available option), therefore I will do so.

I compartmentalize. Humans are self-contradictory in many ways. I can send my entire bank account to some charity in the hopes of increasing the odds of friendly AI, and I can buy a hundred-dollar bottle of bourbon for my own personal enjoyment. Sometimes on the same day. I'm not ultra-rational or a pure utilitarian. I'm a regular person with various drives and desires. I save frogs from my stairwell rather than driving straight to work and earning more money. I do what I can.

Comment author: Rain 29 December 2010 02:29:17PM 2 points

One should seriously consider whether there are other groups worthy of promotion, or whether there should be other groups doing the same as the SIAI or dealing with one of its sub-goals.

I have seriously considered it. I have looked for such groups, here and elsewhere, and no one has ever presented a contender. That's why I made my question as simple and straightforward as possible: name something more important. No one's named anything so far, and I still read for many hours each week on this and other such topics, so hopefully if one arises, I'll know and be able to evaluate it.

I donate based on relative merit. As I said at the end of my original supporting post: so far, no one else seems to come close to SIAI. I'm comfortable with giving away a large portion of my income because I don't have much use for it myself. I post it here because it encourages others to give of themselves. I think it's the right thing to do.

I know it's hard to see why. I wish they had better marketing materials. I was really hoping the last challenge, with projects like a landing page, a FAQ, etc., would make a difference. So far, I don't see much in the way of results, which is upsetting.

I still think it's the right place to put my money.