
XiXiDu comments on against "AI risk" - Less Wrong Discussion

24 Post author: Wei_Dai 11 April 2012 10:46PM


Comment author: XiXiDu 12 April 2012 10:46:48AM 4 points

SI/LW sometimes gives the impression of being a doomsday cult...

I certainly never had this impression. The worst that can be said about SI/LW is that some of its members use inappropriately strong language with respect to risks from AI.

What I endorse:

  • Risks from AI (including whole brain emulation) are an underfunded research area and might currently be the best choice for anyone who seeks to do good by contributing money to an important cause.

What I think is unjustified:

  • This is crunch time. This is crunch time for the entire human species. And it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us.

I would have to assign a probability of more than 90% to AI posing an existential risk in order to endorse the second stance. I would further have to be highly confident that we will face the associated risks within this century and that the model uncertainty associated with my estimates is low.

You might argue that I would endorse the second stance if NASA told me that there was a 20% chance of an asteroid hitting Earth and that they need money to deflect it. I would indeed. But that seems like a completely different scenario to me.

That intuition might stem from the fact that any estimates regarding risks from AI are very likely to be wrong, whereas in the example case of an asteroid collision one could be much more confident in the 20% estimate, since the latter is based on empirical evidence while the former is based on inference and therefore error-prone.

What I am saying is that I believe that SI is probably the top charity right now but that it is not as far ahead of other causes as some people here seem to think. I don't think that the evidence allows anyone to claim that trying to mitigate risks from AI is the best one could do and be highly confident about it. I think that it is currently the leading cause, but only slightly. And I am highly skeptical about using the expected value of a galactic civilization to claim otherwise.

Comment author: Rain 13 April 2012 05:19:23PM 1 point

I believe that SI is probably the top charity right now

I think that it is currently the leading cause

Charitable giving in the US in 2010: ~$290,890,000,000

SI's annual budget for 2010: ~$500,000

US Peace Corps volunteers in 2010 (3 years of service in a foreign country for sustenance wages): ~8,655

SI volunteers in 2010 (work from home or California hot spots): like 5?

Comment author: XiXiDu 13 April 2012 05:58:05PM 1 point

Charitable giving in the US in 2010: ~$290,890,000,000

SI's annual budget for 2010: ~$500,000

I am not sure what you are trying to tell me by those numbers. I think that there are a few valid criticisms regarding SI as an organization. It is also not clear that they could usefully spend more than ~$500,000 at this time.

In other words, even if risks from AI were by far (not just slightly) the most important cause, it is not clear that contributing money to SI is better than withholding funds from it at this point.

If, for example, they can't usefully spend more money at this point, and there is nothing you yourself can plausibly do against AI risk right now, then you should move on to the next most important cause that needs funding and support it instead.

Comment author: Rain 13 April 2012 07:51:34PM 2 points
  1. You think SI is "probably the top charity right now".
  2. SI is smaller than the rounding error in US charitable giving.
  3. You think they might have more than enough money.

Those don't add up.

Comment author: Rain 13 April 2012 06:33:26PM -1 points

I am not sure what you are trying to tell me by those numbers.

I think it's funny.

Comment author: thomblake 13 April 2012 08:01:34PM 0 points

I think you misread "top charity" as "biggest charity" instead of "most important charity".

Comment author: Rain 13 April 2012 08:26:09PM 0 points

No, I didn't.