TheOtherDave comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong
As long as you presume that the SIAI saves a potential galactic civilization from extinction (i.e. from never being created), and assign a high enough probability to that outcome, nobody is going to be able to inform you of a charity with a higher payoff. At least, that holds as long as no other organization makes similar claims (implicitly or explicitly).
If you don't mind, I would like you to state some numerical probability estimates:
I'd also like you to tackle some problems I see regarding the SIAI in its current form:
Transparency
How do you know that they are trying to deliver what they are selling? If you believe the premise of AI going FOOM, and that the SIAI is trying to implement a binding policy on which the first AGI to FOOM will be based, then you believe that the SIAI is an organisation involved in shaping the future of the universe. If the stakes are this high, there exists a lot of incentive for deception. Can you conclude, just because someone writes a lot of ethically correct articles and papers, that that output is reflective of their true goals?
Agenda and Progress
The current agenda seems to be very broad and vague. Can the SIAI make effective progress given such an agenda compared to specialized charities and workshops focusing on more narrow sub-goals?
As multifoliaterose implied here, the task of getting an AI to recognize humans as distinguished beings already seems too broad a problem to tackle directly at the moment. Might it be more effective, at this point, to concentrate on supporting other causes that lead toward the general goal of mitigating AI-associated existential risk?
Third Party Review
Without being an expert, and without any peer review, how sure can you be about the given premises (AI going FOOM, etc.) and the effectiveness of their current agenda?
Also, what conclusion should one draw from the fact that at least three people who have worked for the SIAI, or have been in close contact with it, disagree with some of its stronger claims? Robin Hanson seems not to be convinced that donating to the SIAI is an effective way to mitigate risks from AI. Ben Goertzel does not believe in the Scary Idea. And Katja Grace thinks AI is no big threat.
More
My own estimates
Therefore, my estimate that a donation to the SIAI pays off: 0.0000003%
If you're going to do this sort of explicit decomposition at all, it's probably also worth thinking explicitly about the expected value of a donation. That is: how much does your .0001 estimate of SIAI's chance of preventing a humanity-destroying AI go up or down based on an N$ change in its annual revenue?
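The decomposition suggested above can be sketched as a short calculation. All numbers below are hypothetical placeholders for illustration, not estimates taken from anyone in this thread:

```python
# A minimal sketch of the marginal expected-value reasoning suggested above.
# Every number here is a hypothetical placeholder, not an actual estimate.

def marginal_expected_value(p_base, p_after, outcome_value):
    """Expected value of a donation that shifts the probability of the
    good outcome from p_base to p_after."""
    return (p_after - p_base) * outcome_value

# Suppose (purely for illustration) a donation raises SIAI's chance of
# preventing a humanity-destroying AI from 0.0001 to 0.000101, and you
# value that outcome at 10^15 (a stand-in for "a galactic civilization").
ev = marginal_expected_value(1e-4, 1.01e-4, 1e15)
print(ev)  # roughly 1e9 under these assumptions
```

The point of the sketch is that what matters for a donation decision is not the absolute probability estimate, but how much that probability moves per dollar.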
Thanks, you are right. I'd actually do a lot more, but I feel I am not yet ready to tackle this topic mathematically; I only started getting into math in 2009. I have asked several times for an analysis with input variables I could use to come up with my own estimate of the expected value of a donation to the SIAI. I asked people who are convinced of the SIAI's value to provide the decision procedure by which they were convinced, and to lay it open to public inspection so that others could reassess the procedure and calculations and compute their own conclusions. In response they asked me to do so myself. I do not take it amiss; they do not have to convince me. I am not able to do so yet, but while learning math I try to encourage other people to think about it.
I feel that this deserves a direct answer. I think it is not just about money. The question would be: what would they do with it? Would they actually hire experts? I will assume the best-case scenario here.
If the SIAI were able to obtain a billion dollars, I'd estimate its chance of preventing a FOOMing uFAI at 10%.
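One can turn that best-case figure into a naive per-dollar estimate. This assumes (unrealistically) that returns on donations are linear, which real returns are not, so the result is only a rough upper-level sanity check:

```python
# Naive per-dollar estimate derived from the best-case figure above,
# assuming (unrealistically) linear returns on donations.
p_success = 0.10   # estimated chance of preventing a FOOMing uFAI given $1B
budget = 1e9       # one billion dollars

per_dollar = p_success / budget
print(per_dollar)  # about 1e-10 probability shift per dollar, if linear
```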