Emile comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong
As long as you presume that the SIAI saves a potential galactic civilization from extinction (i.e. from never being created), and assign a high enough probability to that outcome, nobody is going to be able to inform you of a charity with a higher payoff. At least as long as no other organization makes similar claims (implicitly or explicitly).
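The reasoning here is a simple expected-value comparison; a minimal sketch of it follows, with every number below a made-up placeholder rather than anyone's actual estimate:

```python
# Expected value = P(success) * value of the outcome.
# All figures are hypothetical placeholders for illustration.
galactic_lives = 1e30        # assumed size of a future galactic civilization
p_siai_success = 1e-9        # even a vanishingly small probability of success...
ev_siai = p_siai_success * galactic_lives          # ...still yields 1e21

ordinary_lives = 1e4         # lives saved by a conventional charity
p_ordinary_success = 0.9
ev_ordinary = p_ordinary_success * ordinary_lives  # 9e3

# The astronomical payoff dominates no matter how small the probability,
# unless another organization makes a comparably large claim.
print(ev_siai > ev_ordinary)  # True
```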
If you don't mind, I would like you to state some numerical probability estimates:
I'd also like you to tackle some problems I see regarding the SIAI in its current form:
Transparency
How do you know that they are trying to deliver what they are selling? If you believe the premise that AI will go FOOM, and that the SIAI is trying to implement a binding policy that will govern the first AGI to go FOOM, then you believe that the SIAI is an organisation involved in shaping the future of the universe. If the stakes are this high, there exists a strong incentive for deception. Can you conclude, merely because someone writes a lot of ethically sound articles and papers, that this output reflects their true goals?
Agenda and Progress
The current agenda seems very broad and vague. Can the SIAI make effective progress on such an agenda, compared to specialized charities and workshops focusing on narrower sub-goals?
As multifoliaterose implied here, the task of getting an AI to recognize humans as distinguished beings already seems too broad a problem to tackle directly at the moment. Might it be more effective, at this point, to concentrate on supporting other causes that lead toward the general goal of mitigating AI-associated existential risk?
Third Party Review
Without being an expert and without any peer review, how sure can you be about the given premises (AI going FOOM etc.) and the effectiveness of their current agenda?
Also, what conclusion should one draw from the fact that at least three people who have worked for the SIAI, or have been in close contact with it, disagree with some of its stronger claims? Robin Hanson does not seem convinced that donating to the SIAI is an effective way to mitigate risks from AI. Ben Goertzel does not believe in the Scary Idea. And Katja Grace thinks AI is no big threat.
More
My own estimates
Therefore, the probability that a donation to the SIAI pays off: 0.0000003%
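For concreteness, a compound estimate like this is just the product of the probabilities in the chain. The component estimates are not reproduced above, so the factors below are illustrative assumptions (only the 5% unfriendly-FOOM figure appears later in this comment), chosen to show how the product reaches that order of magnitude:

```python
# Placeholder component estimates; only p_unfriendly_foom (5%) is stated
# elsewhere in this comment, the rest are illustrative assumptions.
p_agi_possible     = 0.5      # human-level AGI is possible at all
p_foom_given_agi   = 0.1      # such an AGI can go FOOM
p_unfriendly_foom  = 0.05     # a FOOMing AI is an existential risk
p_siai_succeeds    = 0.01     # the SIAI solves friendliness in time
p_donation_counts  = 0.00012  # a marginal donation changes the outcome

p_payoff = (p_agi_possible * p_foom_given_agi * p_unfriendly_foom
            * p_siai_succeeds * p_donation_counts)
print(f"{p_payoff:.7%}")  # 0.0000003%
```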
This part is the one that seems the most different from my own probabilities:
So, do you think the default case is a friendly AI? Or at least an innocuous AI? Or that friendly AI is easy enough that whoever first makes a FOOMing AI will get the friendliness part right with no influence from the SIAI?
No, I do not believe that the default case is friendly AI. But I believe that AI going FOOM is, if possible at all, very hard to accomplish; surely everyone here agrees on that. At the moment, though, I do not share the opinion that friendliness, that is, implementing scope boundaries, is a very likely failure mode. I see it this way: if one can figure out how to create an AGI that FOOMs (no, I do not think AGI implies FOOM), then one has a thorough comprehension of intelligence and its associated risks. I just don't see that a group of researchers (and I don't believe a mere group is enough anyway) will be smart enough to create an AGI that FOOMs but somehow fail to limit its scope. Please consider reading this comment, where I cover this topic in more detail. That is why I believe that only 5% of all AIs that go FOOM will be an existential risk to all of humanity. That is my current estimate; I'll of course update on new evidence (e.g. arguments).