Giles comments on Risks from AI and Charitable Giving - Less Wrong
Thanks - I've read the bullet points and it looks like a really good summary (apologies for skimming - I'll read it in more detail when I have time).
Just a few minor points:
Also if the standard is not "a worthwhile charity" but "the best charity", it would be worth adding a P10: no other charity provides higher expected marginal value. Meta-level charities that focus on building the rational altruism movement are at least a candidate here.
I wanted to show that even if you assign a high probability to the possibility of risks from AI due to recursive self-improvement, it is still questionable whether SIAI is the right choice, or whether now is the time to act.
As I wrote at the top, it was a rather quick write-up and I plan to improve it. I can't get myself to work on something like this for very long. It's stupid, I know. But I can try to improve things incrementally. Thanks for your feedback.
That's a good point: SIAI as an organisation that makes people aware of the risk. But from my interview series it seemed that a lot of AI researchers are already aware of it, to the point of being bothered by it.
It isn't optimal. It is hard to talk about premises that look identical from a superficial point of view. But from a probabilistic point of view it is important to separate them into distinct parts, to make clear that there are several things that need to be true in conjunction.
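To illustrate why separating the premises matters: even if each premise looks individually plausible, the probability of the whole conjunction shrinks quickly. A toy sketch (the individual probabilities are made up for illustration, and independence is assumed):

```python
# Illustrative only: hypothetical probabilities for distinct premises
# that must ALL hold for the overall argument to go through.
premises = [0.9, 0.8, 0.9, 0.7, 0.8]

# Assuming independence, the joint probability is the product.
joint = 1.0
for p in premises:
    joint *= p

print(round(joint, 3))  # -> 0.363
```

So five premises, each at 70-90%, already leave the conjunction well below 50%, which is why lumping them together can hide how fragile the overall case is.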
That problem is incredibly mathy, and given my current level of education I am happy that people like Holden Karnofsky tackle it. The problem is that we get into the realm of Pascal's mugging here, where vast utilities outweigh tiny probabilities. Large error bars may render such choices moot. For more, see my post here.