snarles comments on Be a Visiting Fellow at the Singularity Institute - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (156)
As things stand, there is no guarantee that SIAI will get to make a difference, just as you have no guarantee that you will be alive in a week's time. The real question is, do you even believe that unfriendly AI is a threat to the human race, and if so, is there anyone else tackling the problem in even a semi-competent way? If you don't even think unfriendly AI is an issue, that's one sort of discussion, a back-to-basics discussion. But if you do agree it's a potentially terminal problem, then who else is there? Everyone else in AI is a dilettante on this question; AI ethics is always a problem to be solved swiftly and in passing, a distraction from the more exciting business of making machines that can think. SIAI perceive the true seriousness of the issue, and at least have a sensible plan of attack, even if they are woefully underresourced when it comes to making it happen.
I suspect that in fact you're playing devil's advocate a bit, trying to encourage the articulation of a new and better argument in favor of SIAI, but the sort of argument you want doesn't exist. SIAI can of course guarantee that there will continue to be Singularity Summits and visiting fellows, and it is reasonable to think that informed people discussing the issue make it more likely to turn out for the best, but they simply cannot guarantee that, theoretically and pragmatically, they will be ready in time. Perhaps I can put it this way: SIAI getting on with the job is not sufficient to guarantee a friendly Singularity, but for such an outcome to be anything but blind luck, it is necessary that someone take responsibility, and no one else comes close to doing that.
I have to admit that I should have read the "Brief Introduction" link. That answered a lot of my objections.
In the end all I can say is that I got a misleading idea about the aspirations of SIAI, and that this was my fault. With this better understanding of SIAI's goals, though (which seem to be limited to mitigating accidents caused by commercially developed AIs), I have to say that I remain unconvinced that FAI is a high-priority matter. I am particularly unimpressed by Yudkowsky's cynical opinion of the motivations behind the AAAI panel's dismissal of singularity worries in their report (http://www.aaai.org/Organization/Panel/panel-note.pdf).
Since the evaluation of AI risks depends on the plausibility of AI disaster (which would have to include political and economic factors), I would have to wait until SIAI releases those reports before I could consider accidental AI disaster a credible threat. (I am more worried about AIs intentionally designed for aggressive purposes, but it doesn't seem like SIAI can do much about that type of threat.)
Where did he respond to that?
I was just looking for the link:
http://lesswrong.com/lw/1f4/less_wrong_qa_with_eliezer_yudkowsky_ask_your/197s
"As far as I'm concerned, these are eminent scientists from outside the field that I work in, and I have no evidence that they did anything more than snap judgment of my own subject material. It's not that I have specific reason to distrust these people - the main name I recognize is Horvitz and a fine name it is. But the prior probabilities are not good here."