The Al Gore hypocrisy claim is misleading. Global warming changes the equilibrium sea level, but it takes many centuries to reach that equilibrium (glaciers can't melt instantly, etc.). Climate change activists like to say that certain emissions pathways will produce sea level rises of hundreds of feet, while neglecting to mention that this won't happen in the 21st century. So there is no contradiction between buying oceanfront property only slightly above sea level and claiming that global warming will eventually cause large sea level increases.
The thing to critique is the misleading rhetoric that gives the impression the sea level rises will happen mostly this century, by noting that carbon emissions by such-and-such a date will be enough to trigger those rises while omitting the much longer lag before they fully occur.
Regarding Hughes' point: even if one thinks an activity has harmful effects, that doesn't mean a campaign to ban it won't do more harm than good. Such a campaign would make bitter enemies of several of the groups (AI academia and industry) with the greatest potential to reduce risk, and would discredit the whole idea of safety measures. Far better to develop better knowledge and academic analysis around the issues, or to mobilize resources toward positive safety measures.
Regarding your quoted comment, it seems crazy. The Unabomber attacked innocent people in a way that did not slow technological advancement and brought ill repute to his cause. The Luddites accomplished nothing. Some criminal nutcase hurting people in the name of preventing AI risk would only stigmatize his ideas and bring about impenetrable security for future AI development, without actually improving the odds of a good outcome (once X can make AGI, others will be able to do so then, or soon after).
"Ticking time bomb cases" are offered to justify legalizing torture, but they essentially never happen: there is always vastly more uncertainty and lower expected benefits. It's dangerous to use such hypotheticals as a way to justify legalization of abuse in realistic cases. No one can expect an act of violence to "disable Skynet" (if such a thing was known to exist, it would be too late anyway), and if a system could be shown to be quite likely dangerous, one would call the police, regulators, and politicians.
Back in July I wrote the following in response to Hughes' comment:
Keep your friends close... maybe they just want to keep the AI crowd as close together as possible. Making enemies wouldn't be a smart idea either, as the 'K-type S^' subgroup would likely retreat from further information disclosure. Making friends with them might be the best idea.
One explanation for the rather calm stance toward a potential giga-death or living-hell event would be a desire to keep a low profile until more power has been acquired.
I'm aware of that argument and also the other things you mention...
[...] SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.
[...] Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)
So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.
[...] If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
The line of argument makes sense, if you accept the premises.
But, I don't.
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It), October 29, 2010. Thanks to XiXiDu for the pointer.