http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html
Very surprised no one has linked to this yet:
TL;DR: AI is a very underfunded existential risk.
Nothing new here, but it's the biggest endorsement the cause has gotten so far. I'm greatly pleased they got Stuart Russell, though not Peter Norvig, who seems to remain lukewarm to the cause. It's also too bad this ran in the Huffington Post rather than something more respectable. With some thought I think we could've made the list of signatories more inclusive and found a better publication; still, I think this is pretty huge.
This needs unpacking of "deal with". A FAI is still capable of optimizing a "hopeless" situation better than humans, so if you focus on optimizing rather than satisficing, it doesn't matter that the absolute value of the outcome is much lower than it would be without the aliens. Dwelling on this comparison (value with aliens vs. without) is misleading, because it's part of the problem statement, not part of a consequentialist argument that informs some decision within that problem statement. FAI would be preferable simply as long as it delivers more expected value than alternative plans that would use the same resources to do something else.
Apart from that general point, it might turn out to be easy (for an AGI) to quickly develop significant control over a local area of the physical world that's expensive to take away, or to take away without hurting its value, even if the opponent is a superintelligence that spent aeons working on this problem (an analogy with modern cryptography, where defense wins against much stronger offense). In that case a FAI would have something to bargain with.
This argument is not terribly convincing by itself. For example, a Neanderthal is a much better optimizer than a fruit fly, but both are almost equally powerless against an H-bomb.
Hmm, what about the following idea. The FAI can threaten the aliens to somehow consume a large portion of the free energy in the so...