Or until the supply of low-skill workers depresses the remaining low-skill wage below the minimum wage or the cost of outsourcing. I think we are eliminating a larger proportion of low-skill jobs per year than we ever have before, but I agree that the retraining and regulation issues you pointed out are significant.
Yeah, exactly. Especially if you take Cowen's view that science requires increasing marginal effort.
"There's a thesis (whose most notable proponent I know is Peter Thiel, though this is not exactly how Thiel phrases it) that real, material technological change has been dying."
Tyler Cowen is again relevant here with The Great Stagnation ( http://www.amazon.com/The-Great-Stagnation-Low-Hanging-ebook/dp/B004H0M8QS ), though I think he considers the stagnation less cultural than Thiel does.
"We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete in the sense that there is literally no job anywhere on Earth for them to do instead of nothing, because for every task they could do there is an AI algorithm or robot which does it more cheaply."
As someone working in special-purpose software rather than general-purpose AI, I think you drastically overestimate the difficulty of outcompeting humans at a significant portion of low-wage jobs.
"The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and...just moves atoms around into whatever molecular structures or large-scale structures it wants....The human species would end up disassembled for spare atoms"
I also think you overestimate the ease of fooming. Computers are already helping to design their own successors (see http://www.qwantz.com/index.php?comic=2406), and even a 300-IQ AI will be starting from the human knowledge base while competing with microbes for chemical energy at the nanoscale and with humans for energy at the macro scale. I think a 300-IQ AI dropped on Earth today would take five years to dominate scientific output.
The quine requirement seems to me to introduce non-productive complexity. If file reading is disallowed, why not just pass the program its own source code as well as its opponent's?
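To make that concrete, here's a minimal Python sketch of the interface I have in mind (the harness and strategy names are my own invention, not the contest's): the tournament hands every entry its own source along with its opponent's, so a strategy that wants to recognize an exact copy of itself needs neither a quine nor file access.

```python
# Hypothetical harness: entries are submitted as source strings, and the
# harness passes each one both source texts, so no quining is needed.
CLONE_COOPERATOR_SOURCE = '''
def strategy(my_source, opponent_source):
    # Cooperate only with an exact copy of myself, defect otherwise.
    return "C" if opponent_source == my_source else "D"
'''

def load(source):
    namespace = {}
    exec(source, namespace)          # build the strategy from its source text
    return namespace["strategy"]

def play_round(source_a, source_b):
    a, b = load(source_a), load(source_b)
    # The harness, not the entry, supplies each entry's own source
    # alongside its opponent's.
    return a(source_a, source_b), b(source_b, source_a)

print(play_round(CLONE_COOPERATOR_SOURCE, CLONE_COOPERATOR_SOURCE))  # ('C', 'C')
```

The exact-text comparison is still brittle against programs that are functionally identical but textually different, but that problem exists under the quine requirement too; passing the source in just removes the self-reconstruction busywork.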
I think Eliezer's "We have never interacted with the paperclip maximizer before, and will never interact with it again" was intended to preclude credible binding.
I'll reply two years later: light drinking during pregnancy is associated with fewer behavioral and cognitive problems in children. This is probably a result of the correlation of moderate alcohol consumption with IQ and education, but it's interesting nonetheless.
Steven Brams has devised some fair division algorithms that don't require good will: see his surplus procedure ( http://en.wikipedia.org/wiki/Surplus_procedure ) and his earlier adjusted winner procedure ( http://en.wikipedia.org/wiki/Adjusted_Winner_procedure ).
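For anyone curious what adjusted winner looks like in practice, here is a rough Python sketch based on my reading of the linked description; the item names and point spreads are invented for illustration, and it assumes strictly positive valuations that each sum to 100.

```python
# Sketch of Brams and Taylor's adjusted winner procedure for two parties:
# each side spreads 100 points over the items, every item first goes to
# whoever values it more, then items move from the leader to the other side
# in order of increasing valuation ratio until the point totals are equal,
# splitting at most one item.

def adjusted_winner(values_a, values_b):
    """Return {item: fraction awarded to A}; the remainder goes to B."""
    # 1. Provisionally give each item to whoever values it more (ties to A).
    share_a = {i: 1.0 if values_a[i] >= values_b[i] else 0.0 for i in values_a}

    def points():
        a = sum(values_a[i] * share_a[i] for i in values_a)
        b = sum(values_b[i] * (1 - share_a[i]) for i in values_b)
        return a, b

    a_pts, b_pts = points()
    if a_pts < b_pts:
        # Mirror the problem so that A is always the provisional leader.
        flipped = adjusted_winner(values_b, values_a)
        return {i: 1 - f for i, f in flipped.items()}

    # 2. Move (fractions of) A's items to B, in order of increasing ratio of
    #    A's valuation to B's, until the two point totals are equal.
    for item in sorted((i for i in values_a if share_a[i] == 1.0),
                       key=lambda i: values_a[i] / values_b[i]):
        a_pts, b_pts = points()
        gap = a_pts - b_pts
        if gap <= 0:
            break
        # Transferring a fraction x of this item shrinks the gap by
        # x * (values_a[item] + values_b[item]).
        x = min(1.0, gap / (values_a[item] + values_b[item]))
        share_a[item] -= x
    return share_a

print(adjusted_winner(
    {"house": 50, "car": 30, "boat": 20},   # A's made-up point spread
    {"house": 40, "car": 30, "boat": 30},   # B's made-up point spread
))
# -> the house stays with A, the boat with B, and about 5/6 of the car moves
#    to B, leaving each side with 55 of its own points.
```

The equal point totals at the end are the point: neither side has to trust the other's good will, only report a valuation.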
I just read the RSS feed for a Yudkowsky fix since he left Overcoming Bias.
It seems to me that a good model of the Great Recession should predict that male employment would be particularly hard-hit, even by the standard of past recessions (see https://docs.google.com/spreadsheet/ccc?key=0AofUzoVzQEE5dFo3dlo4Ui1zbU5kZ2ZENGo4UGRKbFE#gid=0). I think this probably favors the ZMP (zero marginal product) hypothesis (see http://marginalrevolution.com/marginalrevolution/2013/06/survey-evidence-for-zmp-workers.html). Edit: after normalizing the data against past recessions, I'm not so sure.