
Comment author: bluej100 24 July 2013 07:45:16AM *  3 points [-]

It seems to me that a good model of the Great Recession should predict that male employment would be particularly hard-hit, even by the standards of past recessions (see https://docs.google.com/spreadsheet/ccc?key=0AofUzoVzQEE5dFo3dlo4Ui1zbU5kZ2ZENGo4UGRKbFE#gid=0). I think this probably favors ZMP (see http://marginalrevolution.com/marginalrevolution/2013/06/survey-evidence-for-zmp-workers.html). Edit: after normalizing the data against historical context, I'm not so sure.

Comment author: Eliezer_Yudkowsky 24 July 2013 06:37:41AM 6 points [-]

As someone working in special-purpose software rather than general-purpose AI, I think you drastically overestimate the difficulty of outcompeting humans in significant portions of low-wage jobs.

Plenty of low-wage jobs have been automated away by machines over the last four centuries. You don't end up permanently, irrevocably unemployed until all the work you can do has been automated away.

Comment author: bluej100 24 July 2013 06:45:56AM 3 points [-]

Or until the supply of low-skill workers depresses the remaining low-skill wage below the minimum wage or the cost of outsourcing. I think that we are eliminating a larger proportion of low-skill jobs per year than we ever have before, but I agree that the retraining and regulation issues you pointed out are significant.

Comment author: EHeller 24 July 2013 06:25:00AM 4 points [-]

I think that a 300-IQ AI dropped on earth today would take five years to dominate scientific output.

I would estimate even longer: a lot of science's rate-limiting steps involve simple, routine work that is going to be hard to speed up. Consider the extreme cutting edge: how much could an IQ-300 AI speed up the process of physically building something like the LHC?

Comment author: bluej100 24 July 2013 06:29:10AM 2 points [-]

Yeah, exactly. Especially if you take Cowen's view that science requires increasing marginal effort.

Comment author: bluej100 24 July 2013 06:17:02AM 4 points [-]

"There's a thesis (whose most notable proponent I know is Peter Thiel, though this is not exactly how Thiel phrases it) that real, material technological change has been dying."

Tyler Cowen is again relevant here with The Great Stagnation (http://www.amazon.com/The-Great-Stagnation-Low-Hanging-ebook/dp/B004H0M8QS), though I think he considers the stagnation less cultural than Thiel does.

"We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete in the sense that there is literally no job anywhere on Earth for them to do instead of nothing, because for every task they could do there is an AI algorithm or robot which does it more cheaply."

As someone working in special-purpose software rather than general-purpose AI, I think you drastically overestimate the difficulty of outcompeting humans in significant portions of low-wage jobs.

"The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and...just moves atoms around into whatever molecular structures or large-scale structures it wants....The human species would end up disassembled for spare atoms"

I also think you overestimate the ease of fooming. Computers are already helping us design their successors (see http://www.qwantz.com/index.php?comic=2406), and even a 300-IQ AI will be starting from the human knowledge base and competing with microbes for chemical energy at the nano scale and with humans for energy at the macro scale. I think that a 300-IQ AI dropped on earth today would take five years to dominate scientific output.

Comment author: bluej100 07 June 2013 06:37:25PM 13 points [-]

The quine requirement seems to me to introduce non-productive complexity. If file reading is disallowed, why not just pass the program its own source code as well as its opponent's?
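A minimal sketch of what I mean (my own illustrative code, not the actual tournament framework; the function names and the 'C'/'D' protocol are assumptions): the harness simply hands every entrant its own source alongside its opponent's, so self-recognition needs no quine machinery.

```python
def run_bot(source, own_src, opp_src):
    """Execute a bot's source; it must define choose(own_src, opp_src) -> 'C' or 'D'."""
    env = {}
    exec(source, env)
    return env["choose"](own_src, opp_src)

# A bot that cooperates only with an exact copy of itself.
# With both sources handed in by the harness, no quine trickery is needed.
mirror_bot = """
def choose(own_src, opp_src):
    return 'C' if own_src == opp_src else 'D'
"""

defect_bot = """
def choose(own_src, opp_src):
    return 'D'
"""

def play(src_a, src_b):
    """Run one round: each bot sees its own source and its opponent's."""
    return (run_bot(src_a, src_a, src_b), run_bot(src_b, src_b, src_a))
```

Under this harness, the mirror bot cooperates with its twin and defects against the defector, exactly the behavior the quine requirement is meant to enable.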

Comment author: TGGP4 03 September 2008 11:03:21PM 4 points [-]

How might we and the paperclip-maximizer credibly bind ourselves to cooperation? Seems like it would be difficult dealing with such an alien mind.

Comment author: bluej100 07 January 2013 10:25:24PM 4 points [-]

I think Eliezer's "We have never interacted with the paperclip maximizer before, and will never interact with it again" was intended to preclude credible binding.

Comment author: Hook 17 March 2010 08:24:50PM 3 points [-]

Another test:

Could smoking during pregnancy have a benefit? Could drinking during pregnancy have a benefit? It's not necessary that someone know what the benefit could be, just that they acknowledge that nicotine and alcohol are drugs with complex effects on the body.

As for smoking, it's definitely a bad idea, but it reduces the chances of pre-eclampsia. I don't know of any benefit for alcohol.

Comment author: bluej100 08 May 2012 10:39:20PM 2 points [-]

I'll reply two years later: light drinking during pregnancy is associated with fewer behavioral and cognitive problems in children. This is probably a result of the correlation between moderate alcohol consumption and IQ and education, but it's interesting nonetheless.

Comment author: NancyLebovitz 25 April 2010 01:31:49PM 2 points [-]

Fairness and housework may not be best handled as an enumeration problem. I know a family (two adults, one child) that started by listing the necessary housework; each member then listed which chores they liked doing, which they disliked, and which they were neutral about, and they came to a low-stress agreement.

Admittedly, this takes good will, honesty, and no one in the group who's too compulsive about doing or not doing housework.

Comment author: bluej100 28 April 2010 12:42:55AM 1 point [-]

Steven Brams has devised some fair division algorithms that don't require good will: see his surplus procedure ( http://en.wikipedia.org/wiki/Surplus_procedure ) and his earlier adjusted winner procedure ( http://en.wikipedia.org/wiki/Adjusted_Winner_procedure ).
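A rough sketch of the adjusted winner idea (my own illustrative code, not Brams's reference implementation; it assumes two players, both valuation dicts covering the same items and summing to the same point total): each item first goes to whoever values it more, then items are transferred from the leader, cheapest valuation ratio first, splitting at most one item to equalize point totals.

```python
def adjusted_winner(val_a, val_b):
    """Two-player adjusted winner sketch.
    val_a, val_b: dicts mapping item -> points (same keys, same totals).
    Returns item -> (share_a, share_b), shares in [0, 1]."""
    # Phase 1: each item goes to whoever values it more (ties to A).
    shares = {i: ((1.0, 0.0) if val_a[i] >= val_b[i] else (0.0, 1.0))
              for i in val_a}
    ta = sum(val_a[i] for i in shares if shares[i][0] == 1.0)
    tb = sum(val_b[i] for i in shares if shares[i][1] == 1.0)

    # Orient so "hi" is the player currently ahead; transfers go hi -> lo.
    if ta >= tb:
        hi, lo, vh, vl, idx = ta, tb, val_a, val_b, 0
    else:
        hi, lo, vh, vl, idx = tb, ta, val_b, val_a, 1

    # Phase 2: hand over the leader's items in increasing order of how much
    # more the leader values them, splitting at most one to equalize totals.
    owned = [i for i in shares if shares[i][idx] == 1.0]
    owned.sort(key=lambda i: vh[i] / vl[i] if vl[i] else float('inf'))
    for i in owned:
        if hi <= lo:
            break
        f = min(1.0, (hi - lo) / (vh[i] + vl[i]))  # fraction transferred
        hi -= f * vh[i]
        lo += f * vl[i]
        s = [0.0, 0.0]
        s[idx] = 1.0 - f
        s[1 - idx] = f
        shares[i] = tuple(s)
    return shares
```

For example, with valuations A = {house: 60, car: 30, boat: 10} and B = {house: 40, car: 30, boat: 30}, A keeps the house and B gets the car and boat, leaving each with 60 of their own points; no good will required, only honest point reports.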

Comment author: bluej100 19 April 2010 01:23:46AM 5 points [-]

I just read the RSS feed for a Yudkowsky fix since he left Overcoming Bias.