EHeller comments on The Robots, AI, and Unemployment Anti-FAQ - Less Wrong
"There's a thesis (whose most notable proponent I know is Peter Thiel, though this is not exactly how Thiel phrases it) that real, material technological change has been dying."
Tyler Cowen is again relevant here with The Great Stagnation (http://www.amazon.com/The-Great-Stagnation-Low-Hanging-ebook/dp/B004H0M8QS), though I think he considers the stagnation less cultural than Thiel does.
"We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete in the sense that there is literally no job anywhere on Earth for them to do instead of nothing, because for every task they could do there is an AI algorithm or robot which does it more cheaply."
As someone working in special-purpose software rather than general-purpose AI, I think you drastically overestimate the difficulty of outcompeting humans in significant portions of low-wage jobs.
"The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and...just moves atoms around into whatever molecular structures or large-scale structures it wants....The human species would end up disassembled for spare atoms"
I also think you overestimate the ease of fooming. Computers are already helping us design their successors (see http://www.qwantz.com/index.php?comic=2406), and even a 300-IQ AI would be starting from the human knowledge base, competing with microbes for chemical energy at the nano scale and with humans for energy at the macro scale. I think a 300-IQ AI dropped on Earth today would take five years to dominate scientific output.
I would estimate even longer: a lot of science's rate-limiting steps involve simple routine work that is going to be hard to speed up. Think about the extreme cutting edge: how much could an IQ-300 AI speed up the process of physically building something like the LHC?
Could you give three examples? (I’m not trying to be a wise-ass, I actually thought about it and couldn’t find any solid ones.)
Have you spent much time working in labs? It's been my experience that most of the work is data collection, where the process you are collecting data on is the limiting factor. Honestly, I can't think of any lab I've been a part of where data collection was not the rate-limiting step.
Here are the first examples that popped into my head:
Consider Lenski's work on E. coli. It took from 1988 to 2010 to reach 50,000 generations (and the experiment is still going). The experimental design and data analysis phases are minimal in length compared to the time it takes E. coli to grow and reproduce.
It took three years to go from the first potential top quark events on record (1992) to actual discovery (1995). That time was just waiting for enough events to accumulate. (I'm ignoring the 20 years between prediction and first events, because maybe a superintelligence could have somehow narrowed down the mass range to explore; I'm also ignoring the time required to actually build an accelerator. That's three years of just letting the machine run.)
Depending on what you are looking for, timescales for NMR data collection run from weeks to months. If your signal is small, you might need dozens of these runs.
Also, anyone who has ever worked with a low-temperature system can tell you that keeping the damn thing working is a huge time sink, so you could add "necessary machine maintenance" to these sorts of tasks. It's not obvious to me that leak-checking your cryogenic setup to troubleshoot it can be sped up much by higher IQ.
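The Lenski example above is easy to put in numbers. A back-of-envelope sketch in Python, with one assumption not stated in the thread: the experiment's standard daily-transfer protocol yields roughly 6.6 bacterial generations per day, which is what makes the wall-clock time irreducible no matter how fast the design and analysis steps get.

```python
# Back-of-envelope: the bottleneck in the Lenski experiment is bacterial
# generation time, not thinking time. Figures: 50,000 generations reached
# between 1988 and 2010 (from the comment above).
generations = 50_000
years = 2010 - 1988            # ~22 years of continuous culturing
days = years * 365.25

per_day = generations / days
print(f"observed rate: {per_day:.1f} generations per day")

# Assumed protocol rate (~6.6 generations/day from the daily 1:100 dilution);
# even an AI that designed the experiment instantly would still have to wait:
wait_years = generations / 6.6 / 365.25
print(f"minimum culturing time at that rate: {wait_years:.0f} years")
```

The point is just that the dominant term is a biological constant: halve the design and analysis time and the total barely moves.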
No, I did not, and it shows :-)
Thank you for the examples; I see your point. I can imagine plausible-sounding ways a 300-IQ AI could accelerate some of these, but since I don't really have direct experience, that might not mean much.
That said, I notice that bluej's post mentioned the AI dominating scientific output, not necessarily increasing its rate by much. Of course, a single AI instance would not dominate science (as evidenced by the fact that the few ~200-IQ humans who existed didn't claim a big share of it), but an AI architecture that can be easily replicated might. After all, at least as far as IQ is concerned, anyone who now hires an IQ 140–160 scientist would just use an IQ-300 AI instead.
Of course, science is not just IQ. And even if IBM's Watson had IQ 300 right now, I doubt enough instances of it would be built in five years to replace all scientists, simply due to hardware costs (not to mention licensing and patent wars). Then again, I don't have a very good feel for the relative cost of humans and hardware for things the size of Google, so I don't have very high confidence either way. But 20 to 30 years would certainly change the landscape hugely.
Yeah, exactly. Especially if you take Cowen's view that science requires increasing marginal effort.