V_V comments on Will AGI surprise the world? - Less Wrong

Post author: lukeprog 21 June 2014 10:27PM


Comment author: XiXiDu 22 June 2014 09:20:19AM 1 point

Nobody was ever tempted to say, "But as the nuclear chain reaction grows in power, it will necessarily become more moral!"

We became better at constructing nuclear power plants, and nuclear bombs became cleaner. What critics are saying is that as AI advances, our control over it advances as well. In other words, the better AI becomes, the better we become at making AI work as expected. If AI became increasingly unreliable as its power grew, it would cease to be a commercially viable product.

Comment author: TheAncientGeek 22 June 2014 10:10:35AM 2 points

That's one of the standard responses to the MIRI argument, but not the same as the Artificial Philosopher response. I call it the SIRI versus MIRI response.

Comment author: Squark 22 June 2014 01:50:37PM 0 points

We became better at constructing nuclear power plants, and nuclear bombs became cleaner.

That would be small comfort if WWIII erupted, triggering a nuclear winter.

...AI would cease to be a commercially viable product.

A doomsday device doesn't have to be a commercially viable product. It just has to be used, once.

Comment author: TheAncientGeek 22 June 2014 03:24:52PM 2 points

Unless you can show it is reasonably likely that SIRI will take over the world, that is a Pascal's mugging.

Comment author: Squark 22 June 2014 03:51:13PM 3 points

I doubt about SIRI, but I think the plausibility of AI risk has already been shown in MIRI's writing and I don't see much point in repeating the arguments here. Regarding Pascal's mugging, I believe in bounded utility functions. So, yea, something with low probability and dire consequences is important up to a point. But AI risk is not even something I'd say has low probability.