Kaj_Sotala comments on Desired articles on AI risk? - Less Wrong

Post author: lukeprog 02 November 2012 05:39AM

Comment author: Kaj_Sotala 02 November 2012 04:51:56PM 7 points

From David Chalmers' paper:

We might call this assumption a proportionality thesis: it holds that increases in intelligence (or increases of a certain sort) always lead to proportionate increases in the capacity to design intelligent systems. Perhaps the most promising way for an opponent to resist is to suggest that this thesis may fail. It might fail because there are upper limits in intelligence space, as with resistance to the last premise. It might fail because there are points of diminishing returns: perhaps beyond a certain point, a 10% increase in intelligence yields only a 5% increase at the next generation, which yields only a 2.5% increase at the next generation, and so on. It might fail because intelligence does not correlate well with design capacity: systems that are more intelligent need not be better designers. I will return to resistance of these sorts in section 4, under “structural obstacles”.
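
To make the diminishing-returns scenario concrete: the per-generation gains in Chalmers' example (10%, 5%, 2.5%, ...) form a geometric series, so the resulting intelligence converges to a finite limit rather than exploding. A minimal numerical sketch of this (my own illustration using Chalmers' halving figures, not code from the paper):

    # Sketch, not from Chalmers' paper: each generation's proportional
    # gain is assumed to be half the previous one, per his example.
    intelligence = 1.0
    gain = 0.10  # the first self-redesign yields a 10% increase

    for generation in range(1, 31):
        intelligence *= 1.0 + gain
        gain /= 2.0  # diminishing returns: the next gain is half as large
        print(f"generation {generation:2d}: intelligence = {intelligence:.6f}")

    # The product (1.10)(1.05)(1.025)... converges to about 1.21, i.e.
    # roughly a 21% total gain in the limit -- growth, but no explosion.

Running it shows the sequence flattening out near 1.21 within a dozen generations, which is why diminishing returns of this sort would undermine the proportionality thesis.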

Comment author: lukeprog 02 November 2012 05:45:29PM 5 points

Also note that Chalmers (2010) says that perhaps "the most promising way to resist" the argument for intelligence explosion is to suggest that the proportionality thesis may fail. Given this, Chalmers (2012) expresses "a mild disappointment" that of the 27 authors who commented on Chalmers (2010) for a special issue of Journal of Consciousness Studies, none focused on the proportionality thesis.

Comment author: blogospheroid 03 November 2012 02:59:54AM 0 points

Thank you, Kaj and Luke! I am reading Chalmers' singularity reply essay right now.