paper-machine comments on Reply to Holden on The Singularity Institute - Less Wrong

46 Post author: lukeprog 10 July 2012 11:20PM


Comments (213)


Comment author: [deleted] 13 July 2012 01:20:37AM 4 points

How would you even pose the question of AI risk to someone in the eighteenth century?

I'm trying to imagine what comes out the other end of Newton's chronophone, but it sounds very much like "You should think really hard about how to prevent the creation of man-made gods."

Comment author: Vladimir_Nesov 13 July 2012 01:27:24AM *  3 points

I don't think it's plausible that people could stumble on the problem statement 300 years ago, but within that hypothetical, it wouldn't have been too early.

Comment author: JaneQ 13 July 2012 02:04:02PM 2 points

It seems to me that 100 years ago (or more) you would have had to consider pretty much all philosophy and mathematics relevant to AI risk reduction, as well as to the reduction of other potential risks, and any attempt to select the work particularly conducive to AI risk reduction would not have succeeded. Effort planning is the key to success.

On a somewhat unrelated note: reading the publications and this thread, there is a point of definition that I do not understand: what exactly does S.I. mean when it speaks of a "utility function" in the context of an AI? Is it a computable mathematical function over a model, such that the 'intelligence' component computes the action whose resulting world state maximizes that function?
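(The agent architecture the question describes can be written out as a short sketch. This is an editorial illustration of the "argmax over predicted outcomes" reading, not S.I.'s own definition; all names and the toy model are assumptions.)

```python
def argmax_agent(actions, world_model, utility):
    """Return the action whose predicted world state has the highest utility.

    actions     -- iterable of candidate actions
    world_model -- function: action -> predicted world state
    utility     -- computable function: world state -> real number
    """
    return max(actions, key=lambda a: utility(world_model(a)))

# Toy usage: "world states" are numbers, and utility peaks at state 10.
world_model = lambda a: a * 2          # each action doubles into a state
utility = lambda s: -abs(s - 10)       # closer to 10 is better
best = argmax_agent([1, 3, 5, 7], world_model, utility)
print(best)  # -> 5, since 5*2 == 10 maximizes utility
```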

Comment author: johnlawrenceaspden 16 July 2012 11:30:09AM 0 points

Surely "Effort planning is a key to success"?

Also, not just wanting to flash academic applause lights but out of genuine curiosity: which mathematical successes have been due to effort planning? Even in my own mundane commercial programming experience, the company that won biggest was more "This is what we'd like; go away and do it and get back to us when it's done..." than "We have this Gantt chart...".

Comment author: johnlawrenceaspden 16 July 2012 11:34:43AM 1 point

How about: "Eventually your machines will be so powerful they can grant wishes. But remember that they are not benevolent. What will you wish for when you can make a wish-machine?"

Comment author: summerstay 18 July 2012 02:06:22PM 1 point

There are very few people who would have understood in the 18th century, but Leibniz would have understood in the 17th. He underestimated the difficulty of creating an AI, as everyone did before the 1970s, but he was explicitly trying to do it.

Comment author: [deleted] 18 July 2012 05:52:04PM *  0 points

Your definition of "explicit" must be different from mine. Working on prototype arithmetic units and toying with the universal characteristic is AI research? He subscribed wholeheartedly to the ideographic myth; the most he would have been capable of is a machine that passes around LISP tokens.

In any case, based on the Monadology, I don't believe Leibniz would consider the creation of a godlike entity to be theologically possible.

Comment author: [deleted] 15 July 2012 12:28:58PM 0 points

Oh, wait... The tale of the Tower of Babel was told via chronophone by people from the future right before succumbing to uFAI!