TheAncientGeek comments on Siren worlds and the perils of over-optimised search - Less Wrong

27 Post author: Stuart_Armstrong 07 April 2014 11:00AM




Comment author: TheAncientGeek 13 May 2014 02:57:05PM *  0 points [-]

Non-AI systems uncontroversially require explicit coding. How would you characterise AI systems, then?

Comment author: RichardKennaway 14 May 2014 09:05:51AM 0 points [-]

Non-AI systems uncontroversially require explicit coding. How would you characterise AI systems, then?

XiXiDu's characterisation seems suitable enough: programs able to perform tasks normally requiring human intelligence. One might add "or superhuman intelligence", as long as one is not simply wishing for magic there. This is orthogonal to the question of how you tell such a system what you want it to do.

Comment author: TheAncientGeek 14 May 2014 09:16:42AM 0 points [-]

Indeed. But there is a how-to-do-it definition of AI, and it is kind of not about explicit coding. For instance, if a student takes an AI course as part of a degree, they are not taught explicit coding all over again. They are taught about learning algorithms, neural networks, etc.

Comment author: [deleted] 13 May 2014 05:29:02PM 0 points [-]

They definitely require some amount of explicit coding of their values. You can try to reduce the burden of such explicit value-loading through various indirect means, such as value learning, indirect normativity, extrapolated volition, or even reinforcement learning (though that's the most primitive and dangerous form of value-loading). You cannot, however, dodge the bullet.

Comment author: TheAncientGeek 13 May 2014 05:33:30PM -1 points [-]

Because?