yli comments on Genies and Wishes in the context of computer science - Less Wrong

Post author: private_messaging 30 August 2013 12:43PM


Comment author: yli 31 August 2013 07:31:38PM 4 points

When people talk about the command "maximize paperclip production" leading to the AI tiling the universe with paperclips, I interpret it to mean a scenario where first a programmer comes up with a shoddy formalization of paperclip maximization that he thinks is safe but actually isn't, and then writes that formalization into the AI. So at no point does the AI actually have to try to interpret a natural language command. Genie analogies are definitely confusing and bad to use here, because genies do take commands in English.

Comment author: David_Gerard 31 August 2013 08:43:56PM 1 point

Indeed. I would think (as someone who knows nothing of AI beyond following LW for a few years) that the likely AI risk is something that doesn't think like a human at all, rather than something that is so close to a human in its powers of understanding that it could understand a sentence well enough to misconstrue it in a manner that would be considered malicious in a human.

Comment author: private_messaging 31 August 2013 08:58:28PM 2 points

There's also the fact that you'd only get this dangerous AI, which you can't use for anything, by bolting together a bunch of magical technologies to handicap something genuinely useful, much like you obtain an unusable outcome pump by attaching a fictional through-wall 3D scanner in a place where two dangling wires, which you touch together after your mother is saved, would have worked just fine.

Comment author: David_Gerard 31 August 2013 09:21:33PM 0 points

The genie post is sort of useful as a musing on philosophy and the inexactitude of words, but is still ridiculous as a threat model.

Comment author: private_messaging 31 August 2013 10:11:10PM 0 points

"…I interpret it to mean a scenario where first a programmer comes up with a shoddy formalization of paperclip maximization that he thinks is safe but actually isn't, and then writes that formalization into the AI."

Well, you'd normally define a paperclip counter function that takes as its input the state of some really shoddy simulator of Newtonian physics and material science, and then use some "AI" optimization software to find what sort of actions within this simulator produce simulated paperclips from the simulated spool of simulated steel wire with minimum use of electricity and expensive machinery. You also have some viewer for that simulator.

You need to define some context in which to define paperclip maximization. An easy way to define a paperclip is as a piece of wire bent into a specific shape. An easy way to define a wire is as an abstract object with specific material properties, of which you have an endless supply coming out of a black box.
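The setup described above can be sketched in a few lines of code. This is a minimal toy, not anything from the original discussion: the simulator, the action names, the paperclip shape, and the brute-force "optimizer" are all invented for illustration. Wire comes endlessly out of a black box, a paperclip is a piece of wire with a fixed sequence of bends, and the optimizer searches for the action sequence that maximizes counted paperclips while minimizing electricity.

```python
import itertools

# Hypothetical target shape: a "paperclip" is wire with three 180-degree bends.
PAPERCLIP_BENDS = (180, 180, 180)

# Toy action set: pull fresh wire from the black box, bend it, cut the piece off.
ACTIONS = ["pull", "bend", "cut"]

def simulate(plan):
    """Shoddy simulator: run a plan, return (finished pieces, energy used)."""
    pieces, current, energy = [], None, 0
    for act in plan:
        energy += 1  # every action costs one unit of electricity
        if act == "pull":
            current = []  # fresh wire from the endless spool
        elif act == "bend" and current is not None:
            current.append(180)
        elif act == "cut" and current is not None:
            pieces.append(tuple(current))
            current = None
    return pieces, energy

def count_paperclips(pieces):
    """The paperclip counter function, defined over simulator state."""
    return sum(1 for p in pieces if p == PAPERCLIP_BENDS)

def best_plan(max_len=6):
    """Brute-force 'AI': maximize clips, break ties by minimum energy."""
    best_score, best = (0, 0), ()
    for n in range(1, max_len + 1):
        for plan in itertools.product(ACTIONS, repeat=n):
            pieces, energy = simulate(plan)
            score = (count_paperclips(pieces), -energy)
            if score > best_score:
                best_score, best = score, plan
    return best
```

The point of the sketch is that the objective is defined entirely over the simulator's state: the optimizer "maximizes paperclips" only in the sense that some counter over simulated wire goes up, which is exactly where a shoddy formalization can diverge from what the programmer intended.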