John_Maxwell_IV comments on A taxonomy of Oracle AIs - Less Wrong

Post author: lukeprog 08 March 2012 11:14PM




Comment author: John_Maxwell_IV 09 March 2012 07:05:55AM 6 points

It seems like the worst it could do is misunderstand your question and give you a recipe for gray goo when you really wanted a recipe for a cake. Bonus points if the gray goo recipe looks a lot like a cake recipe.

It seems to me that people thinking about FAI often assume a best-case scenario in which all intelligent people are Less Wrong users who see Friendliness as paramount, and discard any solution that doesn't have above a 99.9% chance of succeeding. But really we want an entire stable of solutions, depending on how potential UFAI projects are going, right?

Comment author: Viliam_Bur 09 March 2012 09:29:23AM 12 points

Bonus points if the gray goo recipe looks a lot like a cake recipe.

More bonus points if the recipe really generates a cake... which later with some probability turns into the gray goo.

Now you can have your cake and it will eat you too. :D

Comment author: Nisan 09 March 2012 05:57:16PM 1 point

I don't believe that a gray goo recipe can look like a cake recipe. I believe there are recipes for disastrously harmful things that look like recipes for desirable things; but is a goal-less Question Answerer more likely to produce such a deceitful recipe than a human working alone is to produce one by accident?

The problem of making the average user as prudent as a Less Wrong user seems much easier than FAI. Average users already know to take the results of Wolfram Alpha and Google with a grain of salt. People working on synthetic organisms and nuclear radiation already know to take precautions when doing anything for the first time.

Comment author: John_Maxwell_IV 09 March 2012 09:47:58PM 1 point

My point about assuming the entire world were Less Wrong users is that there are teams, made up of people who are not Less Wrong users, who will develop UFAI if we wait long enough. So a quick and slightly dirty plan (like building this sort of potentially dangerous Oracle AI) may beat a slow and perfect one.

Comment author: Nisan 09 March 2012 11:15:43PM 1 point

Oh! I see. That makes sense.