Lapsed_Lurker comments on Dreams of Friendliness - Less Wrong

Post author: Eliezer_Yudkowsky 31 August 2008 01:20AM


Comment author: Mestroyer 06 June 2012 02:50:14AM *  2 points

Holden Karnofsky thinks superintelligences with utility functions can be made out of programs that list options by rank without making any sort of value judgement (basically, programs that answer a question), and then pick the one with the most utility.

Eliezer Yudkowsky thinks that a superintelligence that would answer a question would have to have a question-answering utility function that makes it decide to answer the question, or to pick paths that lead to obtaining the answer and then answer it.

Says Allison: All digital logic is made of NOR gates!

Says Bruce: Nonsense, it's all made of NAND gates!

Allison: Look, A NAND B is really just ((A NOR A) NOR (B NOR B)) NOR ((A NOR A) NOR (B NOR B))

Bruce: Look, A NOR B is really just ((A NAND A) NAND (B NAND B)) NAND ((A NAND A) NAND (B NAND B))
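Both constructions really do hold, and a quick truth-table check (a sketch added here, not part of the original thread) verifies them:

```python
# Verify that NAND can be built from NOR gates and vice versa,
# using exactly the constructions quoted above.

def nor(a, b):
    return not (a or b)

def nand(a, b):
    return not (a and b)

def nand_from_nor(a, b):
    # A NAND B == ((A NOR A) NOR (B NOR B)) NOR ((A NOR A) NOR (B NOR B))
    x = nor(nor(a, a), nor(b, b))
    return nor(x, x)

def nor_from_nand(a, b):
    # A NOR B == ((A NAND A) NAND (B NAND B)) NAND ((A NAND A) NAND (B NAND B))
    x = nand(nand(a, a), nand(b, b))
    return nand(x, x)

# Exhaustively check all four input combinations for each identity.
for a in (False, True):
    for b in (False, True):
        assert nand_from_nor(a, b) == nand(a, b)
        assert nor_from_nand(a, b) == nor(a, b)
```

The trick in both directions is the same: a gate fed its own input twice acts as a NOT, which is enough to reconstruct the other gate via De Morgan's laws.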

(Edited because my lines of text got run together)

Edited again: I'm not trying to say either is a workable path to AI-completeness, just that showing you can make some category of device X (classified by ultimate function, ignoring internal workings) out of devices of category Y doesn't mean that Xs have to be made out of Ys.

Comment author: Lapsed_Lurker 20 August 2012 08:58:47AM *  1 point

Holden Karnofsky thinks superintelligences with utility functions can be made out of programs that list options by rank without making any sort of value judgement (basically, programs that answer a question), and then pick the one with the most utility.

Isn't 'listing by rank' 'making a (value) judgement'?