Caledonian2 comments on Could Anything Be Right? - Less Wrong

Post author: Eliezer_Yudkowsky 18 July 2008 07:19AM



Comment author: Caledonian2 19 July 2008 11:05:20AM -1 points

Let's see if I can't anticipate what Eliezer is reacting to... and post a link to ni.codem.us in the process.

If I am to accept a proposition, I am going to ask WHY.

And that is precisely why the entire "we'll just program the AI to value the things we value" schtick isn't going to work. If the AI is going to be flexible enough to be a functional superintelligence, it's going to be able to question and override built-in preferences.

Humans may wish to rid themselves of preferences and desires they find objectionable, but there's really nothing they can do about it. An AI has a good chance of being able to - carefully, within limits - redesign itself. Ridding itself of imperatives is probably going to be relatively easy. And isn't the whole point of the Singularity concept that technological development feeds on itself? Self-improving intelligence requires criteria for judging what counts as improvement, and a sufficiently bright intelligence is going to be able to figure those criteria out on its own.

ni.codem.us, which I believe was established by Nick Tarleton, permits discussions between members that would be incompatible with the posting rules here, and additionally serves as a hedge against deletion or prejudicial editing. If you want to be sure your comment will say what you argued for, rather than what it's been edited to say, placing it there is probably a good idea.